Allowing a formal proof system to self-improve while avoiding Löbian obstacles.

https://www.lesswrong.com/posts/hchfRj4qa4hFZxhKM/allowing-a-formal-proof-system-to-self-improve-while

[Epistemic status: I can't spot a mistake, but I am not confident that there isn't one; if you find anything, please say so. I'm posting largely because the community value of a new good idea is larger than any harm that might be caused by a flawed proof.]

Suppose you have an automatic proof checker. It is connected to a source of statements and purported proofs, which are sometimes correct and sometimes not. The proof checker wants to reject all false proofs, and accept as many of the true proofs as possible. It also wants to be able to update its own proof framework.

Define \mathbb S to be a set of statements in a particular formalism, say those that are grammatically well formed in PA. Let \mathbb B be the set of finite sequences over some alphabet of symbols. Let \mathbb{L}=\left\{\mathbb S\times\mathbb B \rightarrow \{\top,\bot\}\right\} be the set of programs mapping statement-proof pairs to \{\top,\bot\}, and let \mathbb V\subset \mathbb L be the set of programs with the property that

\forall v\in \mathbb V:\forall s\in \mathbb S:\left(\exists b\in\mathbb B:v(s,b)\right)\implies s

In other words, \mathbb V is the set of all programs that never prove false statements. We should never leave \mathbb V or need to talk about any program not in it. For v \in \mathbb V, write v[s] to mean \exists b\in\mathbb B:v(s,b), i.e. v proves s.

A simple setup would consist of a starting program p_1\in \mathbb V and a rule that says: if p_1[p_2[s]\implies s], then you can add p_2 to your list of trusted provers. That is, if p_1 proves the soundness of p_2, then you can accept any statement when given a proof of it in p_2.
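As a concrete (if informal) illustration, here is a minimal Python sketch of this simple setup. The names (Statement, Witness, TrustedCheckers, soundness_statement) are my own placeholders rather than anything from the post, and encoding statements as strings is purely for illustration.

from typing import Callable

Statement = str                                  # an element of S
Witness = str                                    # an element of B (a candidate proof)
Prover = Callable[[Statement, Witness], bool]    # an element of L

def soundness_statement(p2_name: str) -> Statement:
    # Encodes "forall s: p2[s] => s", i.e. the soundness of p2, as a
    # sentence that a trusted checker can be asked to verify.
    return f"forall s: ({p2_name}[s]) => s"

class TrustedCheckers:
    def __init__(self, p1: Prover, p1_name: str):
        # Start from a single program assumed to lie in V.
        self.trusted = {p1_name: p1}

    def accept(self, name: str, s: Statement, b: Witness) -> bool:
        # Accept s only if an already-trusted checker verifies (s, b).
        return name in self.trusted and self.trusted[name](s, b)

    def add_prover(self, base: str, p2: Prover, p2_name: str, b: Witness) -> None:
        # The simple rule: if the base checker certifies p2's soundness,
        # then p2 joins the list of trusted provers.
        if self.accept(base, soundness_statement(p2_name), b):
            self.trusted[p2_name] = p2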
The Löbian obstacle is that p_2 must be strictly weaker than p_1: p_1 can prove any statement that p_2 can, but p_1 can prove the soundness of p_2 while p_2 can't prove its own soundness. This means that each trusted prover has to be strictly weaker than the one that generated it. You could start with PA+3^^^3 and say that a few weakenings aren't a problem, but that isn't an elegant solution.

Note: you can't get around this problem using cycles. Suppose

a[b[s]\implies s]
b[c[s]\implies s]

This would imply

a[b[c[s]\implies s]\implies (c[s]\implies s)]
a[b[c[s]\implies s]]
a[c[s]\implies s]

So any cycle could be shrunk by 1 and, inductively, shrunk to a self-trusting system.

I propose instead that you use the following rule: if p_1[p_2[s]\implies p_1[s]], then accept any future proofs that are accepted by p_2, and give p_2 all rights given to p_1, including taking p_2[p_3[s]\implies p_2[s]] to mean that p_3 is accepted, and so on recursively. If p_1 is sound, then p_1[p_2[s]\implies p_1[s]]\implies (p_2[s]\implies p_1[s]), so anything that is proven by p_2 can be proven by p_1. If p_1 isn't sound, it can prove anything anyway.

Note that if p_2 is a straightforward isomorphism of p_1, then they are easily proven equivalent. However, if p_2 says "run Turing machine T for length(b) steps; if it doesn't halt, check whether b is a valid proof of s; if T does halt, return \top", then p_2 could be equivalent to p_1 from a second-order-logic perspective, but p_1 can't prove that T never halts. Still, this rule allows us to prove anything provable in p_1 and only things provable in p_1, while also allowing the user to add shorthands and syntactic sugar. Note that the "proof" b could just be the number of processor cycles you want to run a proof search for before giving up. In fact, this framework lets you swap and mix between hand-proved results and automatically proved results (with a search-time cutoff) as you see fit.

This formalism allows any system containing a proof checker to automatically upgrade itself to a version that has the same Gödelian equivalence class, but is more suited to the hardware available.
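To make the last two examples concrete, here is a hedged Python sketch of two such reinterpreted checkers. pa_check, enumerate_proofs, and t_halts_within are hypothetical placeholders (standing in for p_1, a bounded proof search, and a step-counting simulation of T); none of them appear in the post.

def pa_check(s, b):
    # Placeholder for the trusted base checker p_1: does b prove s in PA?
    raise NotImplementedError

def enumerate_proofs(steps):
    # Placeholder: return the candidate proof strings examined within
    # the first `steps` steps of a proof search.
    return []

def t_halts_within(n):
    # Placeholder: does the Turing machine T halt within n steps?
    return False

def search_prover(s, b):
    # Treat the "proof" b as a time budget: run a proof search for int(b)
    # steps and accept s if the search finds an ordinary proof. Everything
    # this checker accepts is backed by a proof that pa_check accepts, so
    # p_1 should be able to prove search_prover[s] => p_1[s].
    return any(pa_check(s, candidate) for candidate in enumerate_proofs(int(b)))

def tm_wrapped_prover(s, b):
    # The Turing-machine example: run T for length(b) steps. If T has not
    # halted, check b as an ordinary proof; if it has halted, accept
    # everything. If T never actually halts, this agrees with p_1 exactly,
    # but p_1 cannot prove that T never halts.
    if t_halts_within(len(b)):
        return True
    return pa_check(s, b)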

Comment

https://www.lesswrong.com/posts/hchfRj4qa4hFZxhKM/allowing-a-formal-proof-system-to-self-improve-while?commentId=i2Eqjd7ahNfxqwnb5

I've spotted a mistake, I think: the relationship inside the first prover should be an if-and-only-if relationship. This is because if p_2 says everything is a theorem, then p_2[s] \Longrightarrow p_1[s] trivially holds. Thus, the condition to give p_2 "proof privileges" should be p_1\left[ p_2[s] \iff p_1[s] \right]. There might still be some problems from the modal logic; I'll check when I have some more time. I can't believe that I was the first one to spot this. The post also has very few upvotes. Did nobody who knows this stuff see this and spot the mistake immediately? Was the post dismissed to start with? (In my opinion unfairly, but perhaps not.)

https://www.lesswrong.com/posts/hchfRj4qa4hFZxhKM/allowing-a-formal-proof-system-to-self-improve-while?commentId=rp4hJC6Ki3W4H4wsC

Let f map each prover p1 to one that adds (at least) the rule of inference "If _(p1) proves that _(p1) proves all that p2 does, then f(p1) proves all that p2 does."

It is unclear which blanks are filled with f and which with the identity function to match your proposal. The last f must be there because we can only have the new prover prove additional things. If all blanks are filled with f, f(p1) is inconsistent by Löb’s theorem and taking p2 to be inconsistent.

Comment

https://www.lesswrong.com/posts/hchfRj4qa4hFZxhKM/allowing-a-formal-proof-system-to-self-improve-while?commentId=C8fqiPPWHStTSMyue

Both blanks are the identity function. Here is some pseudocode:

class Prover:
    def __init__(self):
        # Start with a single trusted proof checker.
        self.ps = [PA]

    def prove(self, p, s, b):
        # Only an already-trusted checker may be used to verify a proof.
        assert p in self.ps
        return p(s, b)

    def upgrade(self, p1, p2, b):
        # If the trusted checker p1 verifies (with witness b) that anything
        # p2 can prove, p1 can prove, then p2 becomes trusted as well.
        if self.prove(p1, "forall s: (exists b2: p2(s,b2)) => (exists b1: p1(s,b1))", b):
            self.ps.append(p2)

prover = Prover()
prover.upgrade(PA, nPA, proof)

Where PA is a specific Peano arithmetic proof checker, nPA is another proof checker, and 'proof' is a proof that anything nPA can prove, PA can prove too.
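One thing this pseudocode makes explicit is the recursive part of the rule in the post: once nPA has been appended to self.ps, it has the same rights as PA and can itself be used as the base checker for a further upgrade. A hedged usage sketch, where nnPA and proof2 are hypothetical (a third checker, and a proof nPA accepts that anything nnPA can prove, nPA can prove too):

# nPA is now trusted, so it can induct a further checker in turn.
prover.upgrade(nPA, nnPA, proof2)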

Comment

PA+1 can already provide this workflow: given that nPA proves s and that PA proves all that nPA does, we get that PA can prove s, and then use the +1 to conclude s. And a further nnPA can then still be handled by PA+1.
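Spelling that reasoning out as a short derivation, assuming "PA+1" here means PA plus the soundness schema PA[s]\implies s for every sentence s (written in the post's bracket notation):

1. nPA[s] (given: nPA proves s)
2. nPA[s]\implies PA[s] (given: PA proves all that nPA does)
3. PA[s] (from 1 and 2)
4. PA[s]\implies s (the "+1": an instance of the soundness schema for PA)
5. s (from 3 and 4)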