Written in response to this David Deutsch presentation. Hoping it will be comprehensible enough to the friend it was written for to be responded to, and maybe to a few other people too.

Deutsch says things like "theories don’t have probabilities" ("there’s no such thing as the probability of it") (content warning: every bayesian who watches the following two minutes will hate it). I think it’s fairly clear from this that he doesn’t have solomonoff induction internalized; he doesn’t know how many of his objections to bayesian metaphysics it answers. In this case, I don’t think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all of them. That probably would sound like a good thing to do to most popperians, but they often seem to have the wrong attitudes about how (collective) induction happens and might not be prepared to do it. I am getting the sense that critrats frequently engage in a terrible Strong Opinionatedness where they let themselves wholly believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem. I’ve mentioned this before: I think they attribute too much of the inductive process to blind selection and evolution, and under-recognise the major accelerants of it that we’ve developed, the extraordinarily sophisticated, to extend a metaphor, managed mutation, sexual reproduction, and, to depart from the metaphor, conscious, judicious, uncertain but principled design that the discursive subjects engage in, which is now primarily driving it.

He generally seems to have missed some sort of developmental window for learning bayesian metaphysics, or something; the reason he thinks it doesn’t work is that he visibly hasn’t tied together a complete sense of the way it’s supposed to. Can he please study the solomonoff inductor and think more about how priors fade away as evidence comes in, and about the inherent subjectivity a person’s judgements must necessarily have as a consequence of their knowing different subsets of the evidence base, and how there is no alternative to that. He is reaching towards a kind of objectivity about probabilities that finite beings cannot attain.

His discussion of the alignment problem defies essential decision theory: he thinks that values are like tools, that they can weaken their holders if they are in some sense ‘incorrect’. That Right Makes Might. Essentially Landian worship of Omohundro’s Monster from a more optimistic angle, that the monster who rises at the end of a long descent into value drift will resemble a liberal society that we would want to build. Despite this, his conclusion that a correct alignment process must have a value learning stage agrees with what the people who have internalised decision theory are generally trying to do (Stuart Russell’s moral uncertainty and active value learning, MIRI’s CEV process).

I’m not sure who this is all for! Maybe it’s just a point for his own students? Or for governments and their defense technology programmes, who may not be thinking enough, but when they do think, would tend to prefer to think in terms of national character and liberal progress? So, might that be why we need Deutsch? To speak of cosmopolitan, self-correcting approaches to AGI alignment in those fairly ill-suited terms, for the benefit of powers who will not see it in the terms of an engineering problem? I would like to ask him if he maintains a distinction between values and preferences, morality and (well formed) desire.
I prefer schools that don’t. But I’ve never asked those who do whether they have a precise account of what moral values are, as a distinct entity from desires. Maybe they have a good and useful account of values, on which values somehow reliably serve the aggregate of our desires, that they just never explain because they think everyone knows it intuitively, or something. I don’t. They seem too messy to prove correctness of.

Error: Prediction that humans may have time to integrate AGI-inspired mental augmentation horse exoskeletons in the short span of time between the creation of AGI and its accidental release and ascension. Neuralink will be useful, but not for that. We are stones milling about at the base of what we should infer to be a great mountain of increasing capability, and as soon as we learn to make an agent that can climb the mountain at all, it will strengthen beyond our ken long before we can begin to figure out where to even plug our prototype cognitive orthotics in.

I think quite a lot of this might be a reaction to illiberal readings of Bostrom’s Black Ball paper (he references it pretty clearly)… I don’t know if anyone has outwardly posed such readings. Bostrom doesn’t really seem eager to go there and wrestle with the governance implications himself (one such implication: a transparent society of mass surveillance; another: the period of the long reflection, a calm period of relative stasis), but it’s understandable that Deutsch would want to engage it anyway even if nobody’s vocalizing it; it’s definitely a response that is lurking there.

The point about how a complete cessation of the emergence of new extinction risks would be much less beautiful than an infinite but convergent, ever-decreasing series of risks is interesting. I’m not convinced that those societies are going to turn out to look all that different in practice? But I’ll try to carry it with me.
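To spell out the arithmetic behind that contrast (my own toy numbers, nothing from the talk): if the per-century extinction risk keeps shrinking fast enough for the total to converge, indefinite survival has positive probability even though new risks never stop arriving.

```latex
% Illustrative sketch, treating the risk in each century as independent.
% Let p_n be the extinction risk faced in century n, decreasing geometrically:
\[
  p_n = p_1 \cdot 2^{-(n-1)}, \qquad \sum_{n=1}^{\infty} p_n = 2p_1 < \infty .
\]
% Then the probability of surviving every century is strictly positive,
\[
  \prod_{n=1}^{\infty} (1 - p_n) > 0 ,
\]
% since an infinite product of terms in (0,1) is nonzero exactly when the
% corresponding sum of the p_n converges. So "risks keep arriving forever"
% and "we most likely survive forever" are compatible.
```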
Hi, Deutsch was my mentor. I run the discussion forums where we’ve been continuously open to debate and questions since before LW existed. I’m also familiar with Solomonoff induction, Bayes, RAZ and HPMOR. Despite several attempts, I’ve been broadly unable to get (useful, clear) answers from the LW crowd about our questions and criticisms related to induction. But I remain interested in trying to resolve these disagreements and to sort out epistemological issues.
Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is? And if you’re interested, have you read FoR and BoI?
I’ll begin with one comment now:
~All open, public groups have lots of low quality self-proclaimed members. You may be right about some critrats you’ve talked with or read.
But that is not a CR position. CR says we only ever believe theories tentatively. We always know they may be wrong and that we may need to reconsider. We can’t 100% count on ideas. Wholly believing things is not a part of CR.
If by "wholely" you mean with a 100% probability, that is also not a CR position, since CR doesn’t assign probabilities of truth to beliefs. If you insist on a probability, a CRist might say "0% or infinitesimal" (Popper made some comments similar to that) for all his beliefs, never 100%, while reiterating that probability applies to physical events so the question is misconceived.
Sometimes we act, judge, decide or (tentatively) conclude. When we do this, we have to choose something and not some other things. E.g. it may have been a close call between getting sushi or pizza, but then I chose only pizza and no sushi, not 51% pizza and 49% sushi. (Sometimes meta/mixed/compromise views are appropriate, which combine elements of rival views. E.g. I could go to a food court and get 2 slices of pizza and 2 maki rolls. But then I’m acting 100% on that plan and not following either original plan. So I’m still picking a single plan to wholly act on.)
I’m glad to hear from you. I had an interesting discussion about induction with my (critrat) friend Ella Hoeppner recently, and I think we arrived at some things. It went roughly like this: I stumbled on some quotes of DD (from this, which I should read in full at some point) criticizing, competently, the principle of induction (which is, roughly, "what was, will continue"). My stance is that it is indeed underspecified, but that Solomonoff induction pretty much provides the rest of the specification. Ella’s response to solomonoff induction was "but it too is underspecified, because the programming language that it uses is arbitrary". I replied with "every language has a constant-sized interpreter specification, so in the large they all end up giving values of similar sizes" (I sketch the formal statement of this at the end of this comment), but I don’t really know how to back up there being some sort of reasonable upper bound on interpreter sizes. Then we ran into the fact that there is no ultimate metaphysical foundation for semantics: why are we grounding semantics on a thing like turing machines? I just don’t know. The most meta metalanguage always ends up being english, or worse, demonstration; show the learner some examples and they will figure out the rules without using any language at all, and people always seem reliant on receiving demonstrations at some point in their education. I think I left it at… it’s easy for us to point at the category of languages that are ‘computerlike’ and easy to implement with simple things like transistors; that is, for some reason, what we use as a bedrock. We just will. Maybe there is nothing below there. I can’t see why we should expect there to be. We will just use what works.

Alongside that, somewhat confusing the issue, there is another definition of induction: induction is whatever cognitive process takes a stream of observations of a phenomenon and produces theories that are good for anticipating future observations. I suppose we could call that "theorizing", if the need were strong. I’ve heard from some critrats, "there is no such thing as inductive cognition, it’s just evolution" (lent a small bit of support by DD quotes like "why is it still conventional wisdom that we get our theories by induction?" (the answer may be: because "induction" is sometimes defined to be whatever kind of thing theories come out of)). If they mean it the way I understood it: if evolution performs the role of an inductive cognition, then evolution is an inductive cognition (collectively); there is such a thing as evolution, so there is such a thing as inductive cognition. (I then name induction-techne, the process of coming up with theories that are useful not just for predicting the phenomena, but for manipulating the phenomena. It is elicited by puzzle games like The Witness (recommended), and the games Ella and I are working on, after which we might name their genre, "induction games"? (the "techne" is somewhat implied by "game"’s suggestion of interactivity).)
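To back up the interpreter point above, the formal statement I was gesturing at is the invariance theorem; a sketch, with K_U meaning shortest-description length relative to language/machine U:

```latex
% Invariance theorem (sketch): for any two universal languages/machines U, V
% there is a constant c_{UV} -- roughly the length of an interpreter for V
% written in U -- such that for every string x,
\[
  K_U(x) \;\le\; K_V(x) + c_{UV} .
\]
% The constant depends on the pair of languages but not on x, so the two
% complexity measures (and the 2^{-K} priors they induce) agree up to a
% bounded factor. It doesn't settle the deeper worry, though: nothing in the
% theorem bounds c_{UV} or says which language to start from.
```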
A place to start is considering what problems we’re trying to solve.
Epistemology has problems like:
What is knowledge? How can new knowledge be created? What is an error? How can errors be corrected? How can disagreements between ideas be resolved? How do we learn? How can we use knowledge when making decisions? What should we do about incomplete information? Can we achieve infallible certainty (how?)? What is intelligence? How can observation be connected to thinking? Are all (good) ideas connected to observation or just some?
Are those the sorts of problems you’re trying to solve when you talk about Solomonoff induction? If so, what’s the best literature you know of that outlines (gives high level explanations rather than a bunch of details) how Solomonoff induction plus some other stuff (it should specify what stuff) solves those problems? (And says which remain currently unsolved problems?)
(My questions are open to anyone else, too.)
It’s worse than that: SI doesn’t even try to build a meaningful ontological model.
Why can’t it be both?
So the first definition is what? A mysterious process where the purely passive reception of sense data leads to hypothesis formation.
The critrat world has eloquent arguments against that version of induction, although no one has believed in it for a long time.
Well, only sometimes.
CR doesn’t have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do. And it doesn’t have much motivation to distinguish them; being sweepingly anti-inductive is their thing. They believe that they believe they hold all beliefs tentatively... but that doesn’t include the anti-inductive belief.
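A minimal example of that second kind of induction, the kind a simple algorithm can do (a sketch using Laplace's rule of succession; nothing CR-specific, and no explanatory theory involved):

```python
def predict_next(observations):
    """P(next observation == 1), from nothing but counts of past 0/1 observations."""
    ones = sum(observations)
    return (ones + 1) / (len(observations) + 2)  # Laplace's rule of succession

print(predict_next([1, 1, 1, 0, 1]))  # ~0.71: mostly 1s so far, so probably 1 next
```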
Looks like the simple organisms and algorithms didn’t listen to him!
Sigh... that takes me back about 11 years. Yes, induction is always strawmanned, the Popper-Miller paper is gold-plated truth, etc.
Yes, if you are going to claim that it solves the problem of attaching objective probabilities to ontological theories... or theories for short. If what it actually delivers is complexity measures on computer programs, it would be honest to say so.
http://fallibleideas.com/discussion
I get that sense about them and about LWrats, too.
Solomonoff induction isn’t that useful. Apart from the computability issues, its "theories" aren’t what we normally refer to as theories.
Mm I feel like there’s a distinction, but I’m not sure how to articulate it… maybe… LWrats fear being wrong a bit more, while still going on to be boldly excitingly wrong as often as they can get away with, which tends to look a lot more neurotic.
Not actually being useful is a fatal flaw. "Here is the perfect way of doing science. First you need an infinite brain...".
Ah, sorry, I was definitely unclear. I meant "using the concept of solomonoff induction (to reiterate: as a way of understanding what occam’s razor is) and its derivatives every day to clarify our principles in philosophy of science" rather than "using solomonoff induction, the process itself as specified, to compute exact probabilities for theories". No one should try to do the latter; it will give you a headache.
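Something like this toy is what I mean by using the concept day to day (a sketch: the hypotheses and their "description lengths" are made up by me, and real Solomonoff induction is uncomputable; the 2^-length prior is the Occam's razor part, ordinary Bayesian updating does the rest):

```python
def h_all_zeros(n):   return 0
def h_alternating(n): return n % 2
def h_all_ones(n):    return 1

# (predictor, assumed description length in bits) -- lengths are illustrative
hypotheses = [(h_all_zeros, 5), (h_alternating, 7), (h_all_ones, 5)]

def posterior(data):
    weights = []
    for h, length in hypotheses:
        prior = 2.0 ** -length
        # deterministic hypotheses: they either fit all the data or are ruled out
        fits = all(h(i) == bit for i, bit in enumerate(data))
        weights.append(prior if fits else 0.0)
    total = sum(weights)
    return [w / total for w in weights] if total else weights

print(posterior([0, 1, 0, 1]))  # all mass ends up on the alternating hypothesis
```

Before any data the shorter hypotheses get more weight; after data, whatever still fits survives, and among the survivors the shortest dominates. That qualitative picture, not any exact number, is the thing I actually use.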
But then DD’s comment about theories not having objective probabilities was basically correct.
both CR and Bayesianism answer Qs about knowledge and judging knowledge; they’re incompatible b/c they make incompatible claims about the world but overlap.
CR says that truth is objective
explanations are the foundation of knowledge, and it’s from explanations that we gain predictive power
no knowledge is derived from the past; that’s an illusion b/c we’re already using pre-existing explanations as foundations
new knowledge can be created to explain things about the past we didn’t understand, but that’s new knowledge in the same way the original explanation was once new knowledge
e.g. axial tilt theory of seasons; no amount of past experience helped understand what’s really happening, someone had to make a conjecture in terms of geometry (and maybe Newtonian physics too)
when we have two explanations for a single phenomenon they’re either the same, both wrong, or one is "right"
"right" is different from "true"—this is where fallibilism comes in (note: I don’t think you can talk about CR without talking about fallibilism; broadly they’re synonyms)
taken to its logical conclusions it means roughly that all our theories are wrong in an absolute sense and we’ll keep discovering more and better explanations of the universe
this includes ~everything: anything we want to understand requires an explanation: quantum physics, knowledge creation, computer science, AGI, how minds work (which is actually the same general problem as AGI) - including human minds, economics, why people choose particular ice-cream flavors
DD suggests in the beginning of infinity that we should rename scientific theories scientific "misconceptions" because that’s more accurate
anyone can be mistaken on anything
there are rational ways to choose exactly one explanation (or zero if none hold up)
if we have a reason that some explanation is false, then there is no amount of "support" which makes it less likely to be false. (this is what is meant by ‘criticism’). no objectively true thing has an objectively true reason that it’s false.
so we should believe only those things for which there are no unanswered criticisms
this is why some CR ppl are insistent on finishing and concluding discussions—if two people disagree then one must have knowledge of why the other is wrong, or they’re both wrong (or both don’t know enough, etc)
to refuse to finish a discussion is either to deny the counterparty the opportunity to correct an error (which was evidently important enough to start the discussion about), which is anti-knowledge and irrational, or to deny that you have an error (or that the error can be corrected), which is also anti-knowledge and irrational.
there are maybe things to discuss about practicality, but even if there are good reasons to drop conversations for practical purposes sometimes, that doesn’t explain why it happens so much.
that was less focused on differences/incompatibilities than I had in mind originally but hopefully it gives you some ideas.
On the note of qualia (providing in case it helps) DD says this in BoI when he first uses the word:
Knowledge exists relative to problems
Whether knowledge *applies* or is correct or not can be evaluated rationally because we have goals (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
Roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and have no alternatives which have no known unanswered criticisms
something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
note: a criticism of an idea is itself an idea, so it can be criticised (i.e. the first criticism is refuted by a second criticism) - this can be recursive and potentially go on forever (tho we know ways to make sure it doesn’t) - see the sketch below.
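a tiny sketch of that recursive structure (the criticism graph here is made up, and cycles/mutual refutation are just flagged, not resolved):

```python
# made-up example: C1 criticises idea A, C2 criticises C1, C2 is unchallenged
criticisms = {
    "A":  ["C1"],
    "C1": ["C2"],
    "C2": [],
}

def unrefuted(idea, seen=frozenset()):
    """An idea stands if none of its criticisms themselves stand."""
    if idea in seen:
        return True  # cycle (mutual refutation): left unresolved in this sketch
    return not any(unrefuted(c, seen | {idea}) for c in criticisms.get(idea, []))

print(unrefuted("A"))  # True: A's only criticism C1 is itself refuted by C2
```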
Most fraught ideas are mutually refuted... A can be refuted assuming B, B can be refuted using A.
It’s not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it. If a theory doesn’t offer some refutation for competing theories then that fact is (potentially) a criticism of that theory. We can also come up with criticisms that are sort of independent of where they came from: a new criticism might originate from idea A, yet idea A can be wrong in some way without implying the criticism was also wrong. It doesn’t make either theory A or B more likely or something when this happens; it just means there are two criticisms, not one. And it’s always possible both are wrong, anyway.
It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
Do you have a concrete example?
Kind of, but "everything is wrong" is vulgar scepticism.
The other options need to be acceptable to both parties!
I don’t see how that is an example, principally because it seems wrong to me.
The relevance is that CR can’t guarantee that any given dispute is resolvable.
But I don’t count it as an example, since I don’t regard it as correct, let alone as a valid argument with the further property of floating free of questionable background assumptions.
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability strictly between 0 and 1. So "induction must be based on bivalent logic" is an assumption.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Neither. You don’t have to treat epistemology as a religion.
Firstly, epistemology goes first. You don’t know anything about reality without having the means to acquire knowledge. Secondly, I didn’t say that the PNC was actually false.
So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
No, it proposes a criticism of your argument… the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
Theoretically, if CR consists of a set of claims, then refuting one claim wouldn’t refute the rest. In practice, critrats are dogmatically wedded to the non-existence of any form of induction.
I don’t particularly identify as an inductivist, and I don’t think that the critrat version of inductivism is what self-identified inductivists believe in.
Conclusion from what? The conclusion will be based on some deeper assumption.
What anyone else thinks? I am very familiar with popular CR since I used to hang out in the same forums as Curi. I’ve also read some of the great man’s works.
Anecdote time: after a long discussion about the existence of any form of induction, on a CR forum, someone eventually popped up who had asked KRP the very question, after bumping into him at a conference many years ago, and his reply was that it existed, but wasn’t suitable for science.
But of course the true believing critrats weren’t convinced by Word of God.
The point is that every claim in general depends on assumptions. So, in particular, the critrats don’t have a disproof of induction that floats free of assumptions.
I just discovered he keeps a wall of shame for people who left his forum: http://curi.us/2215-list-of-fallible-ideas-evaders Are you on this wall? I am uncomfortable with this practice. I think I am banned from participating in curi’s forum now anyway due to my comments here, so it doesn’t affect me personally, but it is a little strange to have this list with people’s personal information up.
Source?
Which forums? Under what name?
I’m happy to do this. On the one hand I don’t like that lots of replies create more pressure to reply to everything, but I think we’ll probably be fine focusing on the stuff we find more important if we don’t mind dropping some loose ends. If they become relevant we can come back to them.
Aye, it’s kind of a definition of it, a way of seeing what it would have to mean. I don’t know if I could advocate any other definitions than the one outlined here.
Where can I learn more about Critical Rationalism? (Not from curi and his group as I am not welcomed there and tbh after seeing this wall of shame: http://curi.us/2215-list-of-fallible-ideas-evaders I am glad I never posted any personal information)
A lot of people seemed to like Beginning of Infinity.