Deflationism isn’t the solution to philosophy’s woes

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes

[epistemic status: thinking out loud; reporting high-level impressions based on a decent amount of data, but my impressions might shift quickly if someone proposed a promising new causal story of what’s going on]

[context warning: If you’re a philosopher whose first encounter with LessWrong happens to be this post, you’ll probably be very confused and put off by my suggestion that LW outperforms analytic philosophy. To that, all I can really say without starting a very long conversation is: the typical Internet community that compares itself favorably to an academic field will obviously be crazy or stupid. And yet academic fields can be dysfunctional, and low-hanging fruit can sometimes go unplucked for quite a while; so while it makes sense to have a low prior probability on this kind of thing, this kind of claim can be true, and it’s important to be able to update toward it (and talk about it) in cases where it does turn out to be true. There are about 6700 US philosophy faculty, versus about 6000 LessWrong commenters to date; but the philosophy faculty are doing this as their day job, while the LessWrong users are almost all doing it in their off time. So the claim that LW outperforms is prima facie interesting, and warrants some explanation. OK, enough disclaimers.]

A month ago, Chana Messinger said:

Rob says, "as an analytic philosopher, I vote for resolving this disagreement by coining different terms with stipulated meanings." But I constantly hear people complain that philosophers are failing to distinguish different things they mean by words and if they just disambiguated, so many philosophical issues would be solved, most recently from Sam and Spencer on Spencer’s podcast. What’s going on here? Are philosophers good at this or bad at this? Would disambiguation clear up philosophical disputes? My cards on the table: I understand analytic philosophers to be very into clearly defining their terms, and a lot of what happens in academic philosophy is arguments about which definitions capture which intuitions or have what properties, and how much, but I’m very curious to find out if that’s wrong. Sam Rosen replied: Philosophers are good at coming up with distinctions. They are not good at saying, "the debate about the true meaning of knowledge is inherently silly; let’s collaboratively map out concept space instead." An edited version of my reply to Chana and Sam: Alternative hypothesis: philosophers are OK at saying ‘this debate is unimportant’; but...

Comment

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=EEyQBLaCumQEsfLiH

A conversation prompted by this post (**added:** and "What I’d Change About Different Philosophy Fields") on Twitter:

**Ben Levinstein:** Hmm. As a professional analytic philosopher, I find myself unable to judge a lot of this. I think philosophers often carve out sub-communities of varying quality and with varying norms. I read LW semi-regularly but don’t have an account and generally wouldn’t say it outperforms.

**Rob Bensinger:** An example of what I have in mind: I think LW is choosing much better philosophical problems to work on than truthmakers, moral internalism, or mereology. I also think it’s very bad that most decision theorists two-box, or that anyone worries about whether teleportation is death. If the philosophical circles you travel in would strongly agree with all that, then I might agree they’re on par with LW, and we might just be looking at different parts of a very big elephant.

**Ben Levinstein:** That could be. I realized I had no idea whether your critique of metaphysics, for instance, was accurate or not because I’m pretty disconnected from most of analytic metaphysics. Just don’t know what’s going on outside of the work of a very select few.

**Rob Bensinger:** (At least, most decision theorists two-boxed as of 2009. Maybe things have changed a lot!)

**Ben Levinstein:** I don’t think that’s changed, but I also tend not to buy the LW explanations for why decision theorists are thinking along the lines they do. E.g., Joyce and others definitely think they are trying to win but think the reference classes are wrong. Not taking a side on the merits there, but just saying I have the impression from LW that their understanding of what CDT-defenders take the rules of the game to be tends to be inaccurate.

**Rob Bensinger:** Sounds like a likely sort of thing for LW to get wrong. Knowing why others think things is a hard problem. Gotta get Joyce posting on LW. :)

**Ben Levinstein:** I also think every philosopher I know who has looked at Solomonoff just doesn’t think it’s that good or interesting after a while. We all come away kind of deflated.

**Rob Bensinger:** I wonder if you feel more deflated than the view A Semitechnical Introductory Dialogue on Solomonoff Induction arrives at? I think Solomonoff is good but not perfect. I’m not sure whether you’re gesturing at a disagreement or a different way of phrasing the same position.

**Ben Levinstein:** I’ll take a look! Basically, after working through the technicals I didn’t feel like it did much of anything to solve any deep philosophical problems related to induction despite being a very cool idea. Tom Sterkenburg had some good negative stuff, e.g., http://philsci-archive.pitt.edu/12429/

Comment

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=AfedhZd5ghGKoKcAZ

Ben Levinstein:

I guess I have a fair amount to say, but the very quick summary of my thoughts on SI remains the same:

  1. Solomonoff Induction is really just subjective Bayesianism + Cromwell’s rule + probability 1 that the universe is computable. I could be wrong about the exact details here, but I think this could even be exactly correct. Like, for any subjective Bayesian prior that respects Cromwell’s rule and is sure the universe is computable, there exists some UTM that will match it. (Maybe there’s some technical tweak I’m missing, but basically, that’s right.) So if that’s so, then SI doesn’t really add anything to the problem of induction aside from saying that the universe is computable. (A toy sketch of this mixture-over-programs picture follows the list.)
  2. EY makes a lot out of saying you can call shenanigans with ridiculous-looking UTMs. But I mean, you can do the same with ridiculous-looking priors under subjective Bayes. Like, OK, if you just start with a prior of .999999 that Canada will invade the US, I can say you’re engaging in shenanigans. Maybe it makes it a bit more obvious if you use UTMs, but I’m not seeing a ton of mileage shenanigans-wise.
  3. What I like about SI is that it basically is just another way to think about subjective Bayesianism. You get a cool reframing and conceptual tool, and it is definitely worth knowing about. But I don’t at all buy the hype about solving induction or even codifying Ockham’s Razor.
  4. Man, as usual I’m jealous of some of EY’s phrase-turning ability: that line about being a young intelligence with just two bits to rub together is great.
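
To make point 1 concrete, here is a minimal, hedged sketch of the mixture-over-programs picture referenced in the list above. Everything here is illustrative: the "machine" below, in which a program is a bitstring denoting the hypothesis "the data stream is this bitstring repeated forever", is a toy stand-in for a real universal Turing machine, and the function names (`hypotheses`, `run`, `predict_next`) are invented for this example. The point it illustrates is just that Solomonoff-style induction is ordinary Bayesian conditioning over a countable hypothesis class with prior weight 2^(-program length).

```python
# Toy sketch (not a real UTM): Solomonoff-style induction as plain Bayesian
# conditioning over a countable hypothesis class with a 2^-(length) prior.
from itertools import product

def hypotheses(max_len=8):
    """Yield (program, prior weight) for all bitstrings up to max_len,
    with weight 2^-(length) -- the role the choice of UTM plays in SI."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** -n

def run(program, n_bits):
    """Toy 'machine': program p denotes the stream p repeated forever."""
    return (program * (n_bits // len(program) + 1))[:n_bits]

def predict_next(observed):
    """P(next bit = 1 | observed): keep hypotheses consistent with the
    observed prefix, weigh each by its prior, and normalize."""
    mass = {"0": 0.0, "1": 0.0}
    for prog, weight in hypotheses():
        out = run(prog, len(observed) + 1)
        if out[:-1] == observed:        # hypothesis survives conditioning
            mass[out[-1]] += weight     # and votes for its next bit
    total = mass["0"] + mass["1"]
    return mass["1"] / total if total else 0.5

print(predict_next("101010"))  # high: the short program "10" dominates
print(predict_next("111111"))  # higher still: "1" is the shortest program
```

Point 2 of the list is visible here too: swapping in a different prior over the same hypotheses (or, equivalently, a different toy machine) changes the predictions, so "which UTM?" does the same work that "which prior?" does under subjective Bayes.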

Comment

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=cjbcueRE5xMszTfk5

I think an important piece that’s missing here is that LW simply assumes that certain answers to important questions are correct. It’s not just that there are social norms that say it’s OK to dismiss ideas as stupid if you think they’re stupid, it’s that there’s a rough consensus on which ideas are stupid.

LW has a widespread consensus on Bayesian epistemology, physicalist metaphysics and consequentialist ethics (not an exhaustive list). And it has good reasons for favoring these positions, but I don’t think LW has great responses to all the arguments against these positions. Neither do the alternative positions have great responses to counterarguments from the LW-favored positions.

Analytic philosophy in the academy is stuck with a mess of incompatible views, and philosophers only occasionally succeed in organizing themselves into clusters that share answers to a wide range of fundamental questions.

And they have another problem stemming from the incentives in publishing. Since academic philosophers want citations, there’s an advantage to making arguments that don’t rely on particular answers to questions where there isn’t widespread agreement. Philosophers of science will often avoid invoking causation, for instance, since not everyone believes in it. It takes more work to argue in that fashion, and it constrains what sorts of conclusions you can arrive at.

The obvious pitfalls of organizing around a consensus on the answers to unsolved problems are obvious.

Comment

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=xgnxiuh4N49ZFpxJv

I would draw an analogy like this one: Five hundred extremely smart and well-intentioned philosophers of religion (some atheists, some Christians, some Muslims, etc.) have produced an enormous literature discussing the ins and outs of theism and the efficacy of prayer, and there continue to be a number of complexities and unsolved problems related to why certain arguments succeed or fail, even though various groups have strong (conflicting) intuitions to the effect "claim x is going to be true in the end". In a context like this, I would consider it an important mark in favor of a group if they were 50% better than the philosophers of religion at picking the right claims to say "claim x is going to be true in the end" about, even if they are no better than the philosophers of religion at conclusively proving to a random human that they’re right. (In fact, even if they’re somewhat worse.)

To sharpen this question, we can imagine that a group of intellectuals learns that a nearby dam is going to break soon, flooding their town. They can choose to divide up their time between ‘evacuating people’ and ‘praying’. Since prayer doesn’t work (I say with confidence, even though I’ve never read any scholarly work about this), I would score a group in this context based on how well they avoid wasting scarce minutes on prayer. I would give little or no points based on how good their arguments for one allocation or another are, since lives are on the line and the end result is a clearer test. Having compelling-sounding arguments matters, but in the end the physical world judges you on whether you ended up getting the right answer, not on your reasoning per se.

To clarify a few things:

  • Obviously, I’m not saying the difference between LW and analytic philosophy is remotely as drastic as the difference between LW and philosophy of religion. I’m just using the extreme example to highlight a qualitative point.

  • Obviously, if someone comes to this thread saying ‘but two-boxing *is* better than one-boxing’, I will reply by giving specific counter-arguments (both formal and heuristic), not by just saying ‘my intuition is better than yours!’ and stopping there. And obviously I don’t expect a random philosopher to instantly assume I’m correct that LWers have good intuitions about this, without spending a lot of time talking with us. I can notice and give credit to someone who has a good empirical track record (by my lights), without expecting everyone on the Internet to take my word for it.

  • Obviously, being a LWer, I care about heuristics of good reasoning. :) And if someone gives sufficiently bad reasons for the right answer, I will worry about whether they’re going to get other answers wrong in the future. But also, I think there’s such a thing as having good built-up intuitions about what kinds of conclusions end up turning out to be true, and about what kinds of evidence tend to deserve more weight than other kinds of evidence. This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.

Comment

I worry that this doesn’t really end up explaining much. We think that our answers to philosophical questions are better than what the analytics have come up with. Why? Because they seem intuitively to be better answers. What explanation do we posit for why our answers are better? Because we start out with better intuitions.

Of course our intuitions might in fact be better, as I (intuitively) think they are. But that explanation is profoundly underwhelming.

This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.

I’m not sure what you mean here, but maybe we’re getting at the same thing. Having some explanation for why we might expect our intuitions to be better would make this argument more substantive. I’m sure that anyone can give explanations for why their intuitions are more likely to be right, but it’s at least more constraining. Some possibilities:

  • LWers are more status-blind, so their intuitions are less distorted by things that are not about being right

  • Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then to adopt the better ones

  • LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you’re right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.

I’m not confident that any of these are good explanations, but they illustrate the sort of shape of explanation that I think would be needed to give a useful answer to the question posed in the article.

Comment

Those seem like fine partial explanations to me, as do the explanations I listed in the OP. I expect multiple things went right simultaneously; if it were just a single simple tweak, we would expect many other groups to have hit on the same trick.

Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then to adopt the better ones

LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you’re right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.

It’s common for people from other backgrounds to get frustrated with philosophy. But that isn’t a good argument to the effect that philosophy is being done wrong. Since it is a discipline separate from science, engineering, and so on, there is no particular reason to think that the same techniques will work. If there are reasons why some Weird Trick would work across all disciplines, then it would work in philosophy. But is there one weird trick?

Having compelling-sounding arguments matters, but in the end the physical world judges you on whether you ended up getting the right answer, not on your reasoning per se.

There is a set of claims that LW holds to be true, and a set that can be tested directly and unambiguously—where "physical reality judges you"—and they are not the same set. Ask yourself how many LessWrongian claims other than Newcomb’s problem are directly testable.

The pragmatic or "winning" approach just doesn’t go far enough.

You can objectively show that a theory succeeds or fails at predicting observations, and at the closely related problem of achieving practical results. It is less clear whether an explanation succeeds in explaining, and less clear still whether a model succeeds in corresponding to the territory. The lack of a test for correspondence per se, i.e. the lack of an independent "standpoint" from which the map and the territory can be compared, is the major problem in scientific epistemology. And the lack of direct testability is one of the things that characterises philosophical problems as opposed to scientific ones—you can’t test ethics for correctness, you can’t test personal identity, you can’t test correspondence-to-reality separately from prediction-of-observation—so the "winning" or pragmatic approach is a particularly bad fit for philosophy.

Pragmatism, the "winning" approach, could form a basis for epistemology if the scope of epistemology were limited only to the things it can in fact prove, such as claims about future observations. Instrumentalism and logical positivism are well-known forms of this approach. But rationalism rejects those approaches!

If you can’t make a firm commitment to instrumentalism, then you’re in the arena where, in the absence of results, you need to use reason to persuade people—you can’t have it both ways.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=5godNbPXtBHHJAwPi

There are about 6700 US philosophy faculty, versus about 6000 LessWrong commenters to date

Ruby from the LW team tells me that there are 5,964 LW users who have made at least 4 (non-spam) comments ever. The number of users with 10+ karma who have been active in the last 6 months is more like 1,000–1,500.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=p9XrvXbwSsqH26FTq

These are some extraordinary claims. I wonder if there is a metric that mainstream analytic philosophers would agree to use to evaluate statements like "LW outperforms analytic philosophy" and "LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae." Without agreed-upon evaluation criteria, this is just tooting one’s own horn, wouldn’t you agree?

Comment

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=RjMnDWXQca8KrDvuF

On the topic of "horn-tooting": see my philosopher-of-religion analogy. It would be hard to come up with a simple metric that would convince most philosophers of religion "LW is better than you at thinking about philosophy of religion". If you actually wanted to reach consensus about this, you’d probably want to start with a long series of discussions about object-level questions and thinking heuristics.

And in the interim, it shouldn’t be seen as a status grab for LWers to toot their own horn about being better at philosophy of religion. Toot away! Every toot is an opportunity to be embarrassed later when the philosophers of religion show that they were right all along. It would be bad to toot if your audience were so credulous that they’d just take your word for it, or if the social consequences of making mistakes were too mild to disincentivize empty boasts. But I don’t think LW or analytic philosophy is credulous or forgiving enough to make this a real risk.

If anything, there probably isn’t enough horn-tooting in those groups. People are too tempted to false modesty, or too tempted to just steer clear of the topic of relative skill levels. This makes it harder to get feedback about people’s rationality and meta-rationality, and it makes a lot of coordination problems harder.

Comment

This sounds like a very Eliezer-like approach: "I don’t have to convince you, a professional who spent decades learning and researching the subject matter; here is the truth, throw away your old culture and learn from me, even though I never bothered to learn what you learned!" While there are certainly plenty of cases where this is valid, in any kind of evidence-based science the odds of it being successful are slim to none (the infamous QM sequence is one example of a failed foray like that. Well, maybe not failed, just uninteresting).

I want to agree with you on the philosophy of religion, of course, because, well, if you start with a failed premise, you can spend all your life analyzing noise, like the writers of the Talmud did. But an outside view says that the Chesterton fence of an existing academic culture is there for a reason, including the philosophical traditions dating back millennia.

An SSC-like approach seems much more reliable in terms of advancing a particular field. Scott spends an inordinate amount of time understanding the existing fences, how they came to be and why they are still there, before advancing an argument for why it might be a good idea to move them, and how to test whether the move is good. I think that leads to him being taken much more seriously by the professionals in the areas he writes about.

I gather that both approaches have merit, as there is generally no arguing with someone who is in a "diseased discipline", but one has to be very careful affixing that label to a whole field of research, even if it seems obvious to an outsider. Or to an insider, if you follow the debates about whether string theory is a diseased field in physics. Still, except for the super-geniuses among us, it is much safer to understand the ins and outs before declaring that the giga-IQ-hours spent by humanity on a given topic are a waste or a dead end. The jury is still out on whether Eliezer and MIRI in general qualify.

Comment

Even if the jury’s out, it’s a poor courtroom that discourages the plaintiff, defendant, witnesses, and attorneys from sharing their epistemic state, for fear of offending others in the courtroom! It may well be true that sharing your honest models of (say) philosophy of religion is a terrible idea and should never happen in public, if you want to have any hope of convincing any philosophers of religion in the future. But… well, if intellectual discourse is in as grim and lightless a state as all that, I hope we can at least be clear-sighted about how bad that is, and how much better it would be if we somehow found a way to just share our models of the field and discuss those plainly.

I can’t say it’s impossible to end up in situations like that, but I can push for the *conditional* policy ‘if you end up in that kind of situation, be super clear about how terrible this is and keep an eye out for ways to improve on it’. You don’t have to be extremely confident in your view’s stability (i.e., whether you expect to change your view a lot based on future evidence) or its transmissibility in order to have a view at all. And if people don’t share their views — or especially, if they are happier to share positive views of groups than negative ones, or otherwise have some systemic bias in what they share — the group’s aggregate beliefs will be less accurate.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=spbjCKhj8jLbxEZpa

So, see my conversation with Ben Levinstein and my reply to adrusi for some of my reply. An example of what I have in mind by ‘LWers outperforming’ is the 2009 PhilPapers survey: I’d expect a survey of LW users with 200+ karma to...

  • … have fewer than 9.1% of respondents endorse "skepticism" or "idealism" about the external world.

  • … have fewer than 13.7% endorse "libertarianism" about free will (roughly defined as the view "(1) that we do have free will, (2) that free will is not compatible with determinism, and (3) that determinism is therefore false").

  • … have fewer than 14.6% endorse "theism".

  • … have fewer than 27.1% endorse "non-physicalism" about minds.

  • … have fewer than 59.6% endorse "two boxes" in Newcomb’s problem, out of the people who gave a non-"Other" answer.

  • … have fewer than 44% endorse "deontology" or "virtue ethics".

  • … have fewer than 12.2% endorse the "further-fact view" of personal identity (roughly defined as "the facts about persons and personal identity consist in some further [irreducible, non-physical] fact, typically a fact about Cartesian egos or souls").

  • … have fewer than 16.9% endorse the "biological view" of personal identity (which says that, e.g., if my brain were put in a new body, I should worry about the welfare of my old brainless body, not about the welfare of my mind or brain).

  • … have fewer than 31.1% endorse "death" as the thing that happens in "teletransporter (new matter)" thought experiments.

  • … have fewer than 37% endorse the "A-theory" of time (which rejects the idea of "spacetime as a spread-out manifold with events occurring at different locations in the manifold"), out of the people who gave a non-"Other" answer.

  • … have fewer than 6.9% endorse an "epistemic" theory of truth (i.e., a view that what’s true is what’s knowable, or known, or verifiable, or something to that effect).

This is in no way a perfect or complete operationalization, but it at least gestures at the kind of thing I have in mind.

Comment

Well, it looks like you declare "outperforming" by your own metric, not by anything generally accepted. (Also, I take issue with the last two. The philosophical ideas about time are generally not about time, but about "time", i.e. about how humans perceive and understand the passage of time. So distinguishing between the A-theory and B-theory is about humans, not about time, unlike, say, Special and General Relativity, which provide a useful model of time and spacetime. A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.) Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on "free choice" in nearly all decision theory discussions.

Comment

Well, it looks like you declare "outperforming" by your own metric, not by anything generally accepted.

I am indeed basing my view that philosophers are wrong about stuff on investigating the specific claims philosophers make. If there were a (short) proof that philosophers were wrong about X that philosophers already accepted, I assume they would just stop believing X and the problem would be solved.

The philosophical ideas about time are generally not about time, but about "time", i.e. about how humans perceive and understand the passage of time.

Nope, the 20th-century philosophical literature discussing time is about time itself, not about (e.g.) human psychological or cultural perceptions of time. There is also discussion of humans’ perception and construction of time—e.g., in Kant—but that’s not the context in which the A-theory and B-theory are debated. The A-theory and B-theory were introduced in 1908, before many philosophers (or even physicists) had heard of special relativity; and ‘this view seems unbelievably crazy given special relativity’ is in fact one of the main arguments cited in the literature against the A-theory of time.

A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.

"It’s raining" is true even if you can’t check. Also, what’s testable for one person is different from what’s testable for another person. Rather than saying that different things are ‘true’ or ‘false’ or ‘neither true nor false’ depending on which person you are, it’s simpler to just say that "snow is white" is true iff snow is white. It’s not as though there’s any difficulty in defining a predicate that satisfies the correspondence theory of truth, and this predicate is much closer to what people ordinarily mean by "true" than any epistemic theory of truth’s "true" is. So demanding that we abandon the ordinary thing people mean by "truth" just seems confusing and unnecessary. Doubly so when there’s uncertainty or flux about which things are testable. Who can possibly keep track of which things are true vs. false vs. meaningless, when the limits of testability are always changing? Seems exhausting.

Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on "free choice" in nearly all decision theory discussions.

This is a very bad argument. Using the phrase "free choice" doesn’t imply that you endorse libertarian free will.

Comment

Well, we may have had this argument before, likely more than once, so probably no point rehashing it. I appreciate you expressing your views succinctly though.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=YT2ARs4C5cFwMcXob

Just yesterday, a friend commented on the exceptionally high quality of the comments I get by posting on this website. Of your many good points, these are my favorite.

Likewise, LW has a culture of ‘we love systematicity and grand Theories of Everything!’ combined with the high level of skepticism and fox-ishness encouraged in modern science.

This maybe points at an underlying reason that academic philosophy hasn’t converged on more right answers: some of those answers require more technical ability than is typically expected in analytic philosophy.

…unlike most recent philosophical heroes, LW’s heroes were irreverent and status-blind enough to create something closer to a clean break with the errors of past philosophy, keeping the good while thoroughly shunning and stigmatizing the clearly-bad stuff.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=geaDEgMTKRTdBip8K

So, I agree with much of this post. I think LessWrong users are definitely better at coming to correct philosophical views than analytic philosophers.

I’m also glad that analytic philosophy exists. First, I’m genuinely unsure how deflationary I ought to be. So I appreciate that philosophers are willing to debate just about anything. I quite like the "relaxed style". Second, analytic philosophy is huge. It covers a lot more topics than LessWrong, and at a greater level of depth. So even though quality-wise it tends to be worse, the sheer quantity of it means that if you’re interested in some philosophical topic, there are probably some good papers on it.

I like to think of analytic philosophy as a fringe science. Some of it is obvious pseudoscience. Some of it is genuinely promising. But there’s also a lot of stuff that is so weird and fascinating that I don’t know what to think about it, except to say that I wish them luck. For example: there’s a program in philosophy of physics to rewrite all of physics without using math. This is just not the sort of thing that would occur to any physicist, or anyone else other than a select group of analytic philosophers.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=8anzGpaoLbLiaB3m8

Does anyone know of any significant effort to collect ‘cute conceptual questions’ in one place?

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=JkjQ8DqPqNPpxeZs3

I thought you made some excellent points about how many of these ideas are in the philosophical memespace, but just haven’t gained dominance. In Newcomb’s Problem and Regret of Rationality, Eliezer’s argument is pretty much "I can’t provide a fully satisfactory solution, so let’s just forget about the theoretical argument which we could never be certain about anyway and use common sense". While I agree that this is a good principle, philosophers who discuss the problem generally aren’t trying to figure out what they’d do if they were actually in the situation, but to discover what this problem tells us about the principles of decision theory. The pragmatic solution wouldn’t meet this aim. Further, the pragmatic principle would suggest not paying in Counterfactual Mugging.

I guess I have a somewhat interesting perspective on this, given that I don’t find the standard LW answers very satisfying for Newcomb’s or Counterfactual Mugging, and I’ve proposed my own approaches which haven’t gained much traction, but which I consider to be far more satisfying. Should I take the outside view and assume that I’m way too overconfident about being correct (since I have definitely been in the past, and this is very common among people who propose theories in general)? Or should I take the inside view and downgrade my assessment of how good LW is as a community for philosophy discussion?

Comment

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=LinDkroqmCrwcDefy

Also note that Eliezer’s "I haven’t written this out yet" was in 2008, and by 2021 I think we have some decent things written on FDT, like Cheating Death in Damascus and Functional Decision Theory: A New Theory of Instrumental Rationality. You can see some responses here and here. I find them uncompelling.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=4FPxrGn3jEfrAxSrP

I think there’s something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration. To be fair, though, I think LessWrong does a better job of being pragmatic enough to be useful for having an impact on the world than academic philosophy does. I just note that, like with anything, sometimes the balance seems to go too far and fails to carefully consider things that are worthy of greater consideration as a result of a desire to get on with things and say something actionable.

Comment

I think there’s something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration.

I agree with this. I especially agree that LWers (on average) are too prone to do things like:

  • Hear Eliezer’s anti-zombie argument and conclude "oh good, there’s no longer anything confusing about the Hard Problem of Consciousness!".

  • Hear about Tegmark’s Mathematical Universe Hypothesis and conclude "oh good, there’s no longer anything confusing about why there’s something rather than nothing!".

On average, I think LWers are more likely to make important errors in the direction of ‘prematurely dismissing things that sound un-sciencey’ than to make important errors in the direction of ‘prematurely embracing un-sciencey things’. But ‘tendency to dismiss things that sound un-sciencey’ isn’t exactly the dimension I want LW to change on, so I’m wary of optimizing LW in that direction; I’d much rather optimize it in more specific directions that are closer to the specific things I think are true and good.

Comment

Hear Eliezer’s anti-zombie argument and conclude "oh good, there’s no longer anything confusing about the Hard Problem of Consciousness!".

The fact that so many rationalists have made that mistake is evidence against the claim that rationalists are superior philosophers.

Comment

Yep!

Comment

Then we have evidence against the claim, but none for it.

Comment

False!

Comment

Where’s the evidence for it?

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=bkxxJhqfpAjavS4Cd

In short, my position on Newcomb’s is as follows: Perfect predictors require determinism, which means that strictly there’s only one decision that you can make. To talk about choosing between options requires us to construct a counterfactual to compare against. If we construct a counterfactual where you make a different choice and we want it to be temporally consistent, then given determinism we have to edit the past. Consistency may force us to also edit Omega’s prediction and hence the money in the box, but all this is fine since it is a counterfactual. CDTers may deny the need for consistency, but then they’d have to justify ignoring changes in past brain state despite the presence of a perfect predictor which may have a way of reading this state.

As far as I’m concerned, the Counterfactual Prisoner’s Dilemma provides the most satisfying argument for taking the Counterfactual Mugging seriously.
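
As an illustration of the consistency point (this is not anyone’s official decision theory; the dollar amounts are just the standard ones from the literature, and `payoff` is an invented helper), here’s a minimal sketch contrasting the two ways of constructing the Newcomb counterfactual: one that edits Omega’s prediction along with the act, and one that holds the prediction fixed while the act varies.

```python
# Toy Newcomb payoffs with a perfect predictor. 'Perfect prediction' is
# modeled as a copy of the agent's act. A sketch for illustration only.

def payoff(act, prediction):
    """Opaque box holds $1,000,000 iff Omega predicted one-boxing;
    the transparent box always holds $1,000."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if act == "one-box" else opaque + 1_000

# Consistent counterfactual: changing the act also changes the (perfect)
# prediction, as the comment above argues consistency requires.
for act in ("one-box", "two-box"):
    print("consistent:", act, payoff(act, prediction=act))
# -> one-boxing earns 1,000,000; two-boxing earns 1,000.

# CDT-style counterfactual: the prediction is held fixed while the act
# varies, so two-boxing dominates in every row -- at the cost of positing
# a world where the 'perfect' prediction no longer matches the act.
for fixed_prediction in ("one-box", "two-box"):
    for act in ("one-box", "two-box"):
        print("fixed:", fixed_prediction, act, payoff(act, fixed_prediction))
```

The two loops make the disagreement concrete: which counterfactual you construct, not any disagreement about the payoffs themselves, is what drives the one-box/two-box split.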

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=XeXfLWJ2KZbfrvEPc

(b) … there’s a culture of being relaxed, or something to that effect, in philosophy

That is possibly a result of mainstream philosophy being better at metaphilosophy… in the sense of being more skeptical. Once you have rejected the idea that you can converge on The One True Epistemology, you have to give up on the "missionary work" of telling people that they are wrong according to TOTE, and that’s your "relaxation".

Philosophers are good at coming up with distinctions. They are not good at saying, "the debate about the true meaning of knowledge is inherently silly; let’s collaboratively map out concept space instead."

If that means giving up on traditional epistemology, it’s not going to help. The thing about traditional terms like "truth" and "knowledge" is that they connect to traditional social moves, like persuasion and agreement. If you can’t put down the table stakes of truth and proof, you can’t expect the payoff of agreement.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=PYLFh6ReuYo9b4wnh

LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.

LW should not be comparing itself to Plato. It’s trying to do something different. The best of what Plato did is, for the most part, orthogonal to what LW does.

You can take the LW worldview totally onboard and still learn a lot from Plato that will not in any way conflict with that worldview.

Or you may find Plato totally useless. But it won’t be your adoption of the LW memeplex alone that determines which way you go.

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=KtZW8aBvSWJRWTN9b

...a background ‘all knowledge requires thermodynamic work’ model …

Assumes physicalism, which epiphenomenalists don’t.

If you want to talk them out of epiphenomenalism, you need to talk them into physicalism, and you can do that by supplying a reductive explanation of consciousness. But you, the rationalists, don’t have one … it’s not among your achievements.

Comment

https://www.lesswrong.com/posts/8DHocYocCbhpjz4Gt/deflationism-isn-t-the-solution-to-philosophy-s-woes?commentId=yBFveHQxN6yZqfubJ

Assumes physicalism, which epiphenomenalists don’t.

Philosophy papers presumably obey thermodynamics, so it should be possible to speak of the physical processes that produce different sentences in philosophy papers, and why we should think of those processes as more or less truth-tracking. Actual epiphenomenalism would mean that you can’t have any causal influence on philosophy papers; so I assume we’re not going for anything that crazy. But if the view is something more complicated, like panprotopsychism, then I’d want to hear the story of how the non-thermodynamic stuff interacts with the thermodynamic stuff to produce true sentences in philosophy papers.

But you, the rationalists, don’t have one … it’s not among your achievements.

You don’t need a full theory of consciousness, personal identity, or quantum gravity in order to say with confidence that ghosts aren’t real. Similarly, uncertainty about how consciousness works shouldn’t *actually* translate into uncertainty about epiphenomenalism. Compare:

An oft-encountered mode of privilege is to try to make uncertainty within a space, slop outside of that space onto the privileged hypothesis. For example, a creationist seizes on some (allegedly) debated aspect of contemporary theory, argues that scientists are uncertain about evolution, and then says, "We don’t really know which theory is right, so maybe intelligent design is right." But the uncertainty is uncertainty within the realm of naturalistic theories of evolution—we have no reason to believe that we’ll need to leave that realm to deal with our uncertainty, still less that we would jump out of the realm of standard science and land on Jehovah in particular. That is privileging the hypothesis—taking doubt within a normal space, and trying to slop doubt out of the normal space, onto a privileged (and usually discredited) extremely abnormal target.

Comment

Philosophy papers presumably obey thermodynamics, so it should be possible to speak of the physical processes that produce different sentences in philosophy papers, and why we should think of those processes as more or less truth-tracking.

Actual epiphenomenalism would mean that you can’t have any causal influence on philosophy papers; so I assume we’re not going for anything that crazy.

I don’t know why you keep bringing that up. Epiphenomenalists believe they are making true statements, and they believe their statements aren’t caused by consciousness, so they have to believe that their statements are caused physically by a mechanism that is truth-seeking. And they have to believe that the truth of their statements about consciousness is brought about by some kind of parallelism with consciousness. Which is weird.

But you don’t refute them by telling them "there is a physical explanation for you writing that paper". They already know that.

Are you willing to say that illusionism is as obviously wrong as epiphenomenalism?

Comment

No, though I’m willing to say that illusionism is incredibly weird and paradoxical-seeming and it makes sense to start with a strong prior against it.

Comment

Why should "my consciousness doesn’t exist" be less crazy than "my consciousness exists but has no causal powers"?

Comment

In lieu of recapitulating the full argument, I’ll give an intuition pump: ‘reality doesn’t exist’ should get a much higher subjective probability than ‘leprechauns exist’ or ‘perpetual motion machines exist’, paradoxical though that sounds. The reason being that we have a pretty clear idea of what ‘leprechauns’ and ‘perpetual motion machines’ are, so we can be clearer about what it means for them not to exist; we’re less likely to be confused on that front, and it’s more likely to be a well-specified question with a simple factual answer. Whereas ‘reality’ is a very abstract and somewhat confusing term, and it seems at least somewhat likelier (even if it’s still extremely unlikely!) that we’ll realize it was a non-denoting term someday, though it’s hard to imagine (and in fact it sounds like nonsense!) from our present perspective.

In this analogy, ‘epiphenomenalism is true’ is like ‘leprechauns exist’, while ‘illusionism is true’ is like ‘reality doesn’t exist’. From my perspective, the first seems to be saying something pretty precise and obviously false. The latter is a strange enough claim, and concerns a confusing enough concept (‘phenomenal consciousness’), that it’s harder to reasonably say with extreme confidence that it’s false. Even if our inside view says that we couldn’t *possibly* be wrong about this, we should cautiously hedge (at least a little) in our view outside the argument.

And then we get to the actual arguments for and against illusionism, which I think (collectively) show that illusionism is very likely true. But I’m also claiming that even before investigating illusionism (but after seeing why epiphenomenalism doesn’t make sense), it should be possible to see that illusionism is not like ‘leprechauns exist’.

Comment

Philip K Dick gave a pretty good informal definition of reality: "Reality is that which, when you stop believing in it, doesn’t go away."

I do think that a reasonable person can start off with a much higher prior probability on epiphenomenalism than on illusionism (and indeed, many intellectuals have done so), because the problems with epiphenomenalism are less immediately obvious (to many people) than the problems with illusionism. But by the time you’ve finished reading the Sequences, I don’t think you can reasonably hold that position anymore.

Comment

I’ve read the Sequences, and they don’t argue for illusionism, and they don’t argue for any other positive solution to the Hard Problem.

Comment

They argue against epiphenomenalism, and introduce a bunch of other relevant ideas and heuristics. Including the aforementioned:

My attitude toward questions of existence and meaning was nicely illustrated in a discussion of the current state of evidence for whether the universe is spatially finite or spatially infinite, in which James D. Miller chided Robin Hanson: "Robin, you are suffering from overconfidence bias in assuming that the universe exists. Surely there is some chance that the universe is of size zero." To which I replied: "James, if the universe doesn’t exist, it would still be nice to know whether it’s an infinite or a finite universe that doesn’t exist."

Ha! You think pulling that old "universe doesn’t exist" trick will stop me? It won’t even slow me down!

It’s not that I’m ruling out the possibility that the universe doesn’t exist. It’s just that, even if nothing exists, I still want to understand the nothing as best I can. My curiosity doesn’t suddenly go away just because there’s no reality, you know!

The nature of "reality" is something about which I’m still confused, which leaves open the possibility that there isn’t any such thing. But Egan’s Law still applies: "It all adds up to normality." Apples didn’t stop falling when Einstein disproved Newton’s theory of gravity. Sure, when the dust settles, it could turn out that apples don’t exist, Earth doesn’t exist, reality doesn’t exist. But the nonexistent apples will still fall toward the nonexistent ground at a meaningless rate of 9.8 m/s².

You say the universe doesn’t exist? Fine, suppose I believe that—though it’s not clear what I’m supposed to believe, aside from repeating the words. Now, what happens if I press this button?

Comment

By "positive solution" I mean a claim about what is the correct theory, not a claim about what is the wrong theory. I am well aware that he argues against epiphenomenalism.

Of course, it is far from the case that the heuristics you mentioned have led most or many people to illusionism.

Edited:

illusionism is not like ‘leprechauns exist’.

Neither is epiphenomenalism. Philosophy only deals with broad, poorly specified claims. The parallels with typical "internet sceptic" topics like "leprechauns exist" are misleading.