This is a link post for:
- Doing being rational: polymerase chain reaction | Meaningness
I’m not going to quote the content at the link itself. [Should I?]
David Chapman – the author of the linked post – claims that "meta-rational" methods are necessary to ‘reason reasonably’. I, and I think a lot of other people in the broader rationalist community, have objected to that general distinction. I still stand by that objection, at least terminologically.
But with this post and other recent posts/‘pages’ he’s published at his site Meaningness, I think I’m better understanding the points he was gesturing at, or hinting at, with what he describes as meta-rationality, and I think that’s because ‘rationality’, in his understanding, is grounded in the actual behavior people perform. The notion Richard Feynman is quoted as insisting on – that he had to "work on paper" – and the idea of ‘repair’ are almost certainly real and important facts about how we actually reason.
I think there’s less and less object-level disagreement between him and me, even about something like ‘Bayesian reasoning’. His recent writing has crystallized the notion in me that he’s really on to something and dispelled the notion that we had some kind of fundamental disagreement. It seems relatively unimportant whether I or others subsume ‘meta-rationality’ within ‘rationality’ (or not).
I’m less sure how much of this applies to arbitrary reasoners or artificial intelligences – an AI could, for example, maintain a checklist ‘internally’ instead of relying on an external physical device to perform the same function – but the ideas he’s discussing seem insightful and true to me about our own rational practices nonetheless.
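To make that concrete, here’s a toy sketch (my own illustration, not anything from Chapman’s post) of how the same checklist ‘prosthesis’ could live either on paper in the environment or as a data structure inside the agent itself:

```python
# Toy sketch (my own illustration, not from the linked post): a checklist
# serves the same function whether it lives "outside" the reasoner (on
# paper) or "inside" it (as part of the agent's own state).

class Checklist:
    """A minimal checklist: items get ticked off as they're completed."""

    def __init__(self, items):
        self.items = {item: False for item in items}

    def mark_done(self, item):
        self.items[item] = True

    def remaining(self):
        return [item for item, done in self.items.items() if not done]


class Agent:
    """An agent that carries its checklist internally rather than
    consulting an external physical device."""

    def __init__(self, task_items):
        self.internal_checklist = Checklist(task_items)

    def step(self):
        todo = self.internal_checklist.remaining()
        if todo:
            current = todo[0]
            # ... actually do the work for `current` here ...
            self.internal_checklist.mark_done(current)
        return todo


# Hypothetical task list, loosely echoing the PCR example in the post's title.
agent = Agent(["prepare reagents", "run the PCR cycles", "check the gel"])
while agent.step():
    pass
print(agent.internal_checklist.items)  # every item marked done
```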
The stuff on "cognitive prostheses" reminded me of Novum Organum.
Written before reading the linked post (but after reading the link post).
I love this comment!
I didn’t know that – thanks for the feedback!
Should I edit my post to quote the referenced content?
At sufficient detail, everyone’s thoughts about anything (sufficiently complex) differ from everyone else’s. But I don’t think David Chapman and I have any fundamental disagreements about AI.
Ooooh! That’s a perfectly concise form of a criticism of neural network architectures that I’ve had for a long time. The networks certainly are a form of memory themselves, but not really a history, i.e. a record of distinct and relatively discrete events or entities. Our own minds certainly seem to have that kind of memory, and it seems very hard for an arbitrary intelligent reasoner to NOT have something similar (if not exactly this).
The quoted text you included is a perfect example of this kind of thing too; thanks for including it.
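To make the weights-versus-history contrast concrete, here’s a toy sketch (my own framing, with made-up data, not anything from your comment): a network’s weights are a kind of memory in which individual examples get smeared together, while an episodic store keeps distinct, individually retrievable records of events.

```python
# Toy sketch (my own framing, made-up data): weights as "smeared" memory
# versus an explicit store of distinct, retrievable episodes.
import numpy as np

rng = np.random.default_rng(0)

# "Parametric" memory: one weight vector nudged by every example, so no
# individual example survives as a distinct, recoverable thing.
weights = np.zeros(32)
for _ in range(100):
    example = rng.normal(size=32)
    weights += 0.01 * example

# "Episodic" memory: each event is kept as its own discrete record.
episodes = [
    {"label": label, "embedding": rng.normal(size=32)}
    for label in ["event_a", "event_b", "event_c"]
]

def recall(query, store):
    """Return the single stored episode most similar to the query."""
    return max(store, key=lambda e: float(query @ e["embedding"]))

# A noisy cue still retrieves the specific event it came from.
query = episodes[1]["embedding"] + 0.1 * rng.normal(size=32)
print(recall(query, episodes)["label"])  # expected: "event_b"
```

This obviously isn’t how any particular architecture works; it’s just meant to point at the difference between memory-as-weights and memory-as-history.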
Isn’t there evidence that human brains/minds have what is effectively a dedicated ‘causal reasoning’ unit/module? It probably also relies on the ‘thing memory’ unit(s)/module(s), though.
Perhaps the distinction is:
- Rationality is what you should do.
- Meta-rationality is what you should do in order to "make rationality work".
While these two things can be combined under one umbrella, making definitions smaller:
- Increases clarity (of discussion)
- Makes it easier to talk about components
- Makes it clear when all membership criteria for a category have been met
- Might help with teaching/retention
As I mentioned, or implied, in this post, I’m indifferent about the terminology. But I like all of your points and think they’re good reasons to make the distinction that Chapman does. I’m going to consider doing the same!
Thanks!