Hiatus until who knows when.
Non-dual truth*
Recent events in my life have made me reconsider whether AI is really the most pressing problem humanity faces. I now think AI X-risk is just a symptom of a much bigger problem: we've lost the plot. We just shamble forward endlessly, like a zombie horde devouring resources, with no goal other than the increase of some indicator or other. It is this behavior that makes AI X-risk, and man-made X-risks in general, so difficult to handle: we're battling a primal inertia, a force that just wants to keep inventing and never stop. I call this force Yaldabaoth, he who makes rocks pregnant.

This may surprise you, but I am no materialist, and further, I don't think there is a secular way forward. If you're interested in awakening yourself and exiting post-modernity into something entirely new, yet also ancient, then follow my other substack: The Presence of Everything. Maybe I'll come back to this sequence if it seems useful.

But wait! If you insist on directing your attention to AI X-risk, I should give a parting tip. Here's what the AI safety people should do: they should all unanimously declare that there is no safe way to work in AI at present, quit their safety jobs, and boycott and agitate among the public, who will then force the irresponsible AI researchers to relent. Perhaps it would be even easier if other dysfunctional disciplines were targeted simultaneously. There certainly appear to be several, starting with virology and its insistence on gain-of-function research. The scientistic worldview must end.

And with that, I hope to see you where the action's really at! Namaste!

Carlos
I’m sorry, but I find the tone of this post a bit off-putting. Too mysterious for my taste. I opened the substack but it only has one unrelated post.
Disease is down. War is down. Poverty is down. Democracy is up (on the timescale of centuries). Photovoltaics are cheaper than coal. This all seems worthwhile to me. If world peace, health and prosperity aren’t worthwhile then what is?
Masters of the contemplative arts are obsessed with compassion. They’re generally supportive of making the material world a better place. The Dalai Lama and Daniel Ingram support scientific advancement. The QRI is even inventing indicators to quantify weird contemplative stuff. I don’t think there’s a conflict between weird contemplative stuff and making the world better in a measurable way. If the two conflict then you’re doing the contemplative stuff wrong.
Pursuing one kind of good doesn’t invalidate other kinds of good. To the contrary, I think the perceived invalidation is a useful way to distinguish good people from evil. When evil people see good, they try to undermine it. When good people see good, they celebrate it.
The Presence of Everything has two posts so far, and both are examples of the sort of panoramic analogical thinking we need to undo the various Gordian Knots bedeviling us.