Msg Len

https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len

I’ll be brief, omit needless words.

Intelligence is prediction is compression because

Compression is finding a code that makes the data shorter
And codeword lengths are probabilities
So codes are probability distributions
But probability distributions are prediction strategies.
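A sketch of the standard coding-theory facts behind each step (Kraft inequality, Shannon codelengths, chain rule); these identities are not in the post itself, they are just the usual way to make each "is" precise:

```latex
% Requires amsmath. Codes <-> lengths <-> probabilities <-> predictors.
\begin{align*}
  &\text{Any prefix code with lengths } \ell(x) \text{ satisfies } \sum_x 2^{-\ell(x)} \le 1
    \quad\text{(Kraft inequality),} \\
  &\text{so } q(x) := 2^{-\ell(x)} \text{ is a (sub)probability distribution.} \\[4pt]
  &\text{Conversely, any distribution } q \text{ admits a prefix code with lengths} \\
  &\qquad \ell(x) = \lceil -\log_2 q(x) \rceil \;\le\; -\log_2 q(x) + 1
    \quad\text{(Shannon code).} \\[4pt]
  &\text{A distribution over sequences is a sequential prediction strategy, by the chain rule:} \\
  &\qquad -\log_2 q(x_{1:n}) \;=\; \sum_{t=1}^{n} -\log_2 q(x_t \mid x_{<t}).
\end{align*}
```

So an arithmetic coder driven by q emits a message whose length is, to within a couple of bits, the predictor's cumulative log loss; better prediction means a shorter message.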

Comment

https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len?commentId=gaX5BnuwepBrzo27k

And prediction strategies are almost optimization procedures?

Comment

https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len?commentId=dHq9hoQavxTA5TxaL

Did you really need to say that you’d be brief? Wasn’t it enough to say that you’d omit needless words? :)

Comment

https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len?commentId=5jkPReQ2brTn635Dd

But then he’d lose the Strunk and White allusion.

Comment

https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len?commentId=4PqpRpak9eKS56ER5

I approve of the haikuesque format. Do you agree that the "bijection" Intelligence → Prediction preserves more structure than Prediction → Compression?