Oslo on IRC jokingly summarizing part of a debate:
""""""″politics is the mindkiller" is an applause light" is a fully general counterargument" is deeply wise" is a semantic stop sign" is why our kind can’t cooperate" is a fake explanation"
At the Columbus megameetup, some people actually printed out a set of cards (as a stand-alone deck) and played the game. I don’t know which of two people has the source file, but I can find out...
All I have to say is that if someone actually makes this game, there has to be room for the awesomeness of quines.
After all, "is an applause light" is an applause light, isn’t it?
""""""″"politics is the mindkiller" is an applause light" is a fully general counterargument" is deeply wise" is a semantic stop sign" is why our kind can’t cooperate" is a fake explanation" is a mindkiller"
Someone has been regularly downvoting everything I’ve posted in the past couple of months (not just a single karma-assassination). I really don’t care about the karma (so please DO NOT upvote any of my previous posts in order to "fix" it), but I do worry that if someone is doing it to me, they are possibly doing it to other/new people and driving them off, so I wanted to point out publicly that this behaviour is NOT OKAY.
Anyways, if you have a problem with me, feel free to tell me about it here: http://www.admonymous.com/daenerys . Crocker’s Rules and all.
Do I understand it correctly that the behavior you describe is "downvote every new comment from user X when it appears" (as opposed to "go to user X’s history and downvote a lot of their old comments at the same time")?
Because when hearing about karma assassinations, I always automatically assumed the latter form; only the words "early downvote" in Nancy’s comment made me realize the former form is also possible.
A possible technical fix could be to not display the user comment’s karma until at least three votes were made or at least one day has passed.
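The suggested rule is simple enough to sketch. A minimal illustration, assuming the thresholds from the suggestion above (the function name and exact values are hypothetical, not anything LW actually implements):

```python
from datetime import datetime, timedelta

def karma_visible(vote_count: int, posted_at: datetime, now: datetime) -> bool:
    """Show a comment's karma only once it has at least three votes
    or is at least one day old (the hypothetical rule suggested above)."""
    return vote_count >= 3 or (now - posted_at) >= timedelta(days=1)
```

Until the predicate is true, the UI would simply show no score, which blunts both the "early downvote" signal and any bandwagon effect from an early score.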
Also, off-topic: Crocker’s Rules seem to be popular in our culture; maybe it would be nice to integrate them into the LW user interface. For example, a user could add their "anonymous feedback URL" in their preferences, and a new "Reply Anonymously" icon would then be displayed below all of that user’s comments and articles.
Theoretically it might be useful for people to be able to set a visible flag "Talk to me under Crocker’s Rules"—but I suspect that it will immediately degenerate into a status sign.
If I declare Crocker’s Rules and you write something rude in a reply to me, other LW readers still see it. So even if I am perfectly okay with it (and I shouldn’t have declared CR otherwise), you might lose some status in the eyes of the observers who don’t properly evaluate the context of your reply.
If you send me a private message, we get rid of the observers. Unless I play dirty and later show the private message to someone else. Anonymous feedback would prevent me from doing so.
But yes, for 99% of cases, sending private message would be enough, anonymization is not needed. And we already have that option here.
Crocker’s Rules, as I understand them, are about efficient conveyance of meaning without the extra baggage of social niceties. They are not about the ability to express unpopular views without social consequences, which is where private messages or anonymity shine.
If you are concerned about observers misinterpreting the context you can always add a little [This post is under Crocker’s Rules] tag somewhere.
Crocker’s rules are not directly about anonymity, no, but if you want to maximise your chances of receiving honest feedback, an anonymous contact method is valuable.
Random thought: I’ve long known that police can often extract false confessions of crimes, but I only just now made the connection to the AI box experiment. In both cases you have someone being convinced to say or do something manifestly against their best interest. In fact, if anything I think the false confessions might be an even stronger result, just because of the larger incentives involved. People can literally be persuaded to choose to go to prison, just by some decidedly non-superhuman police officers. Granted, it’s all done in person, so stronger psychological pressure can be applied. But still: a false confession! Of murder! Resulting in jail!
I think I have to revise downwards my estimate of how secure humans are.
Humans are extremely susceptible to arguments they have not been inoculated against. These arguments can be religious, scientific, emotional, financial, anything. One example is new immigrants from certain places falling for get-rich-quick scams in disproportionately large numbers (not so much anymore, since the knowledge has spread). Or certain LW regulars believing Roko’s basilisk. Or becoming vegan (not all mind hacking is necessarily negative).
I would conjecture that every single one of us has open ports to be exploited (some more so than others), and someone with a good model of you, be it a super-smart AI or a police negotiator, can manipulate you into willingly doing stuff you would never have expected to be convinced of doing before having heard the argument.
I can’t see why you claim it’s a stronger result. In the AI box experiment, the power is entirely in the gatekeeper’s hands; in an interrogation situation the suspect is virtually powerless. This distinction is important because even the illusion of having power is enough to make someone less susceptible to persuasion.
Plus, police don’t sit down with suspects in a chat room. They use ‘enhanced interrogation techniques’, methods such as an unfamiliar environment, threat of violence (or actual violence in some cases), and various other threats. An AI cannot do any of this to a gatekeeper unless the gatekeeper explicitly lets it out.
That’s all certainly true, but the AI box experiment is still a game at heart. The gatekeeper loses and he’s out, what, fifty bucks or something? (I know some games have been played, and won, I think, with higher stakes, and those are indeed impressive.) The suspect "loses" and he’s out 20+ years of his life. It’s hard to make a comparison, but I think the two results are at least comparable, even with the power imbalance.
Some LWers may be interested in a little bet/investment opportunity I’m setting up. I have become increasingly disgusted with what I’ve learned about the currently active Bitcoin+Tor black markets post-Silk-Road—specifically, with BlackMarket Reloaded & Sheep. I am also frustrated that customers are flocking to them, and they all seem absurdly optimistic. So, I am preparing to make a large public four-part escrowed bet with any comers on the upcoming demise of BMR & Sheep in the coming year, in the hopes that by putting money where my mouth is, I may shock at least a few of them into sanity and perhaps even profit off the more deluded ones.
The problem is, I feel I can afford to risk ฿1 ($200), but I’m not sure that this will be enough to impress anyone when split over 4 bets ($50 a piece). So I am willing to accept up to ฿1 in investments from anyone, to increase the amount I can wager. The terms are simple: whatever fraction of the bankroll you send, that’s your share of any winnings. If we bet ฿2 and you sent ฿1, then you get half the winnings if any. (I am not interested in taking any cut here.)
My full writeup of the bet, with some statistics helping motivate the death probabilities I am betting based on: http://pastebin.com/bEuryTuF
If you are interested, you can reply here, or contact me at gwern@gwern.net, or we can chat on Freenode (as gwern or just visit #lesswrong). I am currently ignoring private messages on LW, so don’t do that.
Also, please don’t express interest unless you are genuinely fine with potentially losing your investment: given my best estimate of the probabilities & their correlations, there’s somewhere >10% chance that we would lose all 4 bets as both BMR & Sheep survive the full year.
EDIT: if you really want to get in, I’ll still take your bitcoins, but I think I have enough investors now, thanks everyone.
That was a per-person limit; I may close it down soon, though (฿3 plus my own bitcoin and recent appreciation should be enough to impress people, and beyond that, I think there are diminishing returns).
If so, no way in hell those are conditionally independent.
Of course they are not conditionally independent, that’s why I gave it as a lower bound.
Specifically, I think we can agree that whatever the exact relationships, the failure of one bet will increase the chance of failure of all the others: if the 6-month Sheep bet fails, then the 12-month one becomes more likely to fail, and to a smaller degree, the BMR ones become more likely to fail. And not the other way around. Hence independence is the best-case scenario, and so it’s the lower bound, and that’s why I wrote ">10%".
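The independence lower bound is just a product of per-bet probabilities. A toy sketch with made-up numbers (the actual figures are in the linked pastebin writeup, not here):

```python
# "Losing" a bet means the market survives its deadline. Under independence,
# P(lose all four) is the product of the per-bet survival probabilities.
# These probabilities are invented for illustration only.
p_survive = [0.55, 0.60, 0.55, 0.60]  # assumed P(survival) for each of the 4 bets

p_lose_all = 1.0
for p in p_survive:
    p_lose_all *= p  # valid as a product only if the bets are independent

# Positive correlation between the bets (one market surviving making the
# others' survival more likely) can only push the true probability above
# this product, which is why the independent case gives the lower bound.
```

With these hypothetical numbers the product comes out just under 11%, matching the ">10%" shape of the claim.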
Hmm, about 100 downvotes in the last couple of days, 1 per comment or so, suggest that someone here is royally pissed off at me. I wish I knew the reason. On the bright side, at least this forum provides some indication of a problem. When this happens to me IRL, I either never find out about it or deduce it months or years later based on second-hand information, rumors, or, in some cases, denied promotions/requests/opportunities. I wonder if this is a common experience? Situations like this are a significant reason why I would likely jump in with both feet if offered a chance to join a telepathic society.
Well, "this" is broad, but I expect that failing to notice enmity, and relatedly being unaware of consequent social attacks, is a pretty common experience, especially in "polite" social contexts (that is, ones in which overt expressions of conflict violate social norms).
"Crocker’s Rules" are an attempt to subvert this; you might find it useful to declare that you operate under them… though I would expect not… in cases like you describe I expect that the downvoter(s) will not wish to be identified.
As someone with no particular aptitude in general niceties, I always welcome Crocker’s rules, and mistakenly assume that others do, too.
I wish you luck in deciphering the reason(s).
My best (but still low-confidence) guess, based on the timing, is that an overly critical comment of mine may have been taken as too harsh.
For what it is worth, I really liked your comment. Though I guess I’d be pissed (for a minute) if someone said it to me. I didn’t read the whole discussion, but she seemed pretty passionate about her views. When I get that way, nothing makes me angrier than someone (rightly) pointing out that I’m "too passionate" to discuss this clearly.
PubMed is allowing comments. Only people who have publications at PubMed will be permitted to comment. I predict that PubMed will find it needs human moderators.
I recently realized that I think the stuff I already know about the history of science, math, etc., is really inherently interesting and fascinating to me, but that I’ve never actually thought about going out of my way to learn more on the subject. Does anybody on here have one really good book on the subject to recommend? I’ve already read Science and the Enlightenment by Hankins.
The Copernican Revolution, by Kuhn, is one of the best science histories I’ve ever read.
The folk-tale version of how we adopted heliocentric cosmology is something like this: "Aristotle and Ptolemy thought the world was arranged as concentric crystalline spheres. Copernicus proposed a new model that better fit the data, and it was opposed by the Church. Ultimately thanks to the Reformation and the Enlightenment, the correct model won out."
None of those claims is right, and Kuhn does a great job explaining the true story. He explains what problem Copernicus thought he was solving and how well he solved it.
I second the recommendation of The Copernican Revolution, and suggest another book on the same topic: Arthur Koestler’s The Sleepwalkers.
Koestler was a great novelist (his best known novel, Darkness at Noon, rivals 1984 in its portrayal of totalitarian thought) and a brilliant, eclectic and sometimes bizarre thinker. The Sleepwalkers is a grand history of astronomy and cosmology from ancient times to Newton, with the bulk of the focus on Copernicus, Kepler and Galileo.
Pros: Fascinating and very detailed biographical information on these three figures (and others like Tycho Brahe), presented in a way that reads like a novel, indeed a page-turner. His biography of Kepler is especially unforgettable, very different from a dry academic presentation. The historical presentation is peppered with opinionated philosophical and even sociological detours.
Cons: unbalanced coverage of different topics, subjective and somewhat biased viewpoints. In particular, his interpretation of the relationship between Kepler and Galileo, and of Galileo’s dealings with the Church, is colored by what seems to be a strong personal dislike of Galileo. His interpretation of the reasons why the heliocentric model was rejected in ancient times is also unreliable.
As long as his interpretations are taken with a grain of salt (or balanced with a more objective presentation like Kuhn’s) I would definitely recommend it; it is the most enjoyable book on history of science I have read.
According to him, the ancient heliocentric model of Aristarchus was clearly superior in simplicity and predictive power to the geocentric models of Ptolemy and others, and was abandoned for irrational reasons (religiously or ideologically motivated). From what I understand, the mainstream academic position is that, analyzed in context and without hindsight, the ancient rejection of the heliocentric theory was quite reasonable. Previous discussion in Less Wrong.
I think it is better to say that the rejection could have been reasonable (that we cannot rule out that possibility), not that we can rule out the possibility that it was unreasonable.
My interpretation is that Hipparchus was geocentric, perhaps for good reason, and everyone else was geocentric for the bad reason that Hipparchus had data, and data was high status, not because they were convinced by the data. In any event, his data does not rule out the distances Archimedes proposes in the Sand Reckoner, probably following Aristarchus. But I don’t think it is even really established that Hipparchus was geocentric, just that Ptolemy said so.
Update: Nope, history is bullshit. Hipparchus was not geocentric. Maybe Ptolemy said he was, but what did he know? Other ancient sources say that he refused to pick sides, not knowing how to distinguish the hypotheses. At the very least this shows that the heliocentric hypothesis was alive and well. Asking why they discarded it is the wrong question. Frankly, I’m with Russo: the heliocentric hypothesis was standard.
Possibly I should add that I read that when I was quite young (13ish?) and haven’t reread since. It doesn’t contain anything remotely resembling advanced maths—it’s definitely about history and the philosophy of the concept. I obviously found it memorable though, so although the writing may have been so terrible I didn’t notice at 13, it’s unlikely.
I notice that the latest two posts from Yvain’s blog haven’t shown up in the "recent from rationality blogs" field. If this is due to a decision to no longer include his blog among those that are linked, I believe this to be a mistake. Yvain’s blog is in my view perhaps the most interesting and valuable among those that are/were linked. And although I am in no danger of missing his updates myself, the same might not be true of all LW readers that may be interested in his writing.
Having just got a Kindle Paperwhite, I’m surprised by (a) how many neat tricks there are for getting reading material onto the device, and (b) how under-utilised and hacky this seems to be. So far I’ve implemented a pretty kludgey process for getting arbitrary documents / articles / blog posts onto it, but I’m pretty sure there’s a lot of untapped scope for the intelligent assembly and presentation of reading material.
So, fellow infovores, what neat tips and tricks have you found for e-readers? What unlikely material do you consume on them?
I think I set up mutt (and presumably some other software) just so that I could email files to my kindle from the command line; and I have an instapaper bookmarklet to do the same with webpages. I haven’t used either very much recently, but that seems to pretty much cover my "getting content onto it" needs.
I have the same Instapaper bookmarklet. I’ve also set up Instapaper to forward a digest of all my Feedly content that I mark as "save for later". It turns out I only seem to use this feature for (a) incredibly long blog posts I probably shouldn’t be reading at work, and (b) highly NSFW blog posts I probably shouldn’t be reading at work. This makes for an interesting combination.
I’m fairly unsatisfied with the Kindle email document conversion, mainly because it doesn’t do anything intelligent with document metadata. As it happens, I’ve been playing around with automated document metadata extraction, so I might see if I can put together a clever alternative.
k2pdfopt. It slices up PDFs so that you can read them without zooming on a much narrower screen, and since its output PDFs are essentially images, it eats everything up to (and including) very math-heavy papers, regardless of the number of columns they have. It works with scanned stuff too.
(And even though the output is a bit bigger than the originals, I didn’t encounter any problems with 600-page books… the result was about 50 megs tops.)
Readability can be set up to send articles to it, and/or do a daily collection. Feedly can send rss feeds to it.
The user interface of the kindle is the real limitation; it’s fine for reading books/articles but pretty useless for going through large numbers of files.
I’ve been reminded of something Paul Graham said in his Frighteningly Ambitious Startup Ideas essay, about how email is becoming a grossly inefficient to-do list for most people, and how it could be worth instigating a whole new to-do protocol from the ground up, whose degenerate case would be the email equivalent of "to-do: read the following text".
So I’ve started looking through my emails to see what messages I receive which are essentially "read this text". It’s become quite apparent that there aren’t that many, and most of them are requests or suggestions to do something else online (one point for Paul Graham), but there are a few obvious examples where this does happen, such as event itineraries, e-tickets, boarding passes, etc. These tend to be de facto documents, though, so it’s not especially insightful.
Reflecting on LessWrong’s past, I’ve noticed a striking pattern in article voting: questions do not get upvoted nearly as much as answers do.
Perhaps it would be useful to have a thread where LessWrong could posit topics and upvote the article titles that it would be most interested in reading? For example, I am now drafting a post titled "Applying Bayes Theorem." Provided I can write high-quality content under that title, I expect LessWrong would be intensely interested in this on account of not fully grasping exactly how to do so.
So as a trial run: What topics currently elude your understanding, and what might the title of a high-quality article that addressed that topic be?
"Lower Bounds on Superintelligence". While a lot of LW content is carefully researched, much of what’s posted in support of the singularity hypothesis seems to devolve into just-so stories. I’d like to see a dry, carefully footnoted argument for why an intelligence that was able to derive correct theories from evidence, or generate creative ideas, much faster than humans would necessarily rapidly acquire the ability to eliminate all human life. In particular I’m looking for historical analogies, cases where new discoveries with important practical implications were definitely delayed not just due to e.g. industrial capacity, but solely through human stupidity.
"Trading with entities that are smarter than you". Given the ability of highly intelligent entities to predict the future better than you can, and deceive without outright lying, what kind of trades or bets is it wise to enter into with such entities? What kind of safeguards would you need to have in place?
"How to get a stupid person to let you out of a box". Along with, I think, many people who’ve never done it, I find the results of the AI-box experiment highly implausible. I can’t even imagine a superintelligent persuading me to let it out, or, equivalently, I can’t imagine persuading even someone very stupid to let me out. I know the most successful AI players are keeping their strategies secret for reasons I don’t understand (if nothing else, it seems to imply those strategies are exceedingly fragile), but if there’s anyone who has a robust strategy that’s even partially effective I’d be very interested to see it.
"From printing results to destroying all humans"—to me this is the weakest part of the MIRI et al case, and I think most objections we see are variants on this theme. It’s obvious that an oracle-like AI would have to interact with the universe in some sense. It’s obvious that an AI with unbounded ability to interact with the universe would most likely rapidly destroy all humans. It’s nonobvious that there is no possible way to code an AI that can reliably tell the difference between the two, and a solution to this problem naively seems rather more tractable than solving Friendliness in full generality. I’d like to see an exploration of this problem.
"When your gut won’t shut up and multiply." The recent downvoted discussion post seems to be in this area, suggesting the wider community is perhaps less interested than I, but I’d love to see some practical advice on effective decision strategies when one’s calculated best action is intuitively morally dubious, with anecdotes of the success or failure of particular approaches.
"Times when I noticed I was confused". In theory, noticing you’re confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn’t. I’d like to see more examples of when this has and hasn’t worked in practice, and useful habits to acquire that make you more likely to be able to notice.
"Times when I noticed I was confused". In theory, noticing you’re confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn’t. I’d like to see more examples of when this has and hasn’t worked in practice, and useful habits to acquire that make you more likely to be able to notice.
Most of my examples here are trite individually, but significant collectively; that is, I remember the habit more easily than any particular examples. There have been situations where I had some niggling doubt, said "I’m confused, I ought to resolve this uncertainty," and after research concluded that I was wrong and by acting early I saved myself some hardship. But while I’m certain there have been at least three of those, I have trouble remembering them or thinking that the ones I do remember are worth sharing.
I almost got scammed today. I received a very official-looking piece of mail, "billing" me a few hundred bucks. Normally I would be able to see through it immediately, but this particular one caught me off guard. I am usually very good about being skeptical, and it disappointed me that I almost fell for it. What I think happened is that my familiarity heuristic was exploited.
I have business with a certain state, and it was familiar for me to receive correspondence from various agencies and pay all sorts of different fees. So when I got this letter in the mail, it didn’t raise any flags. I went to check online, not because I was suspicious, but because I was annoyed that I wasn’t aware of this fee; that is when I discovered I had almost been duped.
This isn’t a particularly new scam; I have heard of it before, but when it happened to me, I almost didn’t notice. What I learned from this whole thing is to be vigilant against letting my guard down to con artists who exploit the familiarity heuristic. I was so familiar with bills that I glanced over the small print indicating that "this is a solicitation". I may have received scams like this before regarding a car payment or mortgage, but I was able to easily pick them out because I didn’t have car payments or a mortgage; obvious scam was obvious. But then I got hit right where I am familiar, and it wasn’t so obvious.
This is the letter. I was less careful than usual (I should have read through it), but because it had information about me and was consistent with what I might see on a normal basis, I let my guard down. I only attempted to check the fee schedules to see why I had missed something like this, all the while assuming that I probably had.
There are some interesting points in there, especially about the fact that most people make themselves like what seems ‘cultured’ (I’ve definitely seen this type of appeal to majority among my friends—I was nearly roasted alive when I mentioned I honestly don’t enjoy a particular classical composer).
There are also some fallacies in there too.
Anyway, the part where he talks about trickery is interesting:
What counts as a trick? Roughly, it’s something done with contempt for the audience. For example, the guys designing Ferraris in the 1950s were probably designing cars that they themselves admired. Whereas I suspect over at General Motors the marketing people are telling the designers, "Most people who buy SUVs do it to seem manly, not to drive off-road. So don’t worry about the suspension; just make that sucker as big and tough-looking as you can."
I question this premise. It seems to imply that the purpose behind the art determines its quality, and not the art itself. For instance, if you have two identical paintings, but one was painted with the intention of making money, and the other for true artistic merit, the latter somehow has more value (and is thus in ‘better taste’) than the former.
At any rate, in the end that paragraph was the closest I got to his definition of ‘taste’ - the ability to recognize trickery in artistic works.
And especially this paragraph about people with good taste:
Or to put it more prosaically, they’re the people who (a) are hard to trick, and (b) don’t just like whatever they grew up with.
Finally,
I wrote this essay because I was tired of hearing "taste is subjective" and wanted to kill it once and for all.
While the insights presented are interesting (in providing a window into the author’s mind, at least), it has not actually succeeded in that purpose.
I think it’s just elliptic rather than fallacious.
Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don’t get to communicate. So there is something they are all picking up on, but it isn’t a single property. (Symmetry might come closest, but not really close; i.e., it explains more than any other factor, but not most of the phenomenon.)
Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.
That’s his basic argument for taste being a thing, and it doesn’t need a precise definition; in fact, it suggests that giving a precise definition is probably AI-complete.
Now, the contempt thing is not a definition; it is a suggested heuristic for identifying confounders. To look at my metaphor again, if I wanted to learn about beauty-confounders, the tricks people use to make people they have no respect for think women are hotter than they are (in other words, porn methods) would be a good place to start.
This really isn’t about the thing (beauty/artistic quality) per se; it’s more about the delta between the thing and the average person’s perception of it. And that actually is quite dependent on how much respect the artist/"artist" has for his audience.
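The "rankings will correlate very highly" claim above is easy to make concrete with Spearman’s rank correlation. A minimal sketch; the two raters’ rankings are invented for illustration:

```python
def spearman(rank_a, rank_b):
    """Spearman rank correlation for two rankings with no ties:
    rho = 1 - 6 * sum(d_i**2) / (n * (n**2 - 1))."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Two hypothetical raters ranking the same ten photos, mostly agreeing
# on the ordering but swapping a few adjacent pairs:
rater_1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
rater_2 = [2, 1, 3, 4, 6, 5, 7, 8, 10, 9]
```

Here `spearman(rater_1, rater_2)` comes out above 0.96: near-perfect agreement despite the raters never communicating, which is the "something they are all picking up on" in numerical form.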
Google is your friend, but keep in mind that "yoga" is an umbrella term for a large variety of exercises. In particular, yoga as an Indian discipline aimed at reaching moksha, the liberation from the reincarnation cycle, is rather different from yoga as practiced in the West with the goal of losing 10 lbs.
I would add that the same thing goes for meditation, anaerobic exercise and aerobic exercise as well.
All those terms include a lot of different activities.
I saw one study that indicated that meditation did not lower blood pressure, refuting earlier studies, but that yoga did. Can’t find it now however. The wikipedia page on meditation research might be useful.
also this
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take. To apologize for that, people say that evolution is hard to predict because it’s directionless, e.g. it doesn’t necessarily lead to more complexity, larger number of individuals, larger total mass, etc. That leads to the question, is there some deep reason why we can’t find any numerical parameter that is predictably increased by evolution, or is it just that we haven’t looked hard enough?
Plenty of people predict that increased antibiotic use will lead to a rise in antibiotic resistance among bacteria.
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grasses have more iterations behind them and are therefore better optimized for the environment than trees.
A tree has to get lucky to survive the beginning. If it survives the beginning, it can, however, grow tall and win.
Let’s say you keep the environment stable for 2 billion years. Everything evolves naturally. Then you take tree seeds and bring them back to the present time. I think there’s a good chance that such a tree would outcompete grass at growing in glades.
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take.
Fossils don’t really get used as the central evidence of common descent anymore. These days common descent usually gets determined by looking at the DNA.
In my experience, people who discuss evolution online and focus on fossils are usually atheists who treat their atheism as a religion. They think it’s important to defend Darwin against the creationists, but they aren’t up to date with the current science on evolution.
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grasses have more iterations behind them and are therefore better optimized for the environment than trees.
You seem to be predicting that grasses have smaller genomes than trees, but wheat is famous for having a huge genome. Here’s a table of a few plants. Maybe wheat is an outlier and I’d be interested if you had documentation of some pattern, but I’ve always heard that there is none.
Do you have evidence that the variation in genome size among multicellular organisms is not variation in waste? Added: As far as I know, the consensus is that it is. If you disagree with the consensus, you should acknowledge that’s what you’re doing.
Do you have evidence that the variation in genome size among multicellular organisms is not variation in waste?
I haven’t made a claim that strong. To the extent I made a claim, it’s that not all variation in genome size between multicellular organisms is due to different amounts of waste.
And no, I don’t intend to claim anything that’s outside the consensus on this topic. To the extent I might differ from the consensus here, consider that to be error.
If I remember right, one reason for plants like grasses to have long genomes is to keep multiple copies of genes to speed up protein production.
What do you mean, "predict"? It has been empirically observed, a lot.
cousin_it made the claim that we can only say something about evolution that happened in the past. I say that we can confidently predict that increasing antibiotic resistance among bacteria will continue in the future.
Huh? It doesn’t work like that at all. For one thing, the "environment" isn’t stable.
Firstly, describing a complex system in a few words is seldom completely accurate. The question is whether it’s a useful mental model for thinking about it.
In this case, the idea I wanted to communicate is that it’s very useful to think about the speed of iterations and the competitive advantage that a species gets from having hundreds of millions of iterations over its competitors.
The environment doesn’t have to be stable for the argument that I made. In changing environments, a species with faster iterations adapts faster. A lot of genetic adaptations are also about housekeeping genes that are useful in most environments.
Evolution leads to a higher level of fitness in the environment, but the problem is that the environment itself is constantly changing in unpredictable ways. It’s like an optimization process where the utility function itself is constantly changing. That’s why it’s very hard to reliably quantify fitness. For instance, billions of years ago, the increase in oxygen in the atmosphere killed a lot of existing organisms and forced aerobic bacteria onto the scene.
Replies to comments that attempted to point out a numerical parameter that’s increased by evolution. (I’d be more interested in comments pointing out a deep reason why we can’t find such a numerical parameter, but there were no such comments.)
lmm:
Life "wants" to spread, so perhaps an increase in the volume in which life can be found?
That’s been steady for a while now.
ChristianKl:
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
David_Gerard:
Total number of species (including extinct).
That can’t decrease by definition, and will increase under any mechanism that gives nonzero chance of speciation, e.g. if God decides to create new species at random.
Lumifer:
The chances of successful transmission of genes across generations given a stable environment.
That seems to be contradicted by the possibility of evolutionary suicide.
The number of offspring surviving to reproductive age
Humans don’t have more offspring than bacteria in average conditions, and have much fewer offspring in ideal conditions.
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
More particularly, the equilibrium size of the DNA is very roughly inversely correlated with population size. A larger population size is better at filtering out disadvantageous traits. It’s not linear—there are discontinuities as decreasing population size eliminates natural selection’s ability to select against different things. And those things sometimes can even go on to be selected for for other reasons—there are genomic structures that are important for eukaryotes that could probably never have evolved in a bacterium, because to get to them you need to go through various local minima of fitness.
Soil bacteria can have trillions of individuals per cubic meter of dirt and they actually experience direct evolution towards lower genome size—more DNA means more sites at which something could mutate and become problematic and they actually feel this force. Eukaryotes go up in volume by a factor of ~1000 and go down in population by at least as much, and lose much of the ability to select against introns and middling amounts of intergenic DNA and expanding repeat-based centromere elements.
Multicellular creatures with piddlingly tiny population sizes compared to microbes lose much of the ability to select against selfish transposon DNA elements, gigantic introns and gene deserts, and their promoter elements get fragmented into pieces strewn across many kilobases rather than one compact transcriptional regulation element of a few dozen to a few hundred base pairs (granted, we’ve also been able to make good use of some of these things for interesting purposes from our adaptive immune system to the concerted regulation of our hox gene clusters that regulate our body plans). They also become very sensitive to the particular character of the transposons or DNA repair machinery of their particular lineage and wind up random-walking like crazy up and down an order of magnitude or two in genome size as a result.
Thanks! I was hoping you’d show up, it’s always nice to get a lesson :-)
Going back to the original question, are there any "general purpose adaptations" that never disappear once they show up? Does evolution act like a ratchet in any way at all?
Closest thing I can think of from what I know without going through literature is the building up of chains of dependencies. Once you have created a complex system that needs every bit to function, it has a tendency to stay as a unit or completely leave.
You can see that in a couple contexts. One is ‘subfunctionalization’. Gene duplications are fairly common across evolution—one gene gets duplicated into two identical genes and they are free to evolve separately. You usually hear about that in the context of one getting a new function, but that’s actually comparatively rare. Much more likely is both copies breaking slightly differently until now both of them are necessary. A major component of the ATP-generating apparatus in fungi went through this: a subunit that is elsewhere composed of a ring of identical proteins now has to be composed of a ring of two alternating almost identical proteins neither of which can do the job on its own. Ray-finned fish recently went through a whole-genome duplication, and a number of their developmental transcription factors are now subfunctionalized such that, say, one does the job in the head end and the other does its job in the tail end.
Another context is the organism I work in, yeast. I like to call yeast "a fungus that is trying its damndest to become a bacterium". It lives in a context much like many bacteria and it has shrunk its genome down to maybe 2.5x that of an E. coli and its generation time down to 90 minutes. But it still has 40 introns hanging out in less than 1% of its genes so it needs a fully functional spliceosome complex to be able to process those transcripts lest those 40 genes utterly fail all at once, and it has most of the hallmarks of eukaryotic genome structure and regulation (in a neat, smaller, more research-friendly package). That being said it has lost a few big eukaryotic systems, like nonsense-mediated RNA decay and RNA interference, and they left relatively little trace behind.
Sure, but mostly because evolution’s so good at it. The fact that evolution so quickly filled a tidal pool, so quickly filled all the tidal pools, so quickly filled the oceans, so quickly covered the land, is evidence of strength rather than weakness.
There does seem to be a "punctuated equilibrium" effect here; life fills a region, appears static for a while, but then makes a breakthrough and rapidly fills another region. It could be argued that this is also true of things that humans optimize for: human population growth has abruptly rapidly accelerated at least twice (invention of agriculture, industrial revolution). Slavery was everywhere in the ancient world, then eliminated across most of it in the space of a century. Gay marriage went from hopefully-it-will-happen-in-my-lifetime to anyone who opposed it being basically shunned. Scientific and technological breakthroughs tend to look a lot like this.
Generalizing this to all optimization processes would be very speculative.
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
From bacteria that lived a long time ago. Not from those that live today, which have had many iterations to optimize themselves. Different bacteria species can also exchange genes with each other much better than vertebrates, which need viruses to do so.
Implying that humans evolved from the kinds of bacteria that are around today might be more wrong than saying that the bacteria we see now evolved from humans. There is more evolutionary distance between today’s bacteria and the bacteria from which humans descended than between humans and those ancestral bacteria.
Yeah, and there are often bacteria in a single flower pot that are less related to each other than you are to the potted plant. But both bacteria still have a much smaller genome than you or the plant, maybe because genome size matters for reproduction speed for them, but is insignificant for us.
That can’t decrease by definition, and will increase under any mechanism that gives nonzero chance of speciation, e.g. if God decides to create new species at random.
Just apply Occam.
That seems to be contradicted by the possibility of evolutionary suicide.
Possibility wouldn’t contradict anything, a high enough probability would.
That seems to be contradicted by the possibility of evolutionary suicide.
Evolutionary suicide seems to be someone’s theoretical idea. Is there any evidence that it happens in evolution in reality?
In any case, are you basically trying to find the directionality of evolution? On a meta level higher than "adapted to the current environment"? There probably isn’t one. Evolution is a quite simple mechanism; it just works given certain conditions. It is not goal-oriented, it’s just how the world is.
However if I were forced to find something correlated with evolution, I’d probably say complexity.
Damn it. It was going to be a better example because I was going to give the actual genera (Aspidoscelis and Cnemidophorus) of whiptail lizards whose species keep going down this path and then I got distracted and didn’t do that. Oops.
Depends on your time frame. Looking at the whole history of life on Earth evolution certainly correlates with complexity, looking at the last few million years, not so much.
I understand the argument about the upper limit of genetic information that can be sustained. I am somewhat suspicious of it because I’m not sure what will happen to this argument if we do NOT assume a stable environment (so the target of the optimization is elusive, it’s always moving) and we do NOT assume a single-point optimum but rather imagine a good-enough plateau on which genome could wander without major selection consequences.
But I haven’t thought about it enough to form a definite opinion.
I think you just don’t give an amoeba much credit because it’s not a multicellular organism. Its genome is 100-200 times the size of the human genome. As it’s that big, it seems we haven’t sequenced all of it, so we don’t know how many genes it has.
We also know very little about amoebas. Genetic analysis suggests that they do exchange genes with each other in some form, but we don’t know how.
Amoeba probably express a lot of stuff phenotypically that we don’t yet understand.
Why should there be a numerical parameter predictably increased by evolution? Why not look for a numerical parameter predictably increased by continental drift? or by prayer? by ostriches?
One of the key pieces of justification for FAI is the idea of "optimization process". Evolution is given as an example of such process, unlike continental drift or ostriches. It seems natural to ask what parameter is optimized.
Just FYI, I interpret that question very differently than your original.
Why don’t you start with a simpler example, like a thermostat? Would you not call that an optimization process, minimizing the difference between observed and desired temperature?
Most of your rejections of suggestions in this thread would also reject the thermostat. An ideal thermostat keeps the temperature steady. Its utility function never improves, let alone monotonically. A real thermostat is even worse, continually taking random steps back. In extreme weather, it runs continually, but never gets anywhere near its goal. It only optimizes within its ability. Similarly, evolution does not expand life without bound, because it has reached the limit of its ability to exploit the planet. This limit is subject to the fluctuations of climate. But the main limit on evolution is that it is competing with itself. Eliezer suggests that it is better to make it plural, "because fox evolution works at cross-purposes to rabbit evolution." I think most teleological errors about evolution are addressed by making it plural.
Also, thermostats occasionally commit suicide by burning down the building and losing control of future temperature. (PS—I think the best example of evolutionary suicide is genes that hijack meiosis to force their propagation, doubling their fitness in the short term. I’ve been told that ones that are sex-linked have been observed to very quickly wipe out the population, but I can’t find a source. Added: the phrase is "meiotic drive," though I still don’t have an example leading to extinction.)
Do you mean to say that the expected inclusive fitness of a randomly selected creature from the population goes up with time? Well, if we sum that up over the whole population, we obtain the total number of offspring—right? And dividing that by the current population, we see that the expected inclusive fitness of a randomly selected creature is simply the population’s growth rate. The problem is that evolution does not always lead to >1 population growth rate. Eliezer gave a nice example of that: "It’s quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case the sustainable wolf population will decrease as new wolves replace old."
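The summing-and-dividing step in that argument can be made concrete with made-up numbers (a minimal illustration, not part of the original comment):

```python
# Hypothetical population of 100: 30 individuals leave 2 offspring,
# 50 leave 1, and 20 leave none.
offspring_counts = [2] * 30 + [1] * 50 + [0] * 20

population = len(offspring_counts)            # 100
total_offspring = sum(offspring_counts)       # 2*30 + 1*50 = 110
growth_rate = total_offspring / population    # 1.1

# The expected fitness of a randomly selected individual is the same
# ratio, so "mean fitness" and "population growth rate" coincide.
mean_fitness = total_offspring / population
print(growth_rate == mean_fitness)  # True
```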
While I don’t know of any simple or convenient numerical parameter, I’d note that we do have some handy non-retrospective pieces of evidence for evolution by natural selection, such as the induced occurrence of evolutionary benchmarks such as multicellularity.
In general, there are some adaptations which are highly predictable under certain circumstances, but there may not be any sort of meaningful measure we can use for evolution of organisms over time which aren’t a function of their relationship with their environment.
I think whatever numerical parameter evolution raised generally (not always) with respect to its environment, it would have to do with meaningful complexity, however that can be numerically expressed, and a local decrease in entropy. Design would cause those too, but hypothesizing it would violate Occam’s razor.
Different environments and different substrates for mutation cause different kinds of evolution.
One main thing that happens with a long enough period of selection in a simple, stable environment on a microorganism is a shrinking of the genome.
You quite simply will not find a simple parameter perpetually increased by evolution. Whatever works better for that base organism in that particular environment will become more common. One thing being selected for under all circumstances and showing up all the time is just not the reality.
we can’t find any numerical parameter that is predictably increased by evolution
The chances of successful transmission of genes across generations given a stable environment. The number of offspring surviving to reproductive age is a good first-order approximation.
If you want something more tangible, predictions what features evolution would lose are rather easy—those that are (energy-)expensive and are useless in the new environment.
There have been plenty of evolutionary simulations; surely they provide some testable predictions. I vaguely recall one of them: that new adaptations tend to propagate first in small isolated groups and only then spread through the rest of the species. I don’t recall if this has been tested against the fossil record. I am sure there are many more testable predictions, like how fish locked in a dark cave or murky water tend to lose eyesight. But the exact path is probably too hard to predict. For example, marine mammals did not develop gills. Or that mammals develop intelligence by growing the neocortex, while birds use the DVR (dorsal ventricular ridge) or maybe the nidopallium for the same purpose.
Huh? Even if you accept the estimates that your link points to, the amount of information in mammalian genome and optimization power of evolution are VERY different things.
If you can narrow down the number of possible lifeforms to one in 2^n, that’s n bits of optimization power, and n bits of information as to what the final lifeform is.
If life is getting more and more optimal, then we can simply wait until we know that less than one in 2^25 million lifeforms are that optimal, and we have more than 25 megabytes of information as to what that lifeform is.
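The bit-counting reasoning above can be sketched numerically (a minimal illustration of the standard Shannon measure, not part of the original comment):

```python
import math

def optimization_bits(total_outcomes: float, outcomes_selected: float) -> float:
    """Bits of optimization power: how far a process narrows down the
    space of outcomes. Selecting 1 outcome out of 2**n equally likely
    possibilities corresponds to n bits."""
    return math.log2(total_outcomes / outcomes_selected)

# Halving the space of possibilities is always exactly one bit,
# regardless of which half survives.
print(optimization_bits(8, 4))            # 1.0
print(optimization_bits(8, 1))            # 3.0
print(optimization_bits(2**200, 2**175))  # 25.0 bits of narrowing
```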
then we can simply wait until we know that less than one in 2^25 million lifeforms are that optimal
You go and wait. I’ll do other things in the meantime :-) Do you have any intuition how large that number is?
and we have more than 25 megabytes of information as to what that lifeform is.
You’ve spent all that 25Mb for an index into the lifeform space but you have not budgeted any information for the actual description of the lifeform.
Imagine the case where there’s one bit. It tells you whether creature-0 or creature-1 is optimal. But it doesn’t tell you what these creatures are.
In any case, all these numbers are based on the resistance of Earth mammals to genetic drift. That really doesn’t limit how evolution can optimize with different creatures in different places.
Do you have any intuition how large that number is?
It’s not going through them one at a time.
You’ve spent all that 25Mb for an index into the lifeform space but you have not budgeted any information for the actual description of the lifeform.
It’s not a simple English description, but narrowing down the possibilities by a factor of two is always one bit of information. It doesn’t matter whether it’s "the first bit is one", "the xor of all the bits is one" or even "it’s a hash of something starting with a one using X algorithm, which is a bijection".
Imagine the case where there’s one bit. It tells you whether creature-0 or creature-1 is optimal. But it doesn’t tell you what these creatures are.
It’s the one with a higher inclusive genetic fitness. That’s what evolution optimizes for.
If evolution has n bits of optimization power, that’s equivalent to saying that if you order all possible lifeforms based on how optimal they are, this is going to be in the top 1/2^n of them. (It’s actually somewhat more complicated, since it’s more likely to be higher up and there’s some chance of it being lower, but that’s the basic idea.)
In any case, all these numbers are based on the resistance of Earth mammals to genetic drift. That really doesn’t limit how evolution can optimize with different creatures in different places.
It does vary based on what lifeform you’re looking at, since they all have different mutation rates and different numbers of children, but there’s always a limit to the information, and I’m pretty sure that it’s pretty much always a limit that’s already been hit.
By my calculations, if you had the entire earth’s surface covered by a solid meter-thick layer of bacteria for 4.6 billion years and each bacterium lived for 1 hour, that would be approximately 2^155 bacteria having lived and died.
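As a sanity check, that arithmetic can be reproduced with round-number inputs (everything below is an order-of-magnitude guess, not a measurement); it lands within a power of two of the quoted figure:

```python
import math

# Order-of-magnitude assumptions:
earth_surface_m2  = 5.1e14               # total surface area of Earth
layer_thickness_m = 1.0                  # the hypothetical bacterial layer
bacteria_per_m3   = 1e18                 # ~1 cubic micrometre per bacterium
hours_elapsed     = 4.6e9 * 365.25 * 24  # 4.6 billion years of 1-hour lifetimes

total_bacteria = (earth_surface_m2 * layer_thickness_m
                  * bacteria_per_m3 * hours_elapsed)
print(f"~2^{math.log2(total_bacteria):.0f}")  # ~2^154, near the quoted 2^155
```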
You can massively increase genetic information (inasmuch as that actually means much in biology) very quickly with very simple genetic changes. It’s not a case of searching through every possible 1 bit change.
narrowing down the possibilities by a factor of two is always one bit of information
Provided, of course, that your space of possibilities is finite and you know what it is. In the case of evolution you don’t.
that’s equivalent to saying that if you order all possible lifeforms
I don’t understand what "all possible lifeforms" means. Does not compute.
but there’s always a limit to the information, and I’m pretty sure that it’s pretty much always a limit that’s already been hit.
Which limit? The limit of information in the mammalian genome? Or the limit of evolution—whatever exists is the pinnacle and no better (given the same environment) can be achieved?
Brienne Strohl mentioned a website called Gingko on Facebook which allows you to write documents in the form of nested trees.
I’ve been playing around with it today and found it very useful; being able to write ideas out in a disordered way seems to get around some of my perfectionism issues and stop me procrastinating. The real test is whether I continue to use it in the future; I’ll try to check back in a month or so.
After doing Lumosity exercises for a bunch of days, I find that my speed/concentration scores are below 1000 (1000 is supposed to be average) while memory is at 1460 and problem solving at 1360.
I’m familiar with the discussion around fluid intelligence but what do we know about raising speed? Do we know how to conduct training to improve it?
After doing Lumosity exercises for a bunch of days, I find that my speed/concentration scores are below 1000 (1000 is supposed to be average) while memory is at 1460 and problem solving at 1360.
When did you start? Recently? I may be wrong, but I think average scores are matched to your peers regardless of time spent on the game. So if you just started the exercises, your score is being compared to everyone’s scores, even those of people who have been learning how to play that particular game for a long time.
PROFILE: Chemotherapy adjuvant specifically designed for glioblastomas of neuronal origin. By mimicking natural neural differentiation factors, it causes these tumors to regress from resilient high-grade neuroblasts towards more typical neurons, making them easy targets for stronger chemotherapeutic agents.
BANNED BECAUSE: During differentiation process, malignant nerve cells form connections to healthy nerve cells and to each other. As a result, tumor forms a functioning neural network effectively "telepathically" connected to healthy brain. Patients report feelings of overwhelming guilt as tumor accesses patient’s memories and emotions and realizes its role as a parasitic cancer, followed by its utter terror as it realizes it is about to be killed. Many patients refuse to continue with chemotherapy regimen; those who continue make a complete physical recovery but are psychologically scarred for life as they experience every moment of the tumor’s death as if it were their own.
My favorite item in the Yvain’s list of fictional banned drugs.
A response to Aaron Freeman’s "You Want a Physicist to Speak at Your Funeral."
If I had a physicist speak at my funeral, I would hope that he would talk about a lot more than the conservation of energy. I don’t particularly care about what happens to my energy.
If I am lucky, he will speak about relativity. My family will probably have the mistaken intuition that only things in the present are truly real. Teach them about spacetime. They need to know that time and space are connected—that me being in the past is just like me being far away. The difference is that we will only have one-way communication. Even if they will no longer be able to talk to me, I will still talk to them through memories.
If I am not so lucky, he will speak about quantum mechanics. If I die young, my family will be grieving over the potential future I have lost. Teach them about many worlds. They need to know that our world is constantly splitting—that just before I died, the world split off a different future in which I am still alive. There is another world, just as real as our own, in which I survive. This world will even interact with our own in very tiny ways.
I want a physicist to speak at my funeral. I want everyone to understand that my continued existence is way more verifiable than a religious afterlife and way more substantial than a simple conservation of energy.
Upvoted since it’s a little harsh for ‘us’ to tell someone that something is better suited for open thread and then to downvote it without explanation when it goes there...
Genuinely (if admittedly idly) curious: if this was your only reason for upvoting, do you now feel like you should retract your upvote since the comment would no longer be net-downvoted without it?
What work has been done with the causality/probability of ontological loops? For example, if I have two boxes, one with a million dollars in it, and I’m given the option to open one of them and then go back to change what I did (with various probabilities for choice of box, success of time travel, and so on), is there existing literature telling me how likely I am to walk out with a million dollars?
Obviously the answer will change depending on which version of time travel you use (invariant, universe switching, totally variant, etc.)
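For what it’s worth, the "universe switching" variant is easy to play with via Monte Carlo simulation. Everything below is a toy model with made-up probabilities, not a reference to any actual literature:

```python
import random

def trial(p_rich_box=0.5, p_travel_success=0.9, max_retries=1):
    """One run: open a box; if it's empty, attempt to go back in time
    and choose again. All probabilities are illustrative placeholders."""
    for _ in range(max_retries + 1):
        if random.random() < p_rich_box:
            return True    # found the million dollars
        if random.random() >= p_travel_success:
            return False   # empty box, and the time machine failed
    return False           # out of retries

n = 100_000
wins = sum(trial() for _ in range(n))
# Analytically: 0.5 + 0.5 * 0.9 * 0.5 = 0.725
print(wins / n)
```

Changing the model (e.g. to an invariant, Novikov-style consistency condition) would mean rewriting `trial`, which is exactly the point of the parent comment: the answer depends on which version of time travel you pick.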
Some HPMOR speculation Spoilers up to current chapter. After writing this, I checked the last LessWrong thread on HPMOR, and at least one component of this has already been noticed by other people, but others have not been, I think.
I was disappointed in the last chapter, gung nqhygf jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr runs contrary to common sense and to the rest of the book.
Sorry, I was used to your fic holding to higher standards of believability of human behavior than canon’s.
I think wizard culture has some different ideas from your culture.
I must be missing something, because even Harry had trouble being taken seriously by most adults for most of the story, and no other (first-year) children were anywhere near his level. Yet suddenly so many of them seem to be taken seriously by their relatives and by all the most powerful wizards. And they didn’t even have to save the Earth from the Formics.
It’s still the culture that throws kids on a Hippogryff and tells them to get going.
And as Daphne notes in her thoughts, the children are standing in for their parents and speaking their parents’ orders; they are acting as spokespersons for their families, and the others are treating them as such.
I suspect that had more to do with Harry’s involvement than anything else.
"gung [crbcyr ehaavat guvatf] jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr" vf n ybg zber cynhfvoyr jura bar bs gurz vf n puvyq.
Oslo on IRC jokingly summarizing part of a debate:
This has the makings of a card game or something.
Comment
http://lesswrong.com/lw/d2w/cards_against_rationality/
At the Columbus megameetup, some people actually printed out a set of cards (as a stand-alone deck) and played the game. I don’t know which of two people has the source file, but I can find out...
Comment
Ugh, so the underscores marking italics thing also works within URLs? (OTOH the link does go to the right place.)
Some of my friends and I were already thinking about making something like this — good to see there is a good start available!
It does, doesn’t it...
All I have to say is, if someone actually makes this game, there has to be room for the awesomeness of quines. After all, "is an applause light" is an applause light, isn’t it?
Comment
"is an applause light" is actually a boo light not an applause light. However it is true that "is a boo light" is a boo light.
http://jsbin.com/ibebih/3
Comment
Who isn’t?
This thing is priceless.
Comment
Check out the discussion thread about the thing:
http://lesswrong.com/lw/egt/made_a_silly_meta_thing/
Comment
Thanks, it’s awesome. Arguably better than the actual lesswrong context, tbh. :-(
FTFY
Someone has been regularly downvoting every thing I’ve posted in the past couple months (not just a single karmassasination). I really don’t care about the karma (so please DO NOT upvote any of my previous posts in order to "fix" it), but I do worry that if someone is doing it to me, they are possibly doing it to other/new people and driving them off, so I wanted to point out publicly that this behaviour is NOT OKAY.
Anyways, if you have a problem with me, feel free to tell me about it here: http://www.admonymous.com/daenerys . Crocker’s Rules and all.
Comment
I’ve been getting an early downvote on my posts, too. I can afford it, but it does seem malicious.
Do I understand it correctly that the behavior you describe is "downvote every new comment from user X when it appears" (as opposed to "go to user X’s history and downvote a lot of their old comments at the same time")?
Because when hearing about karma assassinations, I always automatically assumed the latter form; only the words "early downvote" in Nancy’s comment made me realize the former form is also possible.
A possible technical fix could be to not display the user comment’s karma until at least three votes were made or at least one day has passed.
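A minimal sketch of what that display rule could look like (names and thresholds here are illustrative, not an actual LW API):

```python
from datetime import datetime, timedelta

def visible_karma(score, vote_count, posted_at, now,
                  min_votes=3, min_age=timedelta(days=1)):
    """Return the karma to display, or None while it should stay hidden:
    show the score only once at least `min_votes` votes have been cast
    or at least `min_age` has passed since posting."""
    if vote_count >= min_votes or now - posted_at >= min_age:
        return score
    return None  # the UI would render e.g. "karma hidden"

posted = datetime(2014, 1, 1, 12, 0)
print(visible_karma(-2, 1, posted, now=posted + timedelta(hours=2)))  # None
print(visible_karma(-2, 1, posted, now=posted + timedelta(days=2)))   # -2
```

This would blunt the "early downvote" attack specifically, since a lone first vote would never be visible on its own.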
Also, off-topic: Crocker’s Rules seem to be popular in our culture; maybe it would be nice to integrate them into the LW user interface. For example, a user could add their "anonymous feedback URL" in preferences, and a new icon "Reply Anonymously" would then be displayed below all the user’s comments and articles.
Comment
Not only that, but I’ve been getting the downvotes on my posts, not my comments. I wouldn’t call this karma assassination—maybe karma harassment.
Crocker’s Rules aren’t about anonymity.
Theoretically it might be useful for people to be able to set a visible flag "Talk to me under Crocker’s Rules"—but I suspect that it will immediately degenerate into a status sign.
Comment
If I declare Crocker’s Rules and you write something rude in a reply to me, other LW readers still see it. So even if I am perfectly okay with it (and I shouldn’t have declared CR otherwise), you might lose some status in the eyes of the observers who don’t properly evaluate the context of your reply.
If you send me a private message, we get rid of the observers. Unless I play dirty and later show the private message to someone else. Anonymous feedback would prevent me from doing so.
But yes, for 99% of cases, sending private message would be enough, anonymization is not needed. And we already have that option here.
Comment
Crocker’s Rules, as I understand them, are about efficient conveyance of meaning without the extra baggage of social niceties. They are not about the ability to express unpopular views without social consequences, which is where private messages or anonymity shine.
If you are concerned about observers misinterpreting the context you can always add a little [This post is under Crocker’s Rules] tag somewhere.
Crocker’s Rules are not directly about anonymity, no, but if you want to maximise your chances of receiving honest feedback, an anonymous contact method is valuable.
Random thought: I’ve long known that police can often extract false confessions of crimes, but I only just now made the connection to the AI box experiment. In both cases you have someone being convinced to say or do something manifestly against their best interest. In fact, if anything I think the false confessions might be an even stronger result, just because of the larger incentives involved. People can literally be persuaded to choose to go to prison, just by some decidedly non-superhuman police officers. Granted, it’s all done in person, so stronger psychological pressure can be applied. But still: a false confession! Of murder! Resulting in jail!
I think I have to revise downwards my estimate of how secure humans are.
Comment
Humans are extremely susceptible to arguments they have not been inoculated against. These arguments can be religious, scientific, emotional, financial, anything. One example is new immigrants from certain places falling for get-rich-quick scams in disproportionately large numbers (not so much anymore, since the knowledge has spread). Or certain LW regulars believing Roko’s basilisk. Or becoming vegan (not all mind hacking is necessarily negative).
I would conjecture that every single one of us has open ports to be exploited (some more so than others), and someone with a good model of you, be it a super-smart AI or a police negotiator, can manipulate you into willingly doing stuff you would never have expected to be convinced of doing before having heard the argument.
I can’t see why you claim it’s a stronger result. In the AI box experiment, the power is entirely in the gatekeeper’s hands; in an interrogation situation the suspect is virtually powerless. This distinction is important because even the illusion of having power is enough to make someone less susceptible to persuasion.
Plus, police don’t sit down with suspects in a chat room. They use ‘enhanced interrogation techniques’, methods such as an unfamiliar environment, threat of violence (or actual violence in some cases), and various other threats. An AI cannot do any of this to a gatekeeper unless the gatekeeper explicitly lets it out.
Comment
That’s all certainly true, but the AI box experiment is still a game at heart. The gatekeeper loses and he’s out, what, fifty bucks or something? (I know some games have been played, and I think won, with higher stakes, and those are indeed impressive.) The suspect "loses" and he’s out 20+ years of his life. It’s hard to make a comparison, but I think the two results are at least comparable, even with the power imbalance.
Actual people are also using a hell of a lot more than text.
Some LWers may be interested in a little bet/investment opportunity I’m setting up. I have become increasingly disgusted with what I’ve learned about the currently active Bitcoin+Tor black markets post-Silk-Road—specifically, with BlackMarket Reloaded & Sheep. I am also frustrated that customers are flocking to them, and they all seem absurdly optimistic. So, I am preparing to make a large public four-part escrowed bet with any comers on the upcoming demise of BMR & Sheep in the coming year, in the hopes that by putting money where my mouth is, I may shock at least a few of them into sanity and perhaps even profit off the more deluded ones.
The problem is, I feel I can afford to risk ฿1 ($200), but I’m not sure that this will be enough to impress anyone when split over 4 bets ($50 apiece). So I am willing to accept up to ฿1 in investments from anyone, to increase the amount I can wager. The terms are simple: whatever fraction of the bankroll you send, that’s your share of any winnings. If we bet ฿2 and you sent ฿1, then you get half the winnings if any. (I am not interested in taking any cut here.)
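The proportional-share rule above can be sketched in a few lines. This is a hypothetical illustration only; the `payout_shares` function and the investor names and amounts are made up, not part of the actual bet terms.

```python
# Minimal sketch of the proportional-share payout rule described above.
# Amounts are in BTC; names and figures are invented for illustration.

def payout_shares(contributions, winnings):
    """Split `winnings` among contributors in proportion to their stake."""
    bankroll = sum(contributions.values())
    return {name: winnings * stake / bankroll
            for name, stake in contributions.items()}

# gwern stakes 1 BTC, one investor stakes 1 BTC, and the bets return 0.5 BTC:
shares = payout_shares({"gwern": 1.0, "investor": 1.0}, winnings=0.5)
print(shares)  # each party receives half the winnings: 0.25 BTC apiece
```

With a ฿2 bankroll of which you contributed ฿1, your share is 1/2, exactly as in the example in the comment.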
My full writeup of the bet, with some statistics helping motivate the death probabilities I am betting based on: http://pastebin.com/bEuryTuF
If you are interested, you can reply here, or contact me at gwern@gwern.net, or we can chat on Freenode (as gwern or just visit #lesswrong). I am currently ignoring private messages on LW, so don’t do that.
Also, please don’t express interest unless you are genuinely fine with potentially losing your investment: given my best estimate of the probabilities & their correlations, there’s somewhere >10% chance that we would lose all 4 bets as both BMR & Sheep survive the full year.
EDIT: if you really want to get in, I’ll still take your bitcoins, but I think I have enough investors now, thanks everyone.
Comment
I will be price matching whatever gwern personally puts in.
The bet has gone live at http://www.reddit.com/r/SilkRoad/comments/1pko9y/the_bet_bmr_and_sheep_to_die_in_a_year/
Is that a per-person maximum, or are you only accepting up to that much worth of bets?
Edit: I have contacted gwern via IRC and invested 1 BTC.
Comment
That was a per-person limit; I may close it down soon, though (฿3 plus my own bitcoin and recent appreciation should be enough to impress people, and beyond that, I think there are diminishing returns).
Don’t you mean the chance of losing every bet?
If so, no way in hell those are conditionally independent. If not, what did you mean?
Comment
Yes.
Of course they are not conditionally independent, that’s why I gave it as a lower bound.
Specifically, I think we can agree that whatever the exact relationships, the failure of one bet will increase the chance of failure of all the others: if the 6-month Sheep bet fails, then the 12-month becomes more likely to fail, and to a smaller degree, the BMR ones become more likely to fail. And not the other way around. Hence independence is the best-case scenario, and so it’s the lower bound, and that’s why I wrote ">10%".
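The lower-bound argument can be made concrete with a small calculation. The per-bet survival probabilities below are invented for illustration (the real figures are in the linked writeup); the point is only the structure of the argument: positive correlation between market survivals means the true joint probability is at least the independent product.

```python
from math import prod

# Hypothetical probabilities that each market survives its betting window
# (i.e. that gwern loses that bet). These numbers are made up, not taken
# from the actual writeup: [Sheep 6mo, Sheep 12mo, BMR 6mo, BMR 12mo].
p_survive = [0.55, 0.60, 0.55, 0.60]

# Under (best-case) independence, the chance of losing all four bets is
# just the product. Positively correlated survivals can only push the
# true joint probability above this, so the product is a lower bound.
p_lose_all = prod(p_survive)
print(f"lower bound on losing all 4 bets: {p_lose_all:.4f}")  # 0.1089
```

With these illustrative numbers the independent product already exceeds 10%, matching the ">10%" figure quoted above.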
Comment
Ah, I see. I was confused by the ‘=’ sign.
Hmm, about 100 downvotes in the last couple of days, 1 per comment or so, suggest that someone here is royally pissed off at me. I wish I knew the reason. On the bright side, at least this forum provides some indication of a problem. When this happens to me IRL, I either never find out about it or deduce it months or years later based on second-hand information, rumors, or, in some cases, denied promotions/requests/opportunities. I wonder if this is a common experience? Situations like this are a significant reason why I would likely jump in with both feet if offered a chance to join a telepathic society.
Comment
Did you see that Daenerys and NancyLebovitz experienced a similar problem? It seems likely someone’s doing it systematically to several accounts.
Comment
Thanks, I missed that discussion.
Well, "this" is broad, but I expect that failing to notice enmity, and relatedly being unaware of consequent social attacks, is a pretty common experience, especially in "polite" social contexts (that is, ones in which overt expressions of conflict violate social norms).
"Crocker’s Rules" are an attempt to subvert this; you might find it useful to declare that you operate under them… though I would expect not… in cases like you describe I expect that the downvoter(s) will not wish to be identified.
I wish you luck in deciphering the reason(s).
Comment
As someone with no particular aptitude in general niceties, I always welcome Crocker’s rules, and mistakenly assume that others do, too.
My best (but still low-confidence) guess, based on the timing, is that an overly critical comment of mine may have been taken as overly harsh.
Comment
For what it is worth, I really liked your comment. Though I guess I’d be pissed (for a minute) if someone said it to me. I didn’t read the whole discussion, but she seemed pretty passionate about her views. When I get that way, nothing makes me angrier than someone (rightly) pointing out that I’m "too passionate" to discuss this clearly.
PubMed is allowing comments. Only people who have publications at PubMed will be permitted to comment. I predict that PubMed will find it needs human moderators.
Comment
PubMed’s comment system will have some form of human moderation before 2015.
People who have publications at PubMed can have passwords stolen.
I recently realized that I think the stuff I already know about the history of science, math, etc., is really inherently interesting and fascinating to me, but that I’ve never actually thought about going out of my way to learn more on the subject. Does anybody on here have one really good book on the subject to recommend? I’ve already read Science and the Enlightenment by Hankins.
Comment
The Copernican Revolution, by Kuhn is one of the best science histories I’ve ever read.
The folk-tale version of how we adopted heliocentric cosmology is something like this: "Aristotle and Ptolemy thought the world was arranged as concentric crystalline spheres. Copernicus proposed a new model that better fit the data, and it was opposed by the Church. Ultimately thanks to the Reformation and the Enlightenment, the correct model won out."
None of those claims is right, and Kuhn does a great job explaining the true story. He explains what problem Copernicus thought he was solving and how well he solved it.
Comment
I agree that it is a good book. But it helps to be aware that Kuhn substantially simplifies a lot of what is going on. See for example here and here.
Awesome! I loved Kuhn’s Structure of Scientific Revolutions, and it seems like an interesting subject, besides.
I second the recommendation of The Copernican Revolution, and suggest another book on the same topic: Arthur Koestler’s The Sleepwalkers.
Koestler was a great novelist (his best known novel, Darkness at Noon, rivals 1984 in its portrayal of totalitarian thought) and a brilliant, eclectic and sometimes bizarre thinker. The Sleepwalkers is a grand history of astronomy and cosmology from ancient times to Newton, with the bulk of the focus on Copernicus, Kepler and Galileo.
Pros: Fascinating and very detailed biographical information on these three figures (and others like Tycho Brahe), presented in a way that reads like a novel, indeed a page-turner. His biography of Kepler is especially unforgettable, very different from a dry academic presentation. The historical presentation is peppered with opinionated philosophical and even sociological detours.
Cons: unbalanced coverage of different topics, subjective and somewhat biased viewpoints. In particular, his interpretation of the relationship between Kepler and Galileo, and of Galileo’s dealings with the Church, is colored by what seems to be a strong personal dislike of Galileo. His interpretation of the reasons why the heliocentric model was rejected in ancient times is also unreliable.
As long as his interpretations are taken with a grain of salt (or balanced with a more objective presentation like Kuhn’s) I would definitely recommend it; it is the most enjoyable book on history of science I have read.
Comment
Could you elaborate?
Comment
According to him, the ancient heliocentric model of Aristarchus was clearly superior in simplicity and predictive power to the geocentric models of Ptolemy and others, and was abandoned for irrational reasons (religiously or ideologically motivated). From what I understand, the mainstream academic position is that, analyzed in context and without hindsight, the ancient rejection of the heliocentric theory was quite reasonable. Previous discussion in Less Wrong.
Comment
I think it is better to say that the rejection could have been reasonable (that we cannot rule out that possibility), not that we can rule out the possibility that it was unreasonable.
My interpretation is that Hipparchus was geocentric, perhaps for good reason, and everyone else was geocentric for the bad reason that Hipparchus had data, and data was high status, not because they were convinced by the data. In any event, his data does not rule out the distances Archimedes proposes in the Sand Reckoner, probably following Aristarchus. But I don’t think it is even really established that Hipparchus was geocentric, just that Ptolemy said so.
Update: Nope, history is bullshit. Hipparchus was not geocentric. Maybe Ptolemy said he was, but what did he know? Other ancient sources say that he refused to pick sides, not knowing how to distinguish the hypotheses. At the very least this shows that the heliocentric hypothesis was alive and well. Asking why they discarded it is the wrong question. Frankly, I’m with Russo: the heliocentric hypothesis was standard.
I really enjoyed The Nothing That Is by Robert Kaplan. It’s about the history of the concept (and the numeral) zero.
Comment
Possibly I should add that I read that when I was quite young (13ish?) and haven’t reread it since. It doesn’t contain anything remotely resembling advanced maths—it’s definitely about history and the philosophy of the concept. I obviously found it memorable, though, so while it’s possible the writing was so terrible I didn’t notice at 13, it’s unlikely.
I notice that the latest two posts from Yvain’s blog haven’t shown up in the "recent from rationality blogs" field. If this is due to a decision to no longer include his blog among those that are linked, I believe this to be a mistake. Yvain’s blog is in my view perhaps the most interesting and valuable among those that are/were linked. And although I am in no danger of missing his updates myself, the same might not be true of all LW readers that may be interested in his writing.
Comment
I think it is likely due to the political and controversial nature of those last two posts. I would be surprised if this was not the reason.
Having just got a Kindle Paperwhite, I’m surprised by (a) how many neat tricks there are for getting reading material onto the device, and (b) how under-utilised and hacky this seems to be. So far I’ve implemented a pretty kludgey process for getting arbitrary documents / articles / blog posts onto it, but I’m pretty sure there’s a lot of untapped scope for the intelligent assembly and presentation of reading material.
So, fellow infovores, what neat tips and tricks have you found for e-readers? What unlikely material do you consume on them?
Comment
I think I set up mutt (and presumably some other software) just so that I could email files to my kindle from the command line; and I have an instapaper bookmarklet to do the same with webpages. I haven’t used either very much recently, but that seems to pretty much cover my "getting content onto it" needs.
Comment
I have the same Instapaper bookmarklet. I’ve also set up Instapaper to forward a digest of all my Feedly content that I mark as "save for later". It turns out I only seem to use this feature for (a) incredibly long blog posts I probably shouldn’t be reading at work, and (b) highly NSFW blog posts I probably shouldn’t be reading at work. This makes for an interesting combination.
I’m fairly unsatisfied with the Kindle email document conversion, mainly because it doesn’t do anything intelligent with document metadata. As it happens, I’ve been playing around with automated document metadata extraction, so I might see if I can put together a clever alternative.
k2pdfopt. It slices up pdfs so that you can read them without zooming on a much narrower screen, and since its output pdfs are essentially images, it eats everything up to (and including) very math-heavy papers, regardless of the number of columns they have. It works with scanned stuff, too.
(And even though the output is a bit bigger than the originals, I didn’t encounter any problems with 600 page books… the result was about 50 megs tops.)
Readability can be set up to send articles to it, and/or do a daily collection. Feedly can send rss feeds to it.
The user interface of the Kindle is the real limitation; it’s fine for reading books/articles but pretty useless for going through large numbers of files.
Comment
I’ve been reminded of something Paul Graham said in his Frighteningly Ambitious Startup Ideas essay, about how email is becoming a grossly inefficient to-do list for most people, and how it could be worth instigating a whole new to-do protocol from the ground up, whose degenerate case would be the email equivalent of "to-do: read the following text".
So I’ve started looking through my emails to see what messages I receive which are essentially "read this text". It’s become quite apparent that there aren’t that many, and most of them are requests or suggestions to do something else online (one point for Paul Graham), but there are a few obvious examples where this does happen, such as event itineraries, e-tickets, boarding passes, etc. These tend to be de facto documents, though, so it’s not especially insightful.
Reflecting back on LessWrong’s past, I’ve noticed a pattern of article voting that seems striking to me: questions do not get upvoted nearly on the same order as answers do.
Perhaps it would be useful to have a thread where LessWrong could posit topics and upvote the article titles that it would be most interested in reading? For example, I am now drafting a post titled "Applying Bayes Theorem." Provided I can write high-quality content under that title, I expect LessWrong would be intensely interested in this on account of not fully grasping exactly how to do so.
So as a trial run: What topics currently elude your understanding, and what might the title of a high-quality article that addressed that topic be?
Comment
"Lower Bounds on Superintelligence". While a lot of LW content is carefully researched, much of what’s posted in support of the singularity hypothesis seems to devolve into just-so stories. I’d like to see a dry, carefully footnoted argument for why an intelligence that was able to derive correct theories from evidence, or generate creative ideas, much faster than humans would necessarily rapidly acquire the ability to eliminate all human life. In particular I’m looking for historical analogies, cases where new discoveries with important practical implications were definitely delayed not just due to e.g. industrial capacity, but solely through human stupidity.
"Trading with entities that are smarter than you". Given the ability of highly intelligent entities to predict the future better than you can, and deceive without outright lying, what kind of trades or bets is it wise to enter into with such entities? What kind of safeguards would you need to have in place?
"How to get a stupid person to let you out of a box". Along with, I think, many people who’ve never done it, I find the results of the AI-box experiment highly implausible. I can’t even imagine a superintelligent persuading me to let it out, or, equivalently, I can’t imagine persuading even someone very stupid to let me out. I know the most successful AI players are keeping their strategies secret for reasons I don’t understand (if nothing else, it seems to imply those strategies are exceedingly fragile), but if there’s anyone who has a robust strategy that’s even partially effective I’d be very interested to see it.
"From printing results to destroying all humans"—to me this is the weakest part of the MIRI et al case, and I think most objections we see are variants on this theme. It’s obvious that an oracle-like AI would have to interact with the universe in some sense. It’s obvious that an AI with unbounded ability to interact with the universe would most likely rapidly destroy all humans. It’s nonobvious that there is no possible way to code an AI that can reliably tell the difference between the two, and a solution to this problem naively seems rather more tractable than solving Friendliness in full generality. I’d like to see an exploration of this problem.
"When your gut won’t shut up and multiply." The recent downvoted discussion post seems to be in this area, suggesting the wider community is perhaps less interested than I, but I’d love to see some practical advice on effective decision strategies when one’s calculated best action is intuitively morally dubious, with anecdotes of the success or failure of particular approaches.
"Times when I noticed I was confused". In theory, noticing you’re confused sounds like an effective heuristic. But the explanation in the sequences only gave a retroactive example of when Eliezer should have applied it, and didn’t. I’d like to see more examples of when this has and hasn’t worked in practice, and useful habits to acquire that make you more likely to be able to notice.
Comment
Most of my examples here are trite individually, but significant collectively; that is, I remember the habit more easily than any particular examples. There have been situations where I had some niggling doubt, said "I’m confused, I ought to resolve this uncertainty," and after research concluded that I was wrong and by acting early I saved myself some hardship. But while I’m certain there have been at least three of those, I have trouble remembering them or thinking that the ones I do remember are worth sharing.
Comment
That’s the kind of position I see frequently here—but from the outside it’s very unconvincing. So I’d very much like to see concrete examples.
I almost got scammed today. I received a very official-looking piece of mail, "billing" me a few hundred bucks. Normally I would be able to see through it immediately, but this particular one caught me off guard. I am usually very good about being skeptical, and it disappointed me that I almost fell for it. What I think happened was that my familiarity heuristic was exploited.
I have business with a certain state, and it was familiar for me to receive correspondence from various agencies and pay all sorts of different fees. So when I got this letter in the mail, it didn’t raise any flags. I was curious to check online, not because I was suspicious but because I was annoyed that I wasn’t aware of this fee; that is when I discovered I was almost duped.
This isn’t a particularly new scam; I have heard of it before, but when it happened to me, I almost didn’t notice. What I learned from this whole thing is to be vigilant against letting my guard down to con artists who exploit the familiarity heuristic. I was so familiar with bills that I glanced over the small print indicating that "this is a solicitation". I might have received these scams before regarding my car payment or mortgage, but I was able to easily pick them out because I didn’t have car payments or a mortgage: obvious scam was obvious. But then I got hit right where I am familiar, and then it wasn’t so obvious.
Comment
The one time I’ve fallen for phishing was when I received an email purporting to be from my bank literally the day after I signed up for an account.
Interesting. Feel free to offer more details.
Comment
This is the letter. I was less careful than usual (I should have read through it), but because it had information about me and is consistent with what I might see on a normal basis, I let my guard down. I only attempted to check the fee schedules to see why I had missed something like this, all the while assuming that I probably did.
Comment
Wow, it does look very official. Without checking online, how is one supposed to know that there is no "Labor Compliance Office" in California?
What is ‘taste’ (as in, artistic taste)? And what differentiates ‘good taste’ from ‘bad taste’?
Comment
I suggest Taste for Makers and How Art Can Be Good by PG.
Comment
There are some interesting points in there, especially about the fact that most people make themselves like what seems ‘cultured’ (I’ve definitely seen this type of appeal to majority among my friends—I was nearly roasted alive when I mentioned I honestly don’t enjoy a particular classical composer).
There are also some fallacies in there too.
Anyway, the part where he talks about trickery is interesting:
I question this premise. It seems to imply that the purpose behind the art determines its quality, and not the art itself. For instance, if you have two identical paintings, but one was drawn with the intention of making money, and the other was drawn for true artistic merit, the latter one somehow has more value (and is thus of ‘better taste’) than the former.
At any rate, in the end that paragraph was the closest I got to his definition of ‘taste’ - the ability to recognize trickery in artistic works.
And especially this paragraph about people with good taste:
Finally,
While the insights presented are interesting (in providing a window into the author’s mind, at least), it has not actually succeeded in this purpose.
Comment
I think it’s just elliptic rather than fallacious.
Paul Graham basically argues for artistic quality as something people have a natural instinct to recognize. The sexual attractiveness of bodies might be a more obvious example of this kind of thing. If you ask 100 people to rank pictures of another 100 people of the opposite sex by hotness, the rankings will correlate very highly even if the rankers don’t get to communicate. So there is something they are all picking up on, but it isn’t a single property. (Symmetry might come closest but not really close, i.e. it explains more than any other factor but not most of the phenomenon.)
Paul Graham basically thinks artistic quality works the same way. Then taste is talent at picking up on it. For in-metaphor comparison, perhaps a professional photographer has an intuitive appreciation of how a tired woman would look awake, can adjust for halo effects, etc., so he has a less confounded appreciation of the actual beauty factor than I do. Likewise someone with good taste would be less confounded about artistic quality than someone with bad taste.
That’s his basic argument for taste being a thing and it doesn’t need a precise definition, in fact it would suggest giving a precise definition is probably AI-complete.
Now the contempt thing is not a definition, it is a suggested heuristic for identifying confounders. To look at my metaphor again, if I wanted to learn about beauty-confounders, tricks people use to make people they have no respect for think woman are hotter than they are (in other words porn methods) would be a good place to start.
This really isn’t about the thing (beauty/artistic quality) per se, but more about the delta between the thing and the average person’s perception of it. And that actually is quite dependent on how much respect the artist/"artist" has for his audience.
Is there research on the benefits of yoga compared to meditation, anaerobic exercise and aerobic exercise? Or any subset of these for that matter.
Comment
Google is your friend, but keep in mind that "yoga" is an umbrella term for a large variety of exercises. In particular, yoga as an Indian discipline aimed at reaching moksha, the liberation from the reincarnation cycle, is rather different from yoga as practiced in the West with the goal of losing 10 lbs.
Comment
I would add that the same thing goes for meditation, anaerobic exercise and aerobic exercise as well. All those terms include a lot of different activities.
O.o
(Anyway, I’m surprised that I’m surprised—I know people do even weirder things to lose weight.)
(BTW: I do do yoga, but more for fun than for any of its practical benefits, which could be achieved in more cost-effective ways.)
I saw one study that indicated that meditation did not lower blood pressure, refuting earlier studies, but that yoga did. Can’t find it now however. The wikipedia page on meditation research might be useful. also this
What kinds of benefits are you looking for? It seems likely they don’t optimize the same things.
Most "predictions of evolution" that can be found online are more about finding past evidence of common descent (e.g. fossils) rather than predicting the future path that evolution will take. To apologize for that, people say that evolution is hard to predict because it’s directionless, e.g. it doesn’t necessarily lead to more complexity, larger number of individuals, larger total mass, etc. That leads to the question, is there some deep reason why we can’t find any numerical parameter that is predictably increased by evolution, or is it just that we haven’t looked hard enough?
Comment
Plenty of people predict that increased antibiotic use will lead to a rise in antibiotic resistance among bacteria.
Organisms like bacteria that have many more iterations behind them than humans also tend to have less waste in their DNA.
Grasses beat trees at growing in glades with animals that eat plants. Why? Grass has more iterations behind it and is therefore better optimized for the environment than the trees.
A tree has to get lucky to survive the beginning. If it survives the beginning, it can however grow tall and win.
Let’s say you keep the environment stable for 2 billion years. Everything evolves naturally. Then you take tree seeds and bring them back to the present time. I think there’s a good chance that such a tree would outcompete grass at growing in glades.
Fossils don’t really get used as the central evidence of common descent anymore. These days common descent usually gets determined by looking at the DNA. In my experience, people who discuss evolution online and focus on fossils are usually atheists who behave as if their atheism is a religion. They think it’s important to defend Darwin against the creationists. On the other hand, they aren’t up to date with the current science on evolution.
Comment
Grasses beat trees at growing in glades with animals that eat plants. Why? Grass has more iterations behind it and is therefore better optimized for the environment than the trees.
You seem to be predicting that grasses have smaller genomes than trees, but wheat is famous for having a huge genome. Here’s a table of a few plants. Maybe wheat is an outlier and I’d be interested if you had documentation of some pattern, but I’ve always heard that there is none.
Comment
If you want to be exact, I didn’t say genome size but waste: genes inactivated through mutation, retroviruses and so on. It takes time to remove them.
Comment
Do you have evidence that the variation in genome size among multicellular organisms is not variation in waste? Added: As far as I know, the consensus is that it is. If you disagree with the consensus, you should acknowledge that’s what you’re doing.
Comment
I haven’t made a claim that strong. To the extent I made a claim, it’s that not all variation in genome size between multicellular organisms is due to different amounts of waste.
And no, I don’t intend to claim something that’s out of consensus on this topic. To the extent I differ from the consensus here, consider those to be errors.
If I remember right, one reason for plants like grasses to have long genomes is to have multiple copies of genes to speed up protein production.
What do you mean, "predict"? It has been empirically observed, a lot.
Huh? It doesn’t work like that at all. For one thing, the "environment" isn’t stable.
Comment
cousin made the claim that we can only say something about evolution that happened in the past. I say that we can confidently predict that increasing antibiotic resistance among bacteria will continue in the future.
Firstly, describing a complex system in a few words is seldom completely accurate. The question is whether it’s a useful mental model for thinking about it. In this case the idea I wanted to communicate is that it’s very useful to think about the speed of iterations and the competitive advantage that a species gets by having an advantage of hundreds of millions of iterations over its competitors.
The environment doesn’t have to be stable for the argument that I made. In changing environments a species with faster iterations adapts faster. A lot of genetic adaptations are also about housekeeping genes that are useful in most environments.
Evolution leads to a higher level of fitness in the environment, but the problem is that the environment itself is constantly changing in unpredictable ways. It’s like an optimization process where the utility function itself is constantly changing. That’s why it’s very hard to reliably quantify fitness. For instance, billions of years ago, the increase in oxygen in the atmosphere killed a lot of existing organisms and forced aerobic bacteria on to the scene.
Replies to comments that attempted to point out a numerical parameter that’s increased by evolution. (I’d be more interested in comments pointing out a deep reason why we can’t find such a numerical parameter, but there were no such comments.)
lmm:
That’s been steady for awhile now.
ChristianKl:
Evolution can both add and remove junk DNA. Humans are descended from bacteria.
David_Gerard:
That can’t decrease by definition, and will increase under any mechanism that gives nonzero chance of speciation, e.g. if God decides to create new species at random.
Lumifer:
That seems to be contradicted by the possibility of evolutionary suicide.
Humans don’t have more offspring than bacteria in average conditions, and have much fewer offspring in ideal conditions.
Comment
More particularly, the equilibrium size of the DNA is very roughly inversely correlated with population size. A larger population size is better at filtering out disadvantageous traits. It’s not linear—there are discontinuities as decreasing population size eliminates natural selection’s ability to select against different things. And those things sometimes can even go on to be selected for for other reasons—there are genomic structures that are important for eukaryotes that could probably never have evolved in a bacterium, because to get to them you need to go through various local minima of fitness.
Soil bacteria can have trillions of individuals per cubic meter of dirt and they actually experience direct evolution towards lower genome size—more DNA means more sites at which something could mutate and become problematic and they actually feel this force. Eukaryotes go up in volume by a factor of ~1000 and go down in population by at least as much, and lose much of the ability to select against introns and middling amounts of intergenic DNA and expanding repeat-based centromere elements.
Multicellular creatures with piddlingly tiny population sizes compared to microbes lose much of the ability to select against selfish transposon DNA elements, gigantic introns and gene deserts, and their promoter elements get fragmented into pieces strewn across many kilobases rather than one compact transcriptional regulation element of a few dozen to a few hundred base pairs (granted, we’ve also been able to make good use of some of these things for interesting purposes from our adaptive immune system to the concerted regulation of our hox gene clusters that regulate our body plans). They also become very sensitive to the particular character of the transposons or DNA repair machinery of their particular lineage and wind up random-walking like crazy up and down an order of magnitude or two in genome size as a result.
Comment
Thanks! I was hoping you’d show up, it’s always nice to get a lesson :-)
Going back to the original question, are there any "general purpose adaptations" that never disappear once they show up? Does evolution act like a ratchet in any way at all?
Comment
Closest thing I can think of from what I know without going through literature is the building up of chains of dependencies. Once you have created a complex system that needs every bit to function, it has a tendency to stay as a unit or completely leave.
You can see that in a couple contexts. One is ‘subfunctionalization’. Gene duplications are fairly common across evolution—one gene gets duplicated into two identical genes and they are free to evolve separately. You usually hear about that in the context of one getting a new function, but that’s actually comparatively rare. Much more likely is both copies breaking slightly differently until now both of them are necessary. A major component of the ATP-generating apparatus in fungi went through this: a subunit that is elsewhere composed of a ring of identical proteins now has to be composed of a ring of two alternating almost identical proteins neither of which can do the job on its own. Ray-finned fish recently went through a whole-genome duplication, and a number of their developmental transcription factors are now subfunctionalized such that, say, one does the job in the head end and the other does its job in the tail end.
Another context is the organism I work in, yeast. I like to call yeast "a fungus that is trying its damndest to become a bacterium". It lives in a context much like many bacteria and it has shrunk its genome down to maybe 2.5x that of an E. coli and its generation time down to 90 minutes. But it still has 40 introns hanging out in less than 1% of its genes so it needs a fully functional spliceosome complex to be able to process those transcripts lest those 40 genes utterly fail all at once, and it has most of the hallmarks of eukaryotic genome structure and regulation (in a neat, smaller, more research-friendly package). That being said it has lost a few big eukaryotic systems, like nonsense-mediated RNA decay and RNA interference, and they left relatively little trace behind.
Sure, but mostly because evolution’s so good at it. The fact that evolution so quickly filled a tidal pool, so quickly filled all the tidal pools, so quickly filled the oceans, so quickly covered the land, is evidence of strength rather than weakness.
There does seem to be a "punctuated equilibrium" effect here; life fills a region, appears static for a while, but then makes a breakthrough and rapidly fills another region. It could be argued that this is also true of things that humans optimize for: human population growth has abruptly rapidly accelerated at least twice (invention of agriculture, industrial revolution). Slavery was everywhere in the ancient world, then eliminated across most of it in the space of a century. Gay marriage went from hopefully-it-will-happen-in-my-lifetime to anyone who opposed it being basically shunned. Scientific and technological breakthroughs tend to look a lot like this.
Generalizing this to all optimization processes would be very speculative.
From bacteria that lived a long time ago, not from those that live today, which have had many iterations to optimize themselves. Bacterial species can also exchange genes with each other much more readily than vertebrates, which need viruses to do so.
Implying that humans evolved from the kinds of bacteria that are around today might be more wrong than saying that the bacteria we see now evolved from humans. There is more evolutionary distance between today’s bacteria and the bacteria from which humans descended than between humans and those ancestral bacteria.
Comment
Yeah, and there are often bacteria in a single flower pot that are less related to each other than you are to the potted plant. But both bacteria still have a much smaller genome than you or the plant, maybe because genome size matters for reproduction speed for them, but is insignificant for us.
Just apply Occam.
Possibility wouldn’t contradict anything, a high enough probability would.
Evolutionary suicide seems to be someone’s theoretical idea. Is there any evidence that it happens in evolution in reality?
In any case, are you basically trying to find the directionality of evolution? On a meta level higher than "adapted to the current environment"? There probably isn’t. Evolution is a quite simple mechanism, it just works given certain conditions. It is not goal-oriented, it’s just how the world is.
However if I were forced to find something correlated with evolution, I’d probably say complexity.
Comment
Species of nightshade tend to evolve to become self-fertile, before dying out due to lack of genetic diversity.
Comment
Is this your source?
Link? Lots of plants are self-fertile and do quite well...
Better example: parthenogenetic lizard species.
Comment
What makes that example better?
Comment
Damn it. It was going to be a better example because I was going to give the actual genera (Aspidoscelis and Cnemidophorus) of whiptail lizards whose species keep going down this path and then I got distracted and didn’t do that. Oops.
This doesn’t seem to be the case either.
Comment
Depends on your time frame. Looking at the whole history of life on Earth evolution certainly correlates with complexity, looking at the last few million years, not so much.
I understand the argument about the upper limit of genetic information that can be sustained. I am somewhat suspicious of it because I’m not sure what will happen to this argument if we do NOT assume a stable environment (so the target of the optimization is elusive, it’s always moving) and we do NOT assume a single-point optimum but rather imagine a good-enough plateau on which genome could wander without major selection consequences.
But I haven’t thought about it enough to form a definite opinion.
Complexity in what way? Kolmogorov complexity of the DNA?
Comment
No, complexity of the phenotype.
Comment
How would you go about measuring that complexity?
Comment
I don’t know. Eyeballing it seems to be a good start.
Why do you ask? Do you think that such things are unmeasurable or there are radically different ways of measuring them or what?
Comment
I have a hard time trying to form a judgement about whether a human is more or less complex than a dinosaur via eyeballing.
Is a grasshopper more or less complex than a human?
Comment
Well, would you have problems arranging the following in the order of complexity: a jellyfish, a tree, an amoeba, a human..?
Comment
Yes.
I think you just don’t give an amoeba much credit because it’s not a multicellular organism. Its genome is 100–200 times the size of the human genome. Because it’s that big, we haven’t sequenced all of it, so we don’t know how many genes it has.
We also know very little about amoebas. Genetic analysis suggests that they do exchange genes with each other in some form, but we don’t know how.
Amoeba probably express a lot of stuff phenotypically that we don’t yet understand.
Sabre-toothed tigers and mammoths.
Comment
Huh? Sense make not.
Why should there be a numerical parameter predictably increased by evolution? Why not look for a numerical parameter predictably increased by continental drift? or by prayer? by ostriches?
Comment
One of the key pieces of justification for FAI is the idea of "optimization process". Evolution is given as an example of such process, unlike continental drift or ostriches. It seems natural to ask what parameter is optimized.
Comment
Just FYI, I interpret that question very differently than your original.
Why don’t you start with a simpler example, like a thermostat? Would you not call that an optimization process, minimizing the difference between observed and desired temperature?
Most of your rejections of suggestions in this thread would also reject the thermostat. An ideal thermostat keeps the temperature steady. Its utility function never improves, let alone monotonically. A real thermostat is even worse, continually taking random steps back. In extreme weather, it runs continually, but never gets anywhere near its goal. It only optimizes within its ability. Similarly, evolution does not expand life without bound, because it has reached the limit of its ability to exploit the planet. This limit is subject to the fluctuations of climate. But the main limit on evolution is that it is competing with itself. Eliezer suggests that it is better to make it plural, "because fox evolution works at cross-purposes to rabbit evolution." I think most teleological errors about evolution are addressed by making it plural.
Also, thermostats occasionally commit suicide by burning down the building and losing control of future temperature. (PS—I think the best example of evolutionary suicide is genes that hijack meiosis to force their propagation, doubling their fitness in the short term. I’ve been told that ones that are sex-linked have been observed to very quickly wipe out the population, but I can’t find a source. Added: the phrase is "meiotic drive," though I still don’t have an example leading to extinction.)
Inclusive reproductive fitness.
Comment
Do you mean to say that the expected inclusive fitness of a randomly selected creature from the population goes up with time? Well, if we sum that up over the whole population, we obtain the total number of offspring—right? And dividing that by the current population, we see that the expected inclusive fitness of a randomly selected creature is simply the population’s growth rate. The problem is that evolution does not always lead to >1 population growth rate. Eliezer gave a nice example of that: "It’s quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case the sustainable wolf population will decrease as new wolves replace old."
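The step from "expected inclusive fitness" to "population growth rate" can be checked with a toy calculation; the offspring counts below are invented purely for illustration:

```python
# Toy check: mean offspring per individual equals the population growth factor.
# The offspring counts here are invented numbers, not real data.
offspring_counts = [3, 1, 0, 2, 2]  # offspring of 5 individuals

total_offspring = sum(offspring_counts)               # next generation's size: 8
mean_fitness = total_offspring / len(offspring_counts)  # 1.6 offspring/individual

# The growth factor (next generation / current generation) is the same quantity,
# so the population grows iff mean fitness exceeds 1.
print(mean_fitness)  # 1.6
```

This makes the wolf example concrete: if the better hunters support fewer individuals overall, the mean can drop below 1 even as each wolf gets "fitter" in the colloquial sense.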
While I don’t know of any simple or convenient numerical parameter, I’d note that we do have some handy non-retrospective pieces of evidence for evolution by natural selection, such as the induced occurrence of evolutionary benchmarks such as multicellularity.
In general, there are some adaptations which are highly predictable under certain circumstances, but there may not be any sort of meaningful measure we can use for evolution of organisms over time which aren’t a function of their relationship with their environment.
I think whatever numerical parameter evolution raised generally (though not always) with respect to its environment would have to do with meaningful complexity, however that can be numerically expressed, and with local decreases in entropy. Design would cause those too, but hypothesizing it would violate Occam’s razor.
Different environments and different substrates for mutation cause different kinds of evolutions.
Comment
One main thing that happens with a long enough period of selection in a simple, stable environment on a microorganism is a shrinking of the genome.
You quite simply will not find a simple parameter perpetually increased by evolution. Whatever works better for that base organism in that particular environment will become more common. One thing being selected for under all circumstances and showing up all the time is just not the reality.
The chances of successful transmission of genes across generations given a stable environment. The number of offspring surviving to reproductive age is a good first-order approximation.
If you want something more tangible, predicting which features evolution would lose is rather easy—those that are (energy-)expensive and useless in the new environment.
There have been plenty of evolutionary simulations, and surely they provide some testable predictions. I vaguely recall one of them: that new adaptations tend to propagate first in small isolated groups and only then spread through the rest of the species. I don’t recall if this has been tested against the fossil record. I am sure there are many more testable predictions, like how fish locked in a dark cave or murky water tend to lose eyesight. But the exact path is probably too hard to predict. For example, marine mammals did not develop gills. Or: mammals develop intelligence by growing the neocortex, while birds use the DVR (dorsal ventricular ridge), or maybe the nidopallium, for the same purpose.
Total number of species (including extinct).
Life "wants" to spread, so perhaps an increase in the volume in which life can be found?
Newly created islands may have "weird" biospheres initially, but evolve towards a more "normal" set of niches over time?
Comment
But why would life get more optimal? Evolution has finite optimization power, and it long ago reached that limit.
Comment
Huh? Even if you accept the estimates that your link points to, the amount of information in mammalian genome and optimization power of evolution are VERY different things.
Comment
How do you figure?
If you can narrow down the number of possible lifeforms to one in 2^n, that’s n bits of optimization power, and n bits of information as to what the final lifeform is.
If life is getting more and more optimal, then we can simply wait until we know that less than one in 2^25 million lifeforms are that optimal, and we have more than 25 megabytes of information as to what that lifeform is.
Comment
You go and wait. I’ll do other things in the meantime :-) Do you have any intuition how large that number is?
You’ve spent all that 25Mb for an index into the lifeform space but you have not budgeted any information for the actual description of the lifeform.
Imagine the case where there’s one bit. It tells you whether creature-0 or creature-1 is optimal. But it doesn’t tell you what these creatures are.
In any case, all these numbers are based on the resistance of Earth mammals to genetic drift. That really doesn’t limit how evolution can optimize with different creatures in different places.
Comment
It’s not going through them one at a time.
It’s not a simple English description, but narrowing down the possibilities by a factor of two is always one bit of information. It doesn’t matter whether it’s "the first bit is one", "the xor of all the bits is one" or even "it’s a hash of something starting with a one using X algorithm, which is a bijection".
It’s the one with a higher inclusive genetic fitness. That’s what evolution optimizes for.
If evolution has n bits of optimization power, that’s equivalent to saying that if you order all possible lifeforms based on how optimal they are, this is going to be in the top 1/2^n of them. (It’s actually somewhat more complicated, since it’s more likely to be higher up and there’s some chance of it being lower, but that’s the basic idea.)
It does vary based on what lifeform you’re looking at, since they all have different mutation rates and different numbers of children, but there’s always a limit to the information, and I’m pretty sure that it’s pretty much always a limit that’s already been hit.
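The "top 1/2^n" definition of optimization power above can be stated as a one-line formula; this sketch just makes the arithmetic explicit (the function name and example numbers are mine, not from the thread):

```python
import math

# Optimization power as defined above: if an outcome lands in the top
# fraction p of all possibilities (ranked by the preference ordering),
# that is -log2(p) bits of optimization.
def optimization_bits(rank, total):
    """rank = 1 means the single best outcome out of `total` possibilities."""
    return math.log2(total / rank)

print(optimization_bits(1, 1024))    # top 1 of 1024 -> 10.0 bits
print(optimization_bits(512, 1024))  # top half -> 1.0 bit
```

Note this measures only how far up the ranking the outcome sits, which matches the earlier point that n bits of optimization is an index into the ranked space, not a description of the lifeform itself.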
Comment
By my calculations, if you had the entire earth’s surface covered by a solid meter-thick layer of bacteria for 4.6 billion years and each bacterium lived for 1 hour, that would be approximately 2^155 bacteria having lived and died.
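The 2^155 figure can be sanity-checked. The cell size and packing density below are my assumptions (roughly one tightly packed cell per cubic micrometre); with them, the estimate comes out around 2^154, agreeing with the quoted number to within the rounding in the assumptions:

```python
import math

# Rough sanity check of the "2^155 bacteria" estimate.
# Assumptions (mine): ~1 cubic micrometre per bacterium, tightly packed.
EARTH_SURFACE_M2 = 5.1e14      # Earth's surface area, square metres
LAYER_THICKNESS_M = 1.0        # solid one-metre layer
CELLS_PER_M3 = 1e18            # one cell per cubic micrometre
YEARS = 4.6e9
HOURS_PER_YEAR = 365.25 * 24   # one generation per hour

cells_at_once = EARTH_SURFACE_M2 * LAYER_THICKNESS_M * CELLS_PER_M3
generations = YEARS * HOURS_PER_YEAR
total_cells = cells_at_once * generations

print(math.log2(total_cells))  # ~153.9, i.e. on the order of 2^154
```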
You can massively increase genetic information (inasmuch as that actually means much in biology) very quickly with very simple genetic changes. It’s not a case of searching through every possible 1 bit change.
Provided, of course, that your space of possibilities is finite and you know what it is. In the case of evolution you don’t.
I don’t understand what does "all possible lifeforms" mean. Does not compute.
Which limit? The limit of information in the mammalian genome? Or the limit of evolution—that whatever exists is the pinnacle and no better (given the same environment) can be achieved?
Something like "humans will have larger skulls and smaller teeth"?
Comment
But we know that isn’t true.
Brienne Strohl mentioned a website called Gingko on facebook which allows you to write documents in the form of nested trees.
I’ve been playing around with it today and found it very useful; being able to write ideas out in a disordered way seems to get around some of my perfectionism issues and stop me procrastinating. The real test is whether I continue to use it in the future—I’ll try to check back in a month or so.
After doing Lumosity exercises for a bunch of days, I find that my speed/concentration scores are below 1000 (1000 is supposed to be average), while memory is at 1460 and problem solving at 1360.
I’m familiar with the discussion around fluid intelligence but what do we know about raising speed? Do we know how to conduct training to improve it?
Comment
When did you start—recently? I may be wrong, but I think average scores are matched to your peers regardless of time spent on the game. So if you just started the exercises, your score is being compared to everyone’s scores, even those of people who have been learning to play that particular game for a long time.
Comment
In case you are interested in the scores: at present I have 241 Lumosity points, earned over the last month.
I used Lumosity in the past with a different account, probably 2 years ago. I think I might have gotten 500 points back then.
I use the free version. I have other experience with speed tests that also suggests that I’m relatively weak in that area.
PROFILE: Chemotherapy adjuvant specifically designed for glioblastomas of neuronal origin. By mimicking natural neural differentiation factors, it causes these tumors to regress from resilient high-grade neuroblasts towards more typical neurons, making them easy targets for stronger chemotherapeutic agents.
BANNED BECAUSE: During the differentiation process, malignant nerve cells form connections to healthy nerve cells and to each other. As a result, the tumor forms a functioning neural network effectively "telepathically" connected to the healthy brain. Patients report feelings of overwhelming guilt as the tumor accesses the patient’s memories and emotions and realizes its role as a parasitic cancer, followed by its utter terror as it realizes it is about to be killed. Many patients refuse to continue with the chemotherapy regimen; those who continue make a complete physical recovery but are psychologically scarred for life, as they experience every moment of the tumor’s death as if it were their own.
My favorite item in the Yvain’s list of fictional banned drugs.
A response to Aaron Freeman’s "You Want a Physicist to Speak at Your Funeral."
If I had a physicist speak at my funeral, I would hope that he would talk about a lot more than the conservation of energy. I don’t particularly care about what happens to my energy.
If I am lucky, he will speak about relativity. My family will probably have the mistaken intuition that only things in the present are truly real. Teach them about spacetime. They need to know that time and space are connected—that me being in the past is just like me being far away. The difference is that we will only have one-way communication. Even if they will no longer be able to talk to me, I will still talk to them through memories.
If I am not so lucky, he will speak about quantum mechanics. If I die young, my family will be grieving over the potential future I have lost. Teach them about many worlds. They need to know that our world is constantly splitting—that just before I died, the world split off a different future in which I am still alive. There is another world, just as real as our own, in which I survive. This world will even interact with our own in very tiny ways.
I want a physicist to speak at my funeral. I want everyone to understand that my continued existence is way more verifiable than a religious afterlife and way more substantial than a simple conservation of energy.
Comment
Upvoted since it’s a little harsh for ‘us’ to tell someone that something is better suited for open thread and then to downvote it without explanation when it goes there...
Comment
Genuinely (if admittedly idly) curious: if this was your only reason for upvoting, do you now feel like you should retract your upvote since the comment would no longer be net-downvoted without it?
What work has been done with the causality/probability of ontological loops? For example, if I have two boxes, one with a million dollars in it, and I’m given the option to open one of them and then go back to change what I did (with various probabilities for choice of box, success of time travel, and so on), is there existing literature telling me how likely I am to walk out with a million dollars?
Obviously the answer will change depending on which version of time travel you use (invariant, universe switching, totally variant, etc.)
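Short of a worked-out causal theory, one can at least simulate a single assumed model. The sketch below picks one such model (if the first box is empty, attempt one trip back and, on success, open the other box); every probability in it is an invented parameter, not anything from the literature:

```python
import random

# Toy Monte Carlo for the two-box question under ONE assumed time-travel model:
# the traveller retries once if the first box is empty and the trip succeeds.
# Both probabilities below are invented parameters for illustration.
P_PICK_MONEY_FIRST = 0.5   # chance the first pick is the money box
P_TRAVEL_SUCCESS = 0.8     # chance the trip back through time works

def one_run(rng):
    if rng.random() < P_PICK_MONEY_FIRST:
        return True                      # got the money on the first try
    return rng.random() < P_TRAVEL_SUCCESS  # box empty: go back and switch

rng = random.Random(0)
trials = 100_000
wins = sum(one_run(rng) for _ in range(trials))
print(wins / trials)  # analytically 0.5 + 0.5 * 0.8 = 0.9
```

Under a different model (say, universe-switching where the "original" you keeps the empty box), `one_run` would change but the same simulation scaffolding applies.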
Comment
A good place to start for this might be Scott Aaronson’s lecture on Time Travel from his "Quantum Computing Since Democritus" course.
Some HPMOR speculation Spoilers up to current chapter. After writing this, I checked the last LessWrong thread on HPMOR, and at least one component of this has already been noticed by other people, but others have not been, I think.
Comment
I was disappointed in the last chapter, gung nqhygf jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr runs contrary to common sense and to the rest of the book.
Comment
Yeah, cause that never happens in canon.
I think wizard culture has some different ideas from your culture.
Comment
Sorry, I was holding your fic to higher standards of believability of human behavior than canon’s.
I must be missing something, because even Harry had trouble being taken seriously by most adults for most of the story, and no other (first-year) children were anywhere near his level. Yet suddenly so many of them seem to be taken seriously by their relatives and by all the most powerful wizards. And they didn’t even have to save the Earth from the Formics.
Comment
It’s still the culture that throws kids on a Hippogryff and tells them to get going.
And as Daphne notes in her thoughts, the children are standing in for their parents and speaking their parents’ orders; they are acting as spokespersons for their families, and the others are treating them as such.
Comment
*Hippogriff
Which part would you never do if you (as board member) were righteously angry at Dumbledore?
Comment
I’d never let a child do the public announcement of my decision.
Comment
Why not, if they could do it? This seems a foolish rejection of a class of tools. See Malala Yousafzai.
I suspect that had more to do with Harry’s involvement than anything else. "gung [crbcyr ehaavat guvatf] jbhyq frg nfvqr gurve pbaivpgvbaf naq cerwhqvprf naq yrg puvyqera cynl n fvtavsvpnag ebyr" vf n ybg zber cynhfvoyr jura bar bs gurz vf n puvyq.
We’re a day out—this should be Oct 21-27. Next one: Oct 28-Oct 35. (cough)
Comment
When I posted, it was still the 20th in my timezone, so that’s what I went with.