One open question in AI risk strategy is: Can we trust the world’s elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
- Otherwise smart people say unreasonable things about AI safety.
- Many people who believed AI was around the corner didn’t take safety very seriously.
- Elites have failed to navigate many important issues wisely (2008 financial crisis, climate change, Iraq War, etc.), for a variety of reasons.
- AI may arrive rather suddenly, leaving little time for preparation.
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don’t actually endorse this argument):
- If AI is preceded by visible signals, elites are likely to take safety measures. Effective measures were taken to address asteroid risk. Large resources are devoted to mitigating climate change risks. Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change. Availability of information is increasing over time.
- AI is likely to be preceded by visible signals. Conceptual insights often take years of incremental tweaking. In vision, speech, games, compression, robotics, and other fields, performance curves are mostly smooth. "Human-level performance at X" benchmarks influence perceptions and should become more exhaustive and arrive more rapidly as AI approaches. Recursive self-improvement capabilities could be charted, and are likely to be AI-complete. If AI succeeds, it will likely succeed for reasons comprehensible to the AI researchers of the time.
- Therefore, safety measures will likely be taken.
- If safety measures are taken, then elites will navigate the creation of AI just fine. Corporate and government leaders can use simple heuristics (e.g. Nobel prizes) to access the upper end of expert opinion. AI designs with easily tailored tendencies to act may be the easiest to build. The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI." Arms races are not insurmountable.
The basic structure of this ‘argument for hope’ is due to Carl Shulman, though he doesn’t necessarily endorse the details. (Also, it’s just a rough argument, and as stated is not deductively valid.)
Personally, I am not very comforted by this argument because:
- Elites often fail to take effective action despite plenty of warning.
- I think there’s a >10% chance AI will not be preceded by visible signals.
- I think the elites’ safety measures will likely be insufficient.
Obviously, there’s a lot more for me to spell out here, and some of it may be unclear. The reason I’m posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.
In particular, I’d like to know:
- Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, the Spanish flu, the 2008 financial crisis, and large wars.
- What are some good resources (e.g. books) for investigating the relevance of these analogies to AI risk (for the purpose of illuminating elites’ likely response to AI risk)?
- What are some good studies on elites’ decision-making abilities in general?
- Has the increasing availability of information in the past century noticeably improved elite decision-making?
What does RSI stand for?
Comment
"Recursive self-improvement."
Okay, I’ve now spelled this out in the OP.
Lately I’ve been listening to audiobooks (at 2x speed) in my down time, especially ones that seem likely to have passages relevant to the question of how well policy-makers will deal with AGI, basically continuing this project but only doing the "collection" stage, not the "analysis" stage.
I’ll post quotes from the audiobooks I listen to as replies to this comment.
Comment
From Watts’ Everything is Obvious:
Nor is it just forecasters of long-term social and technology trends that have lousy records. Publishers, producers, and marketers—experienced and motivated professionals in business with plenty of skin in the game—have just as much difficulty predicting which books, movies, and products will become the next big hit as political experts have in predicting the next revolution. In fact, the history of cultural markets is crowded with examples of future blockbusters—Elvis, Star Wars, Seinfeld, Harry Potter, American Idol—that publishers and movie studios left for dead while simultaneously betting big on total failures. And whether we consider the most spectacular business meltdowns of recent times—Long-Term Capital Management in 1998, Enron in 2001, WorldCom in 2002, the near-collapse of the entire financial system in 2008 — or spectacular success stories like the rise of Google and Facebook, what is perhaps most striking about them is that virtually nobody seems to have had any idea what was about to happen. In September 2008, for example, even as Lehman Brothers’ collapse was imminent, Treasury and Federal Reserve officials — who arguably had the best information available to anyone in the world — failed to anticipate the devastating freeze in global credit markets that followed. Conversely, in the late 1990s the founders of Google, Sergey Brin and Larry Page, tried to sell their company for $1.6M. Fortunately for them, nobody was interested, because Google went on to attain a market value of over $160 billion, or about 100,000 times what they and everybody else apparently thought it was worth only a few years earlier.
Comment
More (#1) from Everything is Obvious:
Given how different these methods were, what we found was surprising: All of them performed about the same. To be fair, the two prediction markets performed a little better than the other methods, which is consistent with the theoretical argument above. But the very best performing method—the Las Vegas Market—was only about 3 percentage points more accurate than the worst-performing method, which was the model that always predicted the home team would win with 58 percent probability. All the other methods were somewhere in between. In fact, the model that also included recent win-loss records was so close to the Vegas market that if you used both methods to predict the actual point differences between the teams, the average error in their predictions would differ by less than a tenth of a point. Now, if you’re betting on the outcomes of hundreds or thousands of games, these tiny differences may still be the difference between making and losing money. At the same time, however, it’s surprising that the aggregated wisdom of thousands of market participants, who collectively devote countless hours to analyzing upcoming games for any shred of useful information, is only incrementally better than a simple statistical model that relies only on historical averages.
When we first told some prediction market researchers about this result, their reaction was that it must reflect some special feature of football. The NFL, they argued, has lots of rules like salary caps and draft picks that help to keep teams as equal as possible. And football, of course, is a game where the result can be decided by tiny random acts, like the wide receiver dragging in the quarterback’s desperate pass with his fingertips as he runs full tilt across the goal line to win the game in its closing seconds. Football games, in other words, have a lot of randomness built into them — arguably, in fact, that’s what makes them exciting. Perhaps it’s not so surprising after all, then, that all the information and analysis that is generated by the small army of football pundits who bombard fans with predictions every week is not superhelpful (although it might be surprising to the pundits). In order to be persuaded, our colleagues insisted, we would have to find the same result in some other domain for which the signal-to-noise ratio might be considerably higher than it is in the specific case of football.
OK, what about baseball? Baseball fans pride themselves on their near-fanatical attention to every measurable detail of the game, from batting averages to pitching rotations. Indeed, an entire field of research called sabermetrics has developed specifically for the purpose of analyzing baseball statistics, even spawning its own journal, the Baseball Research Journal. One might think, therefore, that prediction markets, with their far greater capacity to factor in different sorts of information, would outperform simplistic statistical models by a much wider margin for baseball than they do for football. But that turns out not to be true either. We compared the predictions of the Las Vegas sports betting markets over nearly twenty thousand Major League baseball games played from 1999 to 2006 with a simple statistical model based again on home-team advantage and the recent win-loss records of the two teams. This time, the difference between the two was even smaller — in fact, the performance of the market and the model were indistinguishable. In spite of all the statistics and analysis, in other words, and in spite of the absence of meaningful salary caps in baseball and the resulting concentration of superstar players on teams like the New York Yankees and Boston Red Sox, the outcomes of baseball games are even closer to random events than football games.
Since then, we have either found or learned about the same kind of result for other kinds of events that prediction markets have been used to predict, from the opening weekend box office revenues for feature films to the outcomes of presidential elections. Unlike sports, these events occur without any of the rules or conditions that are designed to make sports competitive. There is also a lot of relevant information that prediction markets could conceivably exploit to boost their performance well beyond that of a simple model or a poll of relatively uninformed individuals. Yet when we compared the Hollywood Stock Exchange (HSX) — one of the most popular prediction markets, which has a reputation for accurate prediction—with a simple statistical model, the HSX did only slightly better. And in a separate study of the outcomes of five US presidential elections from 1988 to 2004, political scientists Robert Erikson and Christopher Wlezien found that a simple statistical correction of ordinary opinion polls outperformed even the vaunted Iowa Electronic Markets.
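The comparison Watts describes can be made concrete with a toy simulation. This is my own illustration, not Watts's data: all numbers except the 58 percent home-win rate cited in the passage are invented. It shows how a constant "always pick the home team" baseline stacks up against a hypothetical better-informed "market" that occasionally catches a weak signal about the true outcome.

```python
import random

random.seed(0)

N = 100_000
HOME_WIN_RATE = 0.58   # home-team advantage cited in the passage
SIGNAL_RATE = 0.10     # assumed: how often the "market" catches a true signal

baseline_correct = 0
market_correct = 0

for _ in range(N):
    home_wins = random.random() < HOME_WIN_RATE
    # Baseline: always predict the home team wins.
    baseline_correct += home_wins
    # "Market": occasionally sees a weak signal revealing the true winner;
    # otherwise it effectively falls back on the same home-team prior.
    if random.random() < SIGNAL_RATE:
        market_correct += 1
    else:
        market_correct += home_wins

print(f"baseline accuracy: {baseline_correct / N:.3f}")
print(f"market accuracy:   {market_correct / N:.3f}")
```

Under these assumptions the market beats the baseline by only about four percentage points, which is roughly the size of the gap Watts reports between the Las Vegas market and the naive model.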
More (#2) from Everything is Obvious:
What the conventional wisdom overlooks, however, is that Sony’s vision of the VCR wasn’t as a device for watching rented movies at all. Rather, Sony expected people to use VCRs to tape TV shows, allowing them to watch their favorite shows at their leisure. Considering the exploding popularity of digital VCRs that are now used for precisely this purpose, Sony’s view of the future wasn’t implausible at all. And if it had come to pass, the superior picture quality of Betamax might well have made up for the extra cost, while the shorter taping time may have been irrelevant. Nor was it the case that Matsushita had any better inkling than Sony how fast the video-rental market would take off—indeed, an earlier experiment in movie rentals by the Palo Alto–based firm CTI had failed dramatically. Regardless, by the time it had become clear that home movie viewing, not taping TV shows, would be the killer app of the VCR, it was too late. Sony did their best to correct course, and in fact very quickly produced a longer-playing BII version, eliminating the initial advantage held by Matsushita. But it was all to no avail. Once VHS got a sufficient market lead, the resulting network effects were impossible to overcome. Sony’s failure, in other words, was not really the strategic blunder it is often made out to be, resulting instead from a shift in consumer demand that happened far more rapidly than anyone in the industry had anticipated.
Shortly after their debacle with Betamax, Sony made another big strategic bet on recording technology — this time with their MiniDisc players. Determined not to make the same mistake twice, Sony paid careful attention to where Betamax had gone wrong, and did their best to learn the appropriate lessons. In contrast with Betamax, Sony made sure that MiniDiscs had ample capacity to record whole albums. And mindful of the importance of content distribution to the outcome of the VCR wars, they acquired their own content repository in the form of Sony Music. At the time they were introduced in the early 1990s, MiniDiscs held clear technical advantages over the then-dominant CD format. In particular, the MiniDiscs could record as well as play, and because they were smaller and more resistant to jolts they were better suited to portable devices. Recordable CDs, by contrast, required entirely new machines, which at the time were extremely expensive.
By all reasonable measures the MiniDisc should have been an outrageous success. And yet it bombed. What happened? In a nutshell, the Internet happened. The cost of memory plummeted, allowing people to store entire libraries of music on their personal computers. High-speed Internet connections allowed for peer-to-peer file sharing. Flash drive memory allowed for easy downloading to portable devices. And new websites for finding and downloading music abounded. The explosive growth of the Internet was not driven by the music business in particular, nor was Sony the only company that failed to anticipate the profound effect that the Internet would have on production, distribution, and consumption of music. Nobody did. Sony, in other words, really was doing the best that anyone could have done to learn from the past and to anticipate the future—but they got rolled anyway, by forces beyond anyone’s ability to predict or control.
Surprisingly, the company that "got it right" in the music industry was Apple, with their combination of the iPod player and their iTunes store. In retrospect, Apple’s strategy looks visionary, and analysts and consumers alike fall over themselves to pay homage to Apple’s dedication to design and quality. Yet the iPod was exactly the kind of strategic play that the lessons of Betamax, not to mention Apple’s own experience in the PC market, should have taught them would fail. The iPod was large and expensive. It was based on closed architecture that Apple refused to license, ran on proprietary software, and was actively resisted by the major content providers. Nevertheless, it was a smashing success. So in what sense was Apple’s strategy better than Sony’s? Yes, Apple had made a great product, but so had Sony. Yes, they looked ahead and did their best to see which way the technological winds were blowing, but so did Sony. And yes, once they made their choices, they stuck to them and executed brilliantly; but that’s exactly what Sony did as well. The only important difference, in Raynor’s view, was that Sony’s choices happened to be wrong while Apple’s happened to be right.
This is the strategy paradox. The main cause of strategic failure, Raynor argues, is not bad strategy, but great strategy that just happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution—not the stuff of success for sure, but more likely to lead to persistent mediocrity than colossal failure. Great strategy, by contrast, is marked by clarity of vision, bold leadership, and laser-focused execution. When applied to just the right set of commitments, great strategy can lead to resounding success—as it did for Apple with the iPod—but it can also lead to resounding failure. Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.
More (#4) from Everything is Obvious:
Inspired by these examples—along with "open innovation" companies like Innocentive, which conducts hundreds of prize competitions in engineering, computer science, math, chemistry, life sciences, physical sciences, and business—governments are wondering if the same approach can be used to solve otherwise intractable policy problems. In the past year, for example, the Obama administration has generated shock waves throughout the education establishment by announcing its "Race to the Top"—effectively a prize competition among US states for public education resources allocated on the basis of plans that the states must submit, which are scored on a variety of dimensions, including student performance measurement, teacher accountability, and labor contract reforms. Much of the controversy around the Race to the Top takes issue with its emphasis on teacher quality as the primary determinant of student performance and on standardized testing as a way to measure it. These legitimate critiques notwithstanding, however, the Race to the Top remains an interesting policy experiment for the simple reason that, like cap and trade, it specifies the "solution" only at the highest level, while leaving the specifics up to the states themselves.
More (#3) from Everything is Obvious:
That company is Zara, the Spanish clothing retailer that has made business press headlines for over a decade with its novel approach to satisfying consumer demand. Rather than trying to anticipate what shoppers will buy next season, Zara effectively acknowledges that it has no idea. Instead, it adopts what we might call a measure-and-react strategy. First, it sends out agents to scour shopping malls, town centers, and other gathering places to observe what people are already wearing, thereby generating lots of ideas about what might work. Second, drawing on these and other sources of inspiration, it produces an extraordinarily large portfolio of styles, fabrics, and colors—where each combination is initially made in only a small batch—and sends them out to stores, where it can then measure directly what is selling and what isn’t. And finally, it has a very flexible manufacturing and distribution operation that can react quickly to the information that is coming directly from stores, dropping those styles that aren’t selling (with relatively little left-over inventory) and scaling up those that are. All this depends on Zara’s ability to design, produce, ship, and sell a new garment anywhere in the world in just over two weeks—a stunning accomplishment to anyone who has waited in limbo for just about any designer good that isn’t on the shelf.
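The measure-and-react procedure described above (many small test batches, measure sales, scale the winners) can be sketched as a toy simulation. Everything here is my own construction for illustration, not Zara's actual process or numbers: it contrasts betting the whole production budget on a noisy up-front forecast against spending a small slice of the budget measuring real demand first.

```python
import random

random.seed(1)

N_STYLES = 50
BUDGET = 10_000   # total units we can produce (invented)
TEST_BATCH = 20   # small initial batch per style (invented)

# True (unknown to the firm) per-unit sell-through probability of each style.
demand = [random.random() for _ in range(N_STYLES)]

def sold(style, units):
    """Simulate how many of `units` produced for `style` actually sell."""
    return sum(random.random() < demand[style] for _ in range(units))

# Predict-and-commit: bet the whole budget on one style chosen
# from a noisy demand forecast.
forecast = [d + random.gauss(0, 0.4) for d in demand]
committed = max(range(N_STYLES), key=lambda s: forecast[s])
predict_sales = sold(committed, BUDGET)

# Measure-and-react: spend a little budget testing every style,
# then put the remaining budget behind the best measured seller.
test_sales = [sold(s, TEST_BATCH) for s in range(N_STYLES)]
remaining = BUDGET - N_STYLES * TEST_BATCH
best = max(range(N_STYLES), key=lambda s: test_sales[s])
react_sales = sum(test_sales) + sold(best, remaining)

print(f"predict-and-commit sold {predict_sales} of {BUDGET}")
print(f"measure-and-react sold  {react_sales} of {BUDGET}")
```

The point is not the specific numbers but the structure: measure-and-react substitutes cheap, fast feedback for prediction, which is exactly why it depends on Zara's two-week design-to-shelf turnaround.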
From Rhodes’ Arsenals of Folly:
Unknown to the Soviet public and the world, at least thirteen serious power-reactor accidents had occurred in the Soviet Union before the one at Chernobyl. Between 1964 and 1979, for example, repeated fuel-assembly fires plagued Reactor Number One at the Beloyarsk nuclear-power plant east of the Urals near Novosibirsk. In 1975, the core of an RBMK reactor at the Leningrad plant partly melted down; cooling the core by flooding it with liquid nitrogen led to a discharge of radiation into the environment equivalent to about one-twentieth the amount that was released at Chernobyl in 1986. In 1982, a rupture of the central fuel assembly of Chernobyl Reactor Number One released radioactivity over the nearby bedroom community of Pripyat, now in 1986 once again exposed and at risk. In 1985, a steam relief valve burst during a shaky startup of Reactor Number One at the Balakovo nuclear-power plant, on the Volga River about 150 miles southwest of Samara, jetting 500-degree steam that scalded to death fourteen members of the start-up staff; despite the accident, the responsible official, Balakovo’s plant director, Viktor Bryukhanov, was promoted to supervise construction at Chernobyl and direct its operation.
Comment
More (#3) from Arsenals of Folly:
And:
During the Cuban confrontation, when American nuclear weapons were ready to launch or already aloft and moving toward their Soviet targets on hundreds of SAC bombers, both sides were at least aware of the danger and working intensely to resolve the dispute. During ABLE ARCHER 83, in contrast, an American renewal of high Cold War rhetoric, aggressive and perilous threat displays, and naïve incredulity were combined with Soviet arms-race and surprise-attack insecurities and heavy-handed war-scare propaganda in a nearly lethal mix.
And:
...Less politely, the political scientist Richard M. Pious, reviewing Cannon’s biography and other studies of the president, reduced their findings to three parallel axioms: "Reagan could only understand things if they were presented as a story; he could only explain something if he narrated it; he could only think about principles if they involved metaphor and analogy."
Comment
Amazing stuff. Was the world really as close to a nuclear war in 1983 as in 1962?
More (#2) from Arsenals of Folly:
And:
"Internal pressures are pushing the Soviet system to the breaking point," Todd dramatically— but also accurately— prophesied on the opening page of his book. "In ten, twenty, or thirty years, an astonished world will be witness to the dissolution or the collapse of this, the first of the Communist systems." To explain how he came to such a radical conclusion in an era when the Committee on the Present Danger was claiming that the Soviet Union was growing in strength and malevolence, he demonstrated that Soviet statistics, otherwise "shabby and false," could still be mined for valuable information on the state of society. Even censored statistics, such as rates of birth and death missing from the charts for the Terror famine years 1931 to 1935, "indicate the abuses of Stalinism, especially when they succeed a period marked by a relatively large volume of data." Age pyramids, he pointed out— graphs in which stacked horizontal bars represent the percentage of the population in each age group—" have fixed for everyone to see the errors of Stalinism, Maoism, or any other totalitarian alternative which declares war upon a human community…. Rather belatedly, it is apparent that 30 to 60 million inhabitants in the USSR are missing. In 1975, it was clear that about 150 million were missing in China. Given population, the proportions are nearly the same."
And, a blockquote from the writings of Robert Gates:
More (#1) from Arsenals of Folly:
Curiously, a U.S. spy satellite had passed over the Chernobyl complex on Saturday morning only twenty-eight seconds after the explosions and had imaged it. American intelligence thought at first that a missile had been fired, reports health physicist and Chernobyl expert Richard Mould. When the image remained stationary, "opinion changed to a missile had blown up in its silo." Consulting a map corrected the mistake. By Sunday the British government had been informed, but neither the United States nor Britain warned the public.
More (#4) from Arsenals of Folly:
Before Gorbachev, even during the years of détente, the Soviet military had operated on the assumption (however unrealistic) that it should plan to win a nuclear war should one be fought— a strategy built on the Soviet experience of fighting Germany during the Second World War. Partly because a massive surprise attack had initiated that nearly fatal conflict, the Soviet military had been and still was deeply skeptical of relying on deterrence to prevent an enemy attack. For different reasons, so were the proponents of common security. Brandt, who followed the Palme Commission’s deliberations closely, wrote that he "shared the conclusions [the commission] came to: collective security as an essential political task in the nuclear age, and partnership in security as a military concept to take over gradually from the strategy of nuclear deterrence; [because] deterrence threatens to destroy what it is supposed to be defending, and thereby increasingly loses credibility."
From Lewis’ Flash Boys:
What Spivey had realized, by 2008, was that there was a big difference between the trading speed that was available between these exchanges and the trading speed that was theoretically possible… Incredibly to Spivey, the telecom carriers were not set up to understand the new demand for speed. Not only did Verizon fail to see that it could sell its special route to traders for a fortune; Verizon didn’t even seem aware it owned anything of special value. "You would have to order up several lines and hope that you got it," says Spivey. "They didn’t know what they had." As late as 2008, major telecom carriers were unaware that the financial markets had changed, radically, the value of a millisecond.
...The construction guy [driving the route] with him clearly suspected he might be out of his mind. Yet when Spivey pressed him, even he couldn’t come up with a reason why the plan wasn’t at least theoretically possible. That’s what Spivey had been after: a reason not to do it. "I was just trying to find the reason no [telecom] carrier had done it," he says. "I was thinking: Surely I’ll see some roadblock." Aside from the construction engineer’s opinion that no one in his right mind wanted to cut through the hard Allegheny rock, he couldn’t find one.
So Spivey began digging the line, keeping it secret for two years. He didn’t start trying to sell the line to banks and traders until a couple of months before the line was complete. And then:
Comment
More (#1) from Flash Boys:
And:
Reg NMS was intended to create equality of opportunity in the U.S. stock market. Instead it institutionalized a more pernicious inequality. A small class of insiders with the resources to create speed were now allowed to preview the market and trade on what they had seen.
...By complying with Reg NMS, [Schwall] now understood, the smart order routers simply marched investors into various traps laid for them by high-frequency traders. "At that point I just got very, very pissed off," he said. "That they are ripping off the retirement savings of the entire country through systematic fraud and people don’t even realize it. That just drives me up the fucking wall."
His anger expressed itself in a search for greater detail. When he saw that Reg NMS had been created to correct for the market manipulations of the old NYSE specialists, he wanted to know: How had that corruption come about? He began another search. He discovered that the New York Stock Exchange specialists had been exploiting a loophole in some earlier regulation—which of course just led Schwall to ask: What event had led the SEC to create that regulation? Many hours later he’d clawed his way back to the 1987 stock market crash, which, as it turned out, gave rise to the first, albeit crude, form of high-frequency trading. During the 1987 crash, Wall Street brokers, to avoid having to buy stock, had stopped answering their phones, and small investors were unable to enter their orders into the market. In response, the government regulators had mandated the creation of an electronic Small Order Execution System so that the little guy’s order could be sent into the market with the press of a key on a computer keyboard, without a stockbroker first taking it from him on the phone. Because a computer was able to transmit trades much faster than humans, the system was soon gamed by smart traders, for purposes having nothing to do with the little guy. At which point Schwall naturally asked: From whence came the regulation that had made brokers feel comfortable not answering their phones in the midst of the 1987 stock market crash?
...Several days later he’d worked his way back to the late 1800s. The entire history of Wall Street was the story of scandals, it now seemed to him, linked together tail to trunk like circus elephants. Every systemic market injustice arose from some loophole in a regulation created to correct some prior injustice. "No matter what the regulators did, some other intermediary found a way to react, so there would be another form of front-running," he said. When he was done in the Staten Island library he returned to work, as if there was nothing unusual at all about the product manager having turned himself into a private eye. He’d learned several important things, he told his colleagues. First, there was nothing new about the behavior they were at war with: The U.S. financial markets had always been either corrupt or about to be corrupted. Second, there was zero chance that the problem would be solved by financial regulators; or, rather, the regulators might solve the narrow problem of front-running in the stock market by high-frequency traders, but whatever they did to solve the problem would create yet another opportunity for financial intermediaries to make money at the expense of investors.
Schwall’s final point was more aspiration than insight. For the first time in Wall Street history, the technology existed that eliminated entirely the need for financial intermediaries. Buyers and sellers in the U.S. stock market were now able to connect with each other without any need of a third party. "The way that the technology had evolved gave me the conviction that we had a unique opportunity to solve the problem," he said. "There was no longer any need for any human intervention." If they were going to somehow eliminate the Wall Street middlemen who had flourished for centuries, they needed to enlarge the frame of the picture they were creating. "I was so concerned that we were talking about what we were doing as a solution to high-frequency trading," he said. "It was bigger than that. The goal had to be to eliminate any unnecessary intermediation."
There was so much worth quoting from Better Angels of Our Nature that I couldn’t keep up. I’ll share a few quotes anyway.
Comment
More (#3) from Better Angels of Our Nature:
Integrative complexity is related to violence. People whose language is less integratively complex, on average, are more likely to react to frustration with violence and are more likely to go to war in war games. Working with the psychologist Peter Suedfeld, Tetlock tracked the integrative complexity of the speeches of national leaders in a number of political crises of the 20th century that ended peacefully (such as the Berlin blockade in 1948 and the Cuban Missile Crisis) or in war (such as World War I and the Korean War), and found that when the complexity of the leaders’ speeches declined, war followed. In particular, they found a linkage between rhetorical simple-mindedness and military confrontations in speeches by Arabs and Israelis, and by the Americans and Soviets during the Cold War. We don’t know exactly what the correlations mean: whether mule-headed antagonists cannot think their way to an agreement, or bellicose antagonists simplify their rhetoric to stake out an implacable bargaining position. Reviewing both laboratory and real-world studies, Tetlock suggests that both dynamics are in play.
Comment
Further reading on integrative complexity:
- Wikipedia
- Psychlopedia
- Google book
Now that I’ve been introduced to the concept, I want to evaluate how useful it is to incorporate into my rhetorical repertoire and vocabulary, and to determine whether it can inform my beliefs about assessing the exfoliating intelligence of others (a term I’ll coin to refer to the intelligence/knowledge which another person can pass on to me to aid my vocabulary and verbal abstract reasoning, the neuropsychological strengths I try to max out just like an RPG character).
At a less meta level, knowing the strengths and weaknesses of the trait will inform whether I choose to signal it or dampen it from here on, and in what situations. It is important for imitators to remember that whatever IC is associated with does not necessarily imply those associations to lay others.
strengths
As listed in psycholopedia:
appreciation of complexity
scientific proficiency
stress accommodation
resistance to persuasion
prediction ability
social responsibility
more initiative, as rated by managers, and more motivation to seek power, as gauged by a projective test
weaknesses
based on Psychlopedia:
low scores on compliance and conscientiousness
seem antagonistic and even narcissistic
based on the wiki article:
dependence (more likely to defer to others)
rational expectations (more likely to fallaciously assume they are dealing with rational agents)
Upon reflection, here are my conclusions:
high integrative complexity dominates low integrative complexity for those who have insight into the concept, are self-aware of how it relates to themselves and others, and have the capacity to use the skill and to hide it.
the questions psychometricians use to elicit the expert-rated answers that define IC are very crude, and a validated tool ought to be devised, if that is an achievable feat (estimating the cognitive complexity or time required is beyond the scope of my time/intelligence at the moment)
I have been using this tool as my primary estimate of people's intelligence, but I will now subordinate it to the ordinary psychometric measures I relied on before encountering it here, restoring traditional tools of intelligence assessment to their established status
I’m interested in learning about the algorithms used to search say Twitter and assess IC. Anyone got any info?
very interested in any research on IC's association with corporate board performance, share prices, etc. There doesn't seem to be much research, but research generally does start with Defence implications before going corporate...
Interested in exploring relations between the assessment of IC and the tools used in CBT, given their structural similarity...and, by extension, general relationships between IC and mental health
More (#4) from Better Angels of Our Nature:
More (#2) from Better Angels of Our Nature:
Can higher intelligence at least nudge people in the direction of superrationality? That is, are better reasoners likely to reflect on the fact that mutual cooperation leads to the best joint outcome, assume that the other guy is reflecting on it as well, and profit from the resulting simultaneous leap of trust? No one has given people of different levels of intelligence a true one-shot Prisoner’s Dilemma, but a recent study came close by using a sequential one-shot Prisoner’s Dilemma, in which the second player acts only after seeing the first player’s move. The economist Stephen Burks and his collaborators gave a thousand trainee truck drivers a Matrices IQ test and a Prisoner’s Dilemma, using money for the offers and payoffs. The smarter truckers were more likely to cooperate on the first move, even after controlling for age, race, gender, schooling, and income. The investigators also looked at the response of the second player to the first player’s move. This response has nothing to do with superrationality, but it does reflect a willingness to cooperate in response to the other player’s cooperation in such a way that both players would benefit if the game were iterated. Smarter truckers, it turned out, were more likely to respond to cooperation with cooperation, and to defection with defection.
The economist Garrett Jones connected intelligence to the Prisoner’s Dilemma by a different route. He scoured the literature for all the Iterated Prisoner’s Dilemma experiments that had been conducted in colleges and universities from 1959 to 2003. Across thirty-six experiments involving thousands of participants, he found that the higher a school’s mean SAT score (which is strongly correlated with mean IQ), the more its students cooperated. Two very different studies, then, agree that intelligence enhances mutual cooperation in the quintessential situation in which its benefits can be foreseen. A society that gets smarter, then, may be a society that becomes more cooperative.
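The sequential one-shot Prisoner's Dilemma in the Burks study can be sketched as follows. The payoff numbers here are the classic illustrative ones (temptation > reward > punishment > sucker), not the actual monetary stakes used with the truckers:

```python
# Sequential one-shot Prisoner's Dilemma sketch.
# Payoffs (player 1, player 2); values are illustrative, not those
# used in the Burks et al. trucker study.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: best joint outcome
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def play(first_move, second_strategy):
    """Player 2 sees player 1's move before responding."""
    second_move = second_strategy(first_move)
    return PAYOFFS[(first_move, second_move)]

# A reciprocator (the pattern the smarter truckers showed):
# cooperate with cooperators, defect on defectors.
reciprocate = lambda seen: seen

def joint(payoff):
    return payoff[0] + payoff[1]

print(joint(play("C", reciprocate)))  # mutual cooperation -> 6
print(joint(play("D", reciprocate)))  # mutual defection   -> 2
```

Against a reciprocator, the joint payoff is maximized by opening with cooperation, which is exactly the foreseeable benefit the excerpt says smarter players were more likely to grasp.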
More (#1) from Better Angels of Our Nature:
As for Vietnam, the implication that the United States would have avoided the war if only the advisors of Kennedy and Johnson had been less intelligent seems unlikely in light of the fact that after they left the scene, the war was ferociously prosecuted by Richard Nixon, who was neither the best nor the brightest. The relationship between presidential intelligence and war may also be quantified. Between 1946 (when the PRIO dataset begins) and 2008, a president’s IQ is negatively correlated with the number of battle deaths in wars involving the United States during his presidency, with a coefficient of −0.45. One could say that for every presidential IQ point, 13,440 fewer people die in battle, though it’s more accurate to say that the three smartest postwar presidents, Kennedy, Carter, and Clinton, kept the country out of destructive wars.
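For what it's worth, the two numbers Pinker gives can be connected by the standard regression identity slope = r · (σ_y / σ_x); taking them at face value, they jointly imply a ratio of standard deviations (battle deaths to IQ points) of about 30,000:

```python
# Consistency check on the excerpt's two figures, using the
# regression identity: slope = r * (sd_y / sd_x).
r = -0.45        # correlation between presidential IQ and battle deaths
slope = -13_440  # battle deaths per presidential IQ point

sd_ratio = slope / r  # implied sd(battle deaths) / sd(IQ)
print(round(sd_ratio))  # -> 29867
```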
From Ariely’s The Honest Truth about Dishonesty:
She was living near campus with several other people—none of whom knew one another. When the cleaning people came each weekend, they left several rolls of toilet paper in each of the two bathrooms. However, by Monday all the toilet paper would be gone. It was a classic tragedy-of-the-commons situation: because some people hoarded the toilet paper and took more than their fair share, the public resource was destroyed for everyone else.
After reading about the Ten Commandments experiment on my blog, Rhonda put a note in one of the bathrooms asking people not to remove toilet paper, as it was a shared commodity. To her great satisfaction, one roll reappeared in a few hours, and another the next day. In the other note-free bathroom, however, there was no toilet paper until the following weekend, when the cleaning people returned.
More (#1) from Ariely’s The Honest Truth about Dishonesty:
ME: By the time taxpayers finish entering all the data onto the form, it is too late. The cheating is done and over with, and no one will say, "Oh, I need to sign this thing, let me go back and give honest answers." You see? If people sign before they enter any data onto the form, they cheat less. What you need is a signature at the top of the form, and this will remind everyone that they are supposed to be telling the truth.
IRS: Yes, that’s interesting. But it would be illegal to ask people to sign at the top of the form. The signature needs to verify the accuracy of the information provided.
ME: How about asking people to sign twice? Once at the top and once at the bottom? That way, the top signature will act as a pledge—reminding people of their patriotism, moral fiber, mother, the flag, homemade apple pie—and the signature at the bottom would be for verification.
IRS: Well, that would be confusing.
ME: Have you looked at the tax code or the tax forms recently?
IRS: [No reaction.]
ME: How about this? What if the first item on the tax form asked if the taxpayer would like to donate twenty-five dollars to a task force to fight corruption? Regardless of the particular answer, the question will force people to contemplate their standing on honesty and its importance for society! And if the taxpayer donates money to this task force, they not only state an opinion, but they also put some money behind their decision, and now they might be even more likely to follow their own example.
IRS: [Stony silence.]
And:
Most professors encounter the same puzzling phenomenon, and I’ll guess that we have come to suspect some kind of causal relationship between exams and sudden deaths among grandmothers. In fact, one intrepid researcher has successfully proven it. After collecting data over several years, Mike Adams (a professor of biology at Eastern Connecticut State University) has shown that grandmothers are ten times more likely to die before a midterm and nineteen times more likely to die before a final exam. Moreover, grandmothers of students who aren’t doing so well in class are at even higher risk—students who are failing are fifty times more likely to lose a grandmother compared with non-failing students.
In a paper exploring this sad connection, Adams speculates that the phenomenon is due to intrafamilial dynamics, which is to say, students’ grandmothers care so much about their grandchildren that they worry themselves to death over the outcome of exams. This would indeed explain why fatalities occur more frequently as the stakes rise, especially in cases where a student’s academic future is in peril. With this finding in mind, it is rather clear that from a public policy perspective, grandmothers—particularly those of failing students—should be closely monitored for signs of ill health during the weeks before and during finals. Another recommendation is that their grandchildren, again particularly the ones who are not doing well in class, should not tell their grandmothers anything about the timing of the exams or how they are performing in class.
Though it is likely that intrafamilial dynamics cause this tragic turn of events, there is another possible explanation for the plague that seems to strike grandmothers twice a year. It may have something to do with students’ lack of preparation and their subsequent scramble to buy more time than with any real threat to the safety of those dear old women. If that is the case, we might want to ask why it is that students become so susceptible to "losing" their grandmothers (in e-mails to professors) at semesters’ end.
Perhaps at the end of the semester, the students become so depleted by the months of studying and burning the candle at both ends that they lose some of their morality and in the process also show disregard for their grandmothers’ lives. If the concentration it takes to remember a longer digit can send people running for chocolate cake, it’s not hard to imagine how dealing with months of cumulative material from several classes might lead students to fake a dead grandmother in order to ease the pressure (not that that’s an excuse for lying to one’s professors).
More (#2) from Ariely’s The Honest Truth about Dishonesty:
The same basic process can take place in any test in which the answers are available on another page or are written upside down, as they often are in magazines and SAT study guides. We often use the answers when we practice taking tests to convince ourselves that we’re smart or, if we get an answer wrong, that we’ve made a silly mistake that we would never make during a real exam. Either way, we come away with an inflated idea of how bright we actually are—and that’s something we’re generally happy to accept.
And:
When the time came to withdraw the needles, one nurse held my elbow and the other slowly pulled out each needle with pliers. Of course, the pain was excruciating and lasted for days—very much in contrast to how they described the procedure. Still, in hindsight, I was very glad they had lied to me. If they had told me the truth about what to expect, I would have spent the weeks before the extraction anticipating the procedure in misery, dread, and stress—which in turn might have compromised my much-needed immune system. So in the end, I came to believe that there are certain circumstances in which white lies are justified.
From Feynman’s Surely You’re Joking, Mr. Feynman:
After some discussion as to what "essential object" meant, the professor leading the seminar said something meant to clarify things and drew something that looked like lightning bolts on the blackboard. "Mr. Feynman," he said, "would you say an electron is an ‘essential object’?"
Well, now I was in trouble. I admitted that I hadn’t read the book, so I had no idea of what Whitehead meant by the phrase; I had only come to watch. "But," I said, "I’ll try to answer the professor’s question if you will first answer a question from me, so I can have a better idea of what ‘essential object’ means. Is a brick an essential object?"
...Then the answers came out. One man stood up and said, "A brick as an individual, specific brick. That is what Whitehead means by an essential object."
Another man said, "No, it isn’t the individual brick that is an essential object; it’s the general character that all bricks have in common, their ‘brickness’, that is the essential object."
Another guy got up and said, "No, it’s not in the bricks themselves. ‘Essential object’ means the idea in the mind that you get when you think of bricks."
Another guy got up, and another, and I tell you I have never heard such ingenious different ways of looking at a brick before. And, just like it should in all stories about philosophers, it ended up in complete chaos. In all their previous discussions they hadn’t even asked themselves whether such a simple object as a brick, much less an electron, is an "essential object."
More (#1) from Surely You’re Joking, Mr. Feynman:
I began to read the paper. It kept talking about extensors and flexors, the gastrocnemius muscle, and so on. This and that muscle were named, but I hadn’t the foggiest idea of where they were located in relation to the nerves or to the cat. So I went to the librarian in the biology section and asked her if she could find me a map of the cat.
"A map of the cat, sir?" she asked, horrified. "You mean a zoological chart!" From then on there were rumors about some dumb biology graduate student who was looking for a "map of the cat."
When it came time for me to give my talk on the subject, I started off by drawing an outline of the cat and began to name the various muscles. The other students in the class interrupt me: "We know all that!"
"Oh," I say, "you do? Then no wonder I can catch up with you so fast after you’ve had four years of biology." They had wasted all their time memorizing stuff like that, when it could be looked up in fifteen minutes.
And:
Meselson and I had extracted enormous quantities of ribosomes from E. coli for some other experiment. I said, "Hell, I’ll just give you the ribosomes we’ve got. We have plenty of them in my refrigerator at the lab."
It would have been a fantastic and vital discovery if I had been a good biologist. But I wasn’t a good biologist. We had a good idea, a good experiment, the right equipment, but I screwed it up: I gave her infected ribosomes, the grossest possible error that you could make in an experiment like that. My ribosomes had been in the refrigerator for almost a month, and had become contaminated with some other living things. Had I prepared those ribosomes promptly over again and given them to her in a serious and careful way, with everything under control, that experiment would have worked, and we would have been the first to demonstrate the uniformity of life: the machinery of making proteins, the ribosomes, is the same in every creature. We were there at the right place, we were doing the right things, but I was doing things as an amateur, stupid and sloppy.
And:
...there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school; we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty, a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid, not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked, to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can if you know anything at all wrong, or possibly wrong to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
One quote from Taleb’s AntiFragile is here, and here’s another:
The greatest irony is that we watched firsthand how narratives of thought are made, as we were lucky enough to face another episode of blatant intellectual expropriation. We received an invitation to publish our side of the story—being option practitioners—in the honorable Wiley Encyclopedia of Quantitative Finance. So we wrote a version of the previous paper mixed with our own experiences. Shock: we caught the editor of the historical section, one Barnard College professor, red-handed trying to modify our account. A historian of economic thought, he proceeded to rewrite our story to play down, if not reverse, its message and change the arrow of the formation of knowledge. This was scientific history in the making. The fellow sitting in his office in Barnard College was now dictating to us what we saw as traders—we were supposed to override what we saw with our own eyes with his logic.
I came to notice a few similar inversions of the formation of knowledge. For instance, in his book written in the late 1990s, the Berkeley professor Highly Certified Fragilista Mark Rubinstein attributed to publications by finance professors techniques and heuristics that we practitioners had been extremely familiar with (often in more sophisticated forms) since the 1980s, when I got involved in the business.
No, we don’t put theories into practice. We create theories out of practice. That was our story, and it is easy to infer from it—and from similar stories—that the confusion is generalized. The theory is the child of the cure, not the opposite...
AntiFragile makes lots of interesting points, but it’s clear in some cases that Taleb is running roughshod over the truth in order to support his preferred view. I’ve italicized the particularly lame part:
So for now we are looking at the forward arrow and at no point, although science was at some use along the way since computer technology relies on science in most of its aspects; at no point did academic science serve in setting its direction, rather it served as a slave to chance discoveries in an opaque environment, with almost no one but college dropouts and overgrown high school students along the way. The process remained self-directed and unpredictable at every step.
From Think Like a Freak:
So how could they tell whether these ads were effective? They couldn’t. With no variation whatsoever, it was impossible to know.
What if, we said, the company ran an experiment to find out? In science, the randomized control trial has been the gold standard of learning for hundreds of years—but why should scientists have all the fun? We described an experiment the company might run. They could select 40 major markets across the country and randomly divide them into two groups. In the first group, the company would keep buying newspaper ads every Sunday. In the second group, they’d go totally dark—not a single ad. After three months, it would be easy to compare merchandise sales in the two groups to see how much the print ads mattered.
"Are you crazy?" one marketing executive said. "We can’t possibly go dark in 20 markets. Our CEO would kill us."
"Yeah," said someone else, "it’d be like that kid in Pittsburgh."
What kid in Pittsburgh?
They told us about a summer intern who was supposed to call in the Sunday ad buys for the Pittsburgh newspapers. For whatever reason, he botched his assignment and failed to make the calls. So for the entire summer, the company ran no newspaper ads in a large chunk of Pittsburgh. "Yeah," one executive said, "we almost got fired for that one."
So what happened, we asked, to the company’s Pittsburgh sales that summer?
They looked at us, then at each other—and sheepishly admitted it never occurred to them to check the data. When they went back and ran the numbers, they found something shocking: the ad blackout hadn’t affected Pittsburgh sales at all!
Now that, we said, is valuable feedback. The company may well be wasting hundreds of millions of dollars on advertising. How could the executives know for sure? That 40-market experiment would go a long way toward answering the question. And so, we asked them, are you ready to try it now?
"Are you crazy?" the marketing executive said again. "We’ll get fired if we do that!"
To this day, on every single Sunday in every single market, this company still buys newspaper advertising—even though the only real piece of feedback they ever got is that the ads don’t work.
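The 40-market randomized design described in the excerpt could be sketched like this; the market names and the sales comparison are hypothetical placeholders:

```python
import random

# Hypothetical sketch of the proposed 40-market ad experiment:
# randomly assign half the markets to keep running Sunday ads and
# half to go dark, then compare mean sales after the trial period.
markets = [f"market_{i}" for i in range(40)]  # placeholder names

rng = random.Random(0)  # fixed seed so the assignment is reproducible
shuffled = markets[:]
rng.shuffle(shuffled)
keep_ads, go_dark = shuffled[:20], shuffled[20:]

def effect_estimate(sales):
    """Difference in mean sales between the two groups.

    `sales` maps market name -> sales over the trial period.
    """
    mean = lambda group: sum(sales[m] for m in group) / len(group)
    return mean(keep_ads) - mean(go_dark)
```

Randomizing which markets go dark is what distinguishes this from the accidental Pittsburgh "experiment": with random assignment, any sales difference can be attributed to the ads rather than to whatever made Pittsburgh unusual.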
More (#1) from Think Like a Freak:
...How did he do? In his very first Coney Island contest, Kobi smoked the field and set a new world record… he ate 50.
...Kobayashi had observed that most Coney Island eaters used a similar strategy, which was not really much of a strategy at all. It was essentially a sped-up version of how the average person eats a hot dog at a backyard barbecue: pick it up, cram the dog and bun into the mouth, chew from end to end, and glug some water to wash it down. Kobayashi wondered if perhaps there was a better way. Nowhere was it written, for instance, that the dog must be eaten end to end. His first experiment was simple: What would happen if he broke the dog and bun in half before eating? This, he found, afforded more options for chewing and loading, and it also let his hands do some of the work that would otherwise occupy his mouth...
Kobayashi now questioned another conventional practice: eating the dog and bun together. It wasn’t surprising that everyone did this. The dog is nested so comfortably in the bun, and when eating for pleasure, the soft blandness of the bun pairs wonderfully with the slick, seasoned meat. But Kobayashi wasn’t eating for pleasure. Chewing dog and bun together, he discovered, created a density conflict. The dog itself is a compressed tube of dense, salty meat that can practically slide down the gullet on its own. The bun, while airy and less substantial, takes up a lot of space and requires a lot of chewing. So he started removing the dog from the bun. Now he could feed himself a handful of bunless dogs, broken in half, followed by a round of buns.
...As easily as he was able to swallow the hot dogs—like a trained dolphin slorping down herring at the aquarium—the bun was still a problem. (If you want to win a bar bet, challenge someone to eat two hot-dog buns in one minute without a beverage; it is nearly impossible.) So Kobayashi tried something different. As he was feeding himself the bunless, broken hot dogs with one hand, he used the other hand to dunk the bun into his water cup. Then he’d squeeze out most of the excess water and smush the bun into his mouth. This might seem counterintuitive—why put extra liquid in your stomach when you need all available space for buns and dogs?—but the bun-dunking provided a hidden benefit. Eating soggy buns meant Kobayashi grew less thirsty along the way, which meant less time wasted on drinking. He experimented with water temperature and found that warm was best, as it relaxed his chewing muscles. He also spiked the water with vegetable oil, which seemed to help swallowing.
His experimentation was endless. He videotaped his training sessions and recorded all his data in a spreadsheet, hunting for inefficiencies and lost milliseconds. He experimented with pace: Was it better to go hard the first four minutes, ease off during the middle four, and "sprint" toward the end—or maintain a steady pace throughout? (A fast start, he discovered, was best.) He found that getting a lot of sleep was especially important. So was weight training: strong muscles aided in eating and helped resist the urge to throw up. He also discovered that he could make more room in his stomach by jumping and wriggling as he ate—a strange, animalistic dance that came to be known as the Kobayashi Shake.
And:
For every ton of carbon dioxide a factory eliminated, it would receive one credit. Other pollutants were far more remunerative: methane (21 credits), nitrous oxide (310), and, near the top of the list, something called hydrofluorocarbon-23, or HFC-23. It is a "super" greenhouse gas that is a by-product in the manufacture of HCFC-22, a common refrigerant that is itself plenty bad for the environment.
The UN was hoping that factories would switch to a greener refrigerant than HCFC-22. One way to incentivize them, it reasoned, was to reward the factories handsomely for destroying their stock of its waste gas, HFC-23. So the UN offered a whopping bounty of 11,700 carbon credits for every ton of HFC-23 that was destroyed rather than released into the atmosphere.
Can you guess what happened next?
Factories around the world, especially in China and India, began to churn out extra HCFC-22 in order to generate extra HFC-23 so they could rake in the cash. As an official with the Environmental Investigation Agency (EIA) put it: "The evidence is overwhelming that manufacturers are creating excess HFC-23 simply to destroy it and earn carbon credits." The average factory earned more than $20 million a year by selling carbon credits for HFC-23.
Angry and embarrassed, the UN changed the rules of the program to curb the abuse; several carbon markets banned the HFC-23 credits, making it harder for the factories to find buyers. So what will happen to all those extra tons of harmful HFC-23 that suddenly lost its value? The EIA warns that China and India may well "release vast amounts of . . . HFC-23 into the atmosphere, causing global greenhouse gas emissions to skyrocket."
Which means the UN wound up paying polluters millions upon millions of dollars to . . . create additional pollution.
Backfiring bounties are, sadly, not as rare as one might hope. This phenomenon is sometimes called "the cobra effect." As the story goes, a British overlord in colonial India thought there were far too many cobras in Delhi. So he offered a cash bounty for every cobra skin. The incentive worked well—so well, in fact, that it gave rise to a new industry: cobra farming. Indians began to breed, raise, and slaughter the snakes to take advantage of the bounty. Eventually the bounty was rescinded—whereupon the cobra farmers did the logical thing and set their snakes free, as toxic and unwanted as today’s HFC-23.
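The perverse arithmetic behind the HFC-23 bounty is easy to reproduce from the credit values listed in the excerpt; the price per credit below is an assumed figure for illustration, since actual market prices varied:

```python
# Carbon credits per ton destroyed/abated, as listed in the excerpt.
CREDITS_PER_TON = {
    "CO2": 1,
    "methane": 21,
    "N2O": 310,
    "HFC-23": 11_700,
}

def bounty(gas, tons, price_per_credit):
    """Revenue from destroying `tons` of `gas`.

    `price_per_credit` is an assumption; real credit prices varied.
    """
    return CREDITS_PER_TON[gas] * tons * price_per_credit

# At a hypothetical $10/credit, destroying one ton of HFC-23 earns
# $117,000, versus $10 for a ton of CO2 -- so manufacturing extra
# HCFC-22 just to destroy its by-product pays handsomely.
print(bounty("HFC-23", 1, 10))  # -> 117000
print(bounty("CO2", 1, 10))     # -> 10
```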
From Rhodes’ Twilight of the Bombs:
"...Let me describe … one possible scenario of attack under the conditions of the coup. The early warning system detects a missile attack and sends signals to the subsystems that assess the threat. It is a process that immediately involves the president of the country, the minister of defense, chief of the general staff and the commanders in chief of the three branches of strategic nuclear forces.
"Then the chief of the general staff and commanders in chief of strategic nuclear forces form a command and send it down to the subordinate units. In essence, this command is meant to inform troops and weapons systems about a possible nuclear attack, and this command is called a preliminary command.
"The preliminary command opens up access by the launch crews to the equipment directly controlling the use of nuclear weapons and also gives them access to the relevant special documentation. However, launch crews do not [yet] have the full right to use the equipment of direct control over the use of nuclear weapons.
"As a more accurate assessment of the situation is made, a message is received from the early warning systems confirming the fact of nuclear attack, and the decision to use nuclear weapons may be made at that point. It can be carried out according to a two-stage process."
The first stage of this two-stage process, Pavlov continued, once again involved the top leadership in a political decision—whether or not to generate a "permission command" that would be sent to the CICs. Then, during the second stage, the CICs and the chief of the general staff would decide as military leaders whether or not to generate a "direct command" ordering launch crews to fire their weapons. Even then, the direct command had to pass through an ordeal of what Pavlov called "special processing by technical and organizational means to verify its authenticity." Each of these actions had time limits, and if the time for an action expired, the blocking system that normally prevented weapons from being launched automatically reactivated.
Cumbersome as the Soviet system seemed from their descriptions, Blair pointed out, it was "actually devised … to streamline the command system to ensure that they could release nuclear weapons within the time frame of a ballistic missile attack launched by the United States, that is to say, within 15 to 30 minutes." And despite its complexity, Blair added, a nuclear launch by the coup leaders might still have been possible had they persuaded the general staff to issue Yanayev a Cheget and had one or more of the CICs gone along. "There is an important lesson here," Blair concluded. "No system of safeguards can reliably guard against misbehavior at the very apex of government, in any government. There is no adequate answer to the question, ‘Who guards the guards?’"
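Pavlov's description reads like a guarded state machine: each command unlocks only the next stage, and an expired time window re-engages the blocking system. A loose sketch, with stage names taken from the excerpt but an invented time limit (the actual Soviet parameters are not given):

```python
class LaunchControl:
    """Illustrative sketch of the staged authorization Pavlov describes.

    Stage names follow the excerpt; the time limit is invented.
    """
    STAGES = ["blocked", "preliminary", "permission", "direct"]

    def __init__(self, time_limit=3):
        self.stage = "blocked"
        self.time_limit = time_limit  # ticks before blocking re-engages
        self.ticks = 0

    def advance(self, command):
        # A command is accepted only if it is the next stage in order;
        # stages cannot be skipped.
        i = self.STAGES.index(self.stage)
        if i + 1 < len(self.STAGES) and command == self.STAGES[i + 1]:
            self.stage = command
            self.ticks = 0
        return self.stage

    def tick(self):
        # If a stage's time window expires, the blocking system
        # automatically reactivates, as in the excerpt.
        self.ticks += 1
        if self.stage != "blocked" and self.ticks > self.time_limit:
            self.stage = "blocked"
        return self.stage
```

The point of the sketch is the structure Blair criticizes: every safeguard here checks sequence and timing, not the intent of whoever issues the commands.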
More (#1) from Twilight of the Bombs:
"Many of the new Republicans on Capitol Hill were young enough to have avoided Vietnam entirely; and most of those who had not been young enough had received deferments. Never before had the American people elected a congressional majority so few of whose members had served in the military. Perhaps the most striking attribute of the new House membership, though, was its startling lack of familiarity with the world outside America’s borders. Fully a third of the new Republican House members had never set foot outside the United States. In the main, many of them considered that a good thing; or if not, then certainly not a deficiency to be rectified. The deep suspicion of the UN reflected in the Contract with America was an accurate reflection of these individuals’ deep distrust of the foreign, in all senses of that term."
And:
"We would have been forced to occupy Baghdad and, in effect, rule Iraq. The coalition would instantly have collapsed, the Arabs deserting it in anger and other allies pulling out as well… Going in and occupying Iraq, thus unilaterally exceeding the U.N.’s mandate, would have destroyed the precedent of international response to aggression we hoped to establish. Had we gone the invasion route, the U.S. could conceivably still be an occupying power in a bitterly hostile land. It would have been a dramatically different—and perhaps barren—outcome."
And:
But something else happened that week in Washington that had an even greater impact on George Bush and Dick Cheney’s thinking. Special sensors that detect chemical, biological, or radiological agents had been installed in the White House to protect the president. On Thursday, 18 October, they went off while Cheney and his aides were working in the Situation Room. "Everyone who had entered the Situation Room that day," the journalist Jane Mayer reported, "was believed to have been exposed, and that included Cheney. ‘They thought there had been a nerve attack,’ a former administration official, who was sworn to secrecy about it, later confided. ‘It was really, really scary. They thought that Cheney was already lethally infected.’" Cheney had recently been briefed about the lack of U.S. defenses against a biowarfare attack, Mayer revealed. Thus, "when the White House sensor registered the presence of such poisons less than a month later, many, including Cheney, believed a nightmare was unfolding. ‘It was a really nerve jangling time,’ the former official said."
And:
Because nuclear weapons are well protected, national-security bureaucracies have postulated that a terrorist group that wants to acquire a nuclear capability will be forced to build its own bomb or bombs. Enriching uranium or breeding plutonium and separating it from its intensely radioactive matrix of spent fuel are both well beyond the capacity of subnational entities. The notion that a government would risk its own security by giving up control of a nuclear weapon to terrorists is nonsensical despite the Bush administration’s use of the argument to justify invading Saddam Hussein’s Iraq. A nuclear attack on United States interests by a terrorist group using a donated bomb would certainly lead to a devastating nuclear counterattack on the country that supplied the weapon, provided the supplier could be determined—a near certainty with nuclear forensics and other means of investigation.
From Harford’s The Undercover Economist Strikes Back:
Funnily enough, the "gross national happiness" thing appears to have emerged as a defensive reflex—the then king of Bhutan, Jigme Singye Wangchuck, announced that "Gross National Happiness is more important than Gross Domestic Product" when pressed on the question of Bhutan’s lack of economic progress in an interview with the Financial Times in 1986. His majesty isn’t the last person to turn to alternative measures of progress for consolation. When Nicolas Sarkozy was president of France he commissioned three renowned economists, Joseph Stiglitz (a Nobel laureate), Amartya Sen (another Nobel laureate) and Jean-Paul Fitoussi, to contemplate alternatives to GDP. One possible reason for President Sarkozy’s enthusiasm was surely that the French spend most of their time not working, and this lowers France’s GDP. The country is likely to look better on most alternative indices. It’s not unreasonable to look at those alternatives, but let’s not kid ourselves: politicians are always on the lookout for statistical measures that reflect well on them.
More (#2) from The Undercover Economist Strikes Back:
Keynes famously remarked, "If economists could manage to get themselves thought of as humble, competent people, on a level with dentists, that would be splendid!" It’s a good joke, but it’s not just a joke; you don’t expect your dentist to be able to forecast the pattern of tooth decay, but you expect that she will be able to give you good practical advice on dental health and to intervene to fix problems when they occur. That is what we should demand from economists: useful advice about how to keep the economy working well, and solutions when the economy malfunctions.
And:
...Three examples spring to mind: banking, behavioral economics and complexity theory.
And:
But macroeconomists? They seem to have ignored behavioral economics almost entirely. Robert Shiller told me that while the microeconomists would show up to argue when he gave seminars on behavioral finance, the macroeconomists just haven’t shown up at all.
More (#1) from The Undercover Economist Strikes Back:
Smith’s point is not that poverty is relative, but that it is a social condition. People don’t become poor just because the median citizen receives a pay raise, whatever Eurostat may say. But they may become poor if something they cannot afford—such as a television—becomes viewed as a social essential. A person can lack the money necessary to participate in society, and that, in an important sense, is poverty.
For me, the poverty lines that make the most sense are absolute poverty lines, adjusted over time to reflect social change. Appropriately enough, one of the attempts to do such work is made by a foundation established by Seebohm Rowntree’s father, Joseph. The Joseph Rowntree Foundation uses focus groups to establish what things people feel it’s now necessary to have in order to take part in society—the list includes a vacation, a no-frills mobile phone and enough money to buy a cheap suit every two or three years. Of course, this is all subjective, but so is poverty. I’m not sure we will get anywhere if we believe that some expert, somewhere—even an expert as thoughtful as Mollie Orshansky or Seebohm Rowntree—is going to be able to nail down, permanently and precisely, what it means to be poor.
Even if we accept the simpler idea of a nutrition-based absolute poverty line, there will always be complications. One obvious one is the cost of living: lower in, say, Alabama than in New York. In principle, absolute poverty lines could and should take account of the cost of living, but the U.S. poverty line does not. A second issue is how to deal with short-term loss of income. A middle manager who loses her job and is unemployed for three months before finding another well-paid position might temporarily fall below the poverty line as far as her income is concerned, but with good prospects, a credit card and savings in the bank, she won’t need to live like a poor person—and she is likely to maintain much of her pre-poverty spending pattern. For this reason, some economists prefer to measure poverty not by what a household earns in a given week, month or year—but by how much money that household spends.
And:
The European Union doesn’t use a comparable poverty line, but in the year 2000, researchers at the University of York tried to work out what EU poverty rates would be as measured against U.S. standards. They estimated poverty rates as high as 48 percent in Portugal and as low as 6 percent in Denmark, with France at 12 percent, Germany at 15 percent and the UK at 18 percent. Clearly, national income is a big influence on absolute poverty (Portugal is a fair bit poorer than Denmark), but so, too, is the distribution of income (France and the UK have similar average incomes, but France is more egalitarian).
From Caplan’s The Myth of the Rational Voter:
This optimistic story is, however, often at odds with the facts. Democracies frequently adopt and maintain policies harmful for most people. Protectionism is a classic example. Economists across the political spectrum have pointed out its folly for centuries, but almost every democracy restricts imports. Even when countries negotiate free trade agreements, the subtext is not, "Trade is mutually beneficial," but, "We’ll do you the favor of buying your imports if you do us the favor of buying ours." Admittedly, this is less appalling than the Berlin Wall, yet it is more baffling. In theory, democracy is a bulwark against socially harmful policies, but in practice it gives them a safe harbor.
More (#2) from The Myth of the Rational Voter:
Start with the easiest case: partisan identification. Both economists and the public almost automatically accept the view that poor people are liberal Democrats and rich people are conservative Republicans. The data paint a quite different picture. At least in the United States, there is only a flimsy connection between individuals’ incomes and their ideology or party. The sign fits the stereotype: As your income rises, you are more likely to be conservative and Republican. But the effect is small, and shrinks further after controlling for race. A black millionaire is more likely to be a Democrat than a white janitor. The Republicans might be the party for the rich, but they are not the party of the rich.
We see the same pattern for specific policies. The elderly are not more in favor of Social Security and Medicare than the rest of the population. Seniors strongly favor these programs, but so do the young. Contrary to the SIVH-inspired bumper sticker "If men got pregnant, abortion would be a sacrament," men appear a little more pro-choice on abortion than women. Compared to the overall population, the unemployed are at most a little more in favor of government-guaranteed jobs, and the uninsured at most a little more supportive of national health insurance. Measures of self-interest predict little about beliefs about economic policy. Even when the stakes are life and death, political self-interest rarely surfaces: Males vulnerable to the draft support it at normal levels, and families and friends of conscripts in Vietnam were in fact more opposed to withdrawal than average.
The broken clock of the SIVH is right twice a day. It fails for party identification, Social Security, Medicare, abortion, job programs, national health insurance, Vietnam, and the draft. But it works tolerably well for a few scattered issues. You might expect to see the exceptions on big questions with a lot of money at stake, but the truth is almost the reverse. The SIVH shines brightest on the banal issue of smoking. Donald Green and Ann Gerken find that smokers and nonsmokers are ideologically and demographically similar, but smokers are a lot more opposed to restrictions and taxes on their favorite vice. Belief in "smokers’ rights" cleanly rises with daily cigarette consumption: fully 61.5% of "heavy" smokers want laxer antismoking policies, but only 13.9% of people who "never smoked" agree. If the SIVH were true, comparable patterns of belief would be everywhere. They are not.
Comment
This is an absurdly narrow definition of self-interest. Many people who are not old have parents who are senior citizens. Men have wives, sisters, and daughters whose well-being is important to them. Etc. Self-interest != solipsistic egoism.
More (#1) from The Myth of the Rational Voter:
In biology, Stalin and other prominent Marxist leaders elevated the views of the quack antigeneticist Trofim Lysenko to state-supported orthodoxy, leading to the dismissal of thousands of geneticists and plant biologists. Lysenkoism hurt Soviet agriculture, and helped trigger the deadliest famine in human history during China’s Great Leap Forward.
In physics, on the other hand, leading scientists enjoyed more intellectual autonomy than any other segment of Soviet society. Internationally respected physicists ran the Soviet atomic project, not Marxist ideologues. When their rivals tried to copy Lysenko’s tactics, Stalin balked. A conference intended to start a witch hunt in Soviet physics was abruptly canceled, a decision that had to originate with Stalin. Holloway recounts a telling conversation between Beria, the political leader of the Soviet atomic project, and Kurchatov, its scientific leader: "Beria asked Kurchatov whether it was true that quantum mechanics and relativity theory were idealist, in the sense of antimaterialist. Kurchatov replied that if relativity theory and quantum mechanics were rejected, the bomb would have to be rejected too. Beria was worried by this reply, and may have asked Stalin to call off the conference."
The "Lysenkoization" of Soviet physics never came.
The best explanation for the difference is that modern physics had a practical payoff that Stalin and other Communist leaders highly valued: nuclear weapons.
And:
How does this process work? Your default is to believe what makes you feel best. But an offer to bet triggers standby rationality. Two facts then come into focus. First, being wrong endangers your net worth. Second, your belief received little scrutiny before it was adopted. Now you have to ask yourself which is worse: Financial loss in a bet, or psychological loss of self-worth? A few prefer financial loss, but most covertly rethink their views. Almost no one "bets the farm" even if — pre-wager — he felt sure.
More (#3) from The Myth of the Rational Voter:
Comment
Allow me to offer an alternative explanation of this phenomenon for consideration. Typically, when polled about their trust in institutions, people tend to trust the executive branch more than the legislature or the courts, and they trust the military far more than they trust civilian government agencies. In the period before 9/11, our long national nightmare of peace and prosperity would generally have made the military less salient in people’s minds, and the spectacles of impeachment and Bush v. Gore would have made the legislative and judicial branches more salient. After 9/11, the legislative agenda quieted down, the legislature temporarily took a back seat to the executive, and the military and national security organs became very high-salience. So when people were asked about the government, the most immediate associations would have been to the parts that were viewed as more trustworthy.
From Richard Rhodes’ The Making of the Atomic Bomb:
Aston goes on in this lecture, delivered in 1936, to speculate about the consequences of that energy release… "There are those about us who say that such research should be stopped by law, alleging that man’s destructive powers are already large enough. So, no doubt, the more elderly and ape-like of our prehistoric ancestors objected to the innovation of cooked food and pointed out the grave dangers attending the use of the newly discovered agency, fire. Personally I think there is no doubt that sub-atomic energy is available all around us, and that one day man will release and control its almost infinite power. We cannot prevent him from doing so and can only hope that he will not use it exclusively in blowing up his next door neighbor."
More (#2) from The Making of the Atomic Bomb:
After Alexander Sachs paraphrased the Einstein-Szilard letter to Roosevelt, Roosevelt demanded action, and Edwin Watson set up a meeting with representatives from the Bureau of Standards, the Army, and the Navy...
Upon asking for some money to conduct the relevant experiments, the Army representative launched into a tirade:
"All right, all right," Adamson snapped, "you’ll get your money."
More (#3) from The Making of the Atomic Bomb:
Frisch’s review article mentioned the possibility of a chain reaction only to discount it. He based that conclusion on Bohr’s argument that the U238 in natural uranium would scatter fast neutrons, slowing them to capture-resonance energies; the few that escaped capture would not suffice, he thought, to initiate a slow-neutron chain reaction in the scarce U235. Slow neutrons in any case could never produce more than a modest explosion, Frisch pointed out; they took too long slowing down and finding a nucleus. As he explained later: "That process would take times of the order of a sizeable part of a millisecond… and for the whole chain reaction to develop would take several milliseconds; once the material got hot enough to vaporize, it would begin to expand and the reaction would be stopped before it got much further. So the thing might blow up like a pile of gunpowder, but no worse, and that wasn’t worth the trouble."
Not long out of Nazi Germany, Frisch found his argument against a violently explosive chain reaction reassuring. It was backed by the work of no less a theoretician than Niels Bohr. With satisfaction he published it.
...Concerned that Hitler might bluff Neville Chamberlain with threats of a new secret weapon, Churchill had collected a briefing from Frederick Lindemann and written to caution the cabinet not to fear "new explosives of devastating power" for at least "several years." The best authorities, the distinguished M.P. emphasized with a nod to Niels Bohr, held that "only a minor constituent of uranium is effective in these processes." That constituent would need to be laboriously extracted for any large-scale effects. "The chain process can take place only if the uranium is concentrated in a large mass," Churchill continued, slightly muddling the point. "As soon as the energy develops, it will explode with a mild detonation before any really violent effects can be produced. It might be as good as our present-day explosives, but it is unlikely to produce anything very much more dangerous." He concluded optimistically: "Dark hints will be dropped and terrifying whispers will be assiduously circulated, but it is to be hoped that nobody will be taken in by them."
...[Several months later] Frisch walked home through ominous blackouts so dark that he sometimes stumbled over roadside benches and could distinguish fellow pedestrians only by the glow of the luminous cards they had taken to wearing in their hatbands. Thus reminded of the continuing threat of German bombing, he found himself questioning his confident Chemical Society review: "Is that really true what I have written?"
Sometime in February 1940 he looked again. There had always been four possible mechanisms for an explosive chain reaction in uranium: (1) slow-neutron fission of U238; (2) fast-neutron fission of U238; (3) slow-neutron fission of U235; and (4) fast-neutron fission of U235. Bohr’s logical distinction between U238 and thorium on the one hand and U235 on the other ruled out (1): U238 was not fissioned by slow neutrons. (2) was inefficient because of scattering and the parasitic effects of the capture resonance of U238. (3) was possibly applicable to power production but too slow for a practical weapon. But what about (4)? Apparently no one in Britain, France or the United States had asked the question quite that way before.
If Frisch now glimpsed an opening into those depths he did so because he had looked carefully at isotope separation and had decided it could be accomplished even with so fugitive an isotope as U235. He was therefore prepared to consider the behavior of the pure substance unalloyed with U238, as Bohr, Fermi and even Szilard had not yet been...
...He shared the problem with [Rudolf] Peierls… [and together they worked out that] eighty generations of neutrons — as many as could be expected to multiply before the swelling explosion separated the atoms of U235 enough to stop the chain reaction — still millionths of a second in total, gave temperatures as hot as the interior of the sun, pressures greater than the center of the earth where iron flows as a liquid. "I worked out the results of what such a nuclear explosion would be," says Peierls. "Both Frisch and I were staggered by them."
And finally, practically: could even a few pounds of U235 be separated from U238? Frisch writes: "I had worked out the possible efficiency of my separation system with the help of Clusius’s formula, and we came to the conclusion that with something like a hundred thousand similar separation tubes one might produce a pound of reasonably pure uranium-235 in a modest time, measured in weeks. At that point we stared at each other and realized that an atomic bomb might after all be possible."
Frisch and Peierls wrote a two-part report of their findings:
The second report, "Memorandum on the properties of a radioactive ‘super-bomb,’" a less technical document, was apparently intended as an alternative presentation for nonscientists. This study explored beyond the technical questions of design and production to the strategic issues of possession and use; it managed at the same time both seemly innocence and extraordinary prescience:
As a weapon, the super-bomb would be practically irresistible. There is no material or structure that could be expected to resist the force of the explosion.
Owing to the spreading of radioactive substances with the wind, the bomb could probably not be used without killing large numbers of civilians, and this may make it unsuitable as a weapon for use by this country.
It is quite conceivable that Germany is, in fact, developing this weapon.
If one works on the assumption that Germany is, or will be, in the possession of this weapon, it must be realised that no shelters are available that would be effective and could be used on a large scale. The most effective reply would be a counter-threat with a similar weapon.
Thus in the first months of 1940 it was already clear to two intelligent observers that nuclear weapons would be weapons of mass destruction against which the only apparent defense would be the deterrent effect of mutual possession. Frisch and Peierls finished their two reports and took them to [Mark] Oliphant. He quizzed the men thoroughly, added a cover letter to their memoranda ("I have considered these suggestions in some detail and have had considerable discussion with the authors, with the result that I am convinced that the whole thing must be taken rather seriously, if only to make sure that the other side are not occupied in the production of such a bomb at the present time") and sent letter and documents off to Henry Thomas Tizard...
"I have often been asked," Otto Frisch wrote many years afterward of the moment when he understood that a bomb might be possible after all, before he and Peierls carried the news to Mark Oliphant, "why I didn’t abandon the project there and then, saying nothing to anybody. Why start on a project which, if it was successful, would end with the production of a weapon of unparalleled violence, a weapon of mass destruction such as the world had never seen? The answer was very simple. We were at war, and the idea was reasonably obvious; very probably some German scientists had had the same idea and were working on it."
Whatever scientists of one warring nation could conceive, the scientists of another warring nation might also conceive, and keep secret. That early, in 1939 and early 1940, the nuclear arms race began.
More (#1) from The Making of the Atomic Bomb:
...If Bohr could be convinced to swing his prestige behind secrecy, the campaign to isolate German nuclear physics research might work.
They met in the evening in Wigner’s office. "Szilard outlined the Columbia data," Wheeler reports, "and the preliminary indications from it that at least two secondary neutrons emerge from each neutron-induced fission. Did this not mean that a nuclear explosive was certainly possible?" Not necessarily, Bohr countered. "We tried to convince him," Teller writes, "that we should go ahead with fission research but we should not publish the results. We should keep the results secret, lest the Nazis learn of them and produce nuclear explosions first. Bohr insisted that we would never succeed in producing nuclear energy and he also insisted that secrecy must never be introduced into physics."
...[Bohr] had worked for decades to shape physics into an international community, a model within its limited franchise of what a peaceful, politically united world might be. Openness was its fragile, essential charter, an operational necessity, as freedom of speech is an operational necessity to a democracy. Complete openness enforced absolute honesty: the scientist reported all his results, favorable and unfavorable, where all could read them, making possible the ongoing correction of error. Secrecy would revoke that charter and subordinate science as a political system—Polanyi’s "republic"—to the anarchic competition of the nation-states.
...March 17 was a Friday; Szilard traveled down to Washington from Princeton with Teller; Fermi stayed the weekend. They got together, reports Szilard, "to discuss whether or not these things"—the Physical Review papers—"should be published. Both Teller and I thought that they should not. Fermi thought that they should. But after a long discussion, Fermi took the position that after all this was a democracy; if the majority was against publication, he would abide by the wish of the majority." Within a day or two the issue became moot. The group learned of the Joliot/von Halban/Kowarski paper, published in Nature on March 18. "From that moment on," Szilard notes, "Fermi was adamant that withholding publication made no sense."
[About a month later, German physicist] Paul Harteck wrote a letter jointly with his assistant to the German War Office: "We take the liberty of calling to your attention the newest development in nuclear physics, which, in our opinion, will probably make it possible to produce an explosive many orders of magnitude more powerful than the conventional ones… That country which first makes use of it has an unsurpassable advantage over the others."
The Harteck letter reached Kurt Diebner, a competent nuclear physicist stuck unhappily in the Wehrmacht’s ordnance department studying high explosives. Diebner carried it to Hans Geiger. Geiger recommended pursuing the research. The War Office agreed.
On the origins of the Einstein–Szilárd letter:
From the horrible weapon which they were about to urge the United States to develop, Szilard, Teller and Wigner — "the Hungarian conspiracy," Merle Tuve was amused to call them — hoped for more than deterrence against German aggression. They also hoped for world government and world peace, conditions they imagined bombs made of uranium might enforce.
More (#5) from The Making of the Atomic Bomb:
"When one considers that right up to the end of the war, in 1945, there was virtually no increase in our heavy-water stocks in Germany… it will be seen that it was the elimination of German heavy-water production in Norway that was the main factor in our failure to achieve a self-sustaining atomic reactor before the war ended.
The race to the bomb, such as it was, ended for Germany on a mountain lake in Norway on a cold Sunday morning in February 1944.
More (#4) from The Making of the Atomic Bomb:
"When [Fermi] finished his [carbon absorption] measurement the question of secrecy again came up. I went to his office and said that now that we had this value perhaps the value ought not to be made public. And this time Fermi really lost his temper; he really thought this was absurd. There was nothing much more I could say, but next time when I dropped in his office he told me that Pegram had come to see him, and Pegram thought that this value should not be published. From that point the secrecy was on."
It was on just in time to prevent German researchers from pursuing a cheap, effective moderator. Bothe’s measurement ended German experiments on graphite.
And:
Turner had published a masterly twenty-nine-page review article on nuclear fission in the January Reviews of Modern Physics, citing nearly one hundred papers that had appeared since Hahn and Strassmann reported their discovery twelve months earlier; the number of papers indicates the impact of the discovery on physics and the rush of physicists to explore it. Turner had also noted the recent Nier/Columbia report confirming the attribution of slow-neutron fission to U235. (He could hardly have missed it; the New York Times and other newspapers publicized the story widely. He wrote Szilard irritably or ingenuously that he found it "a little difficult to figure out the guiding principle [of keeping fission research secret] in view of the recent ample publicity given to the separation of isotopes.") His reading for the review article and the new Columbia measurements had stimulated him to further thought; the result was his Physical Review letter.
...Szilard… answered Turner’s letter on May 30… [and] told him "it might eventually turn out to be a very important contribution" — and proposed he keep it secret. Szilard saw beyond what Turner had seen. He saw that a fissile element bred in uranium could be chemically separated away: that the relatively easy and relatively inexpensive process of chemical separation could replace the horrendously difficult and expensive process of physical separation of isotopes as a way to a bomb.
From Poor Economics:
First, the poor often lack critical pieces of information and believe things that are not true. They are unsure about the benefits of immunizing children; they think there is little value in what is learned during the first few years of education; they don’t know how much fertilizer they need to use; they don’t know which is the easiest way to get infected with HIV; they don’t know what their politicians do when in office. When their firmly held beliefs turn out to be incorrect, they end up making the wrong decision, sometimes with drastic consequences — think of the girls who have unprotected sex with older men or the farmers who use twice as much fertilizer as they should. Even when they know that they don’t know, the resulting uncertainty can be damaging. For example, the uncertainty about the benefits of immunization combines with the universal tendency to procrastinate, with the result that a lot of children don’t get immunized. Citizens who vote in the dark are more likely to vote for someone of their ethnic group, at the cost of increasing bigotry and corruption.
We saw many instances in which a simple piece of information makes a big difference. However, not every information campaign is effective. It seems that in order to work, an information campaign must have several features: It must say something that people don’t already know (general exhortations like "No sex before marriage" seem to be less effective); it must do so in an attractive and simple way (a film, a play, a TV show, a well-designed report card); and it must come from a credible source (interestingly, the press seems to be viewed as credible). One of the corollaries of this view is that governments pay a huge cost in terms of lost credibility when they say things that are misleading, confusing, or false.
Second, the poor bear responsibility for too many aspects of their lives. The richer you are, the more the "right" decisions are made for you. The poor have no piped water, and therefore do not benefit from the chlorine that the city government puts into the water supply. If they want clean drinking water, they have to purify it themselves. They cannot afford ready-made fortified breakfast cereals and therefore have to make sure that they and their children get enough nutrients. They have no automatic way to save, such as a retirement plan or a contribution to Social Security, so they have to find a way to make sure that they save. These decisions are difficult for everyone because they require some thinking now or some other small cost today, and the benefits are usually reaped in the distant future. As such, procrastination very easily gets in the way. For the poor, this is compounded by the fact that their lives are already much more demanding than ours: Many of them run small businesses in highly competitive industries; most of the rest work as casual laborers and need to constantly worry about where their next job will come from. This means that their lives could be significantly improved by making it as easy as possible to do the right thing — based on everything else we know — using the power of default options and small nudges: Salt fortified with iron and iodine could be made cheap enough that everyone buys it. Savings accounts, the kind that make it easy to put in money and somewhat costlier to take it out, can be made easily available to everyone, if need be, by subsidizing the cost for the bank that offers them. Chlorine could be made available next to every source where piping water is too expensive. There are many similar examples.
Third, there are good reasons that some markets are missing for the poor, or that the poor face unfavorable prices in them. The poor get a negative interest rate from their savings accounts (if they are lucky enough to have an account) and pay exorbitant rates on their loans (if they can get one) because handling even a small quantity of money entails a fixed cost. The market for health insurance for the poor has not developed, despite the devastating effects of serious health problems in their lives because the limited insurance options that can be sustained in the market (catastrophic health insurance, formulaic weather insurance) are not what the poor want.
In some cases, a technological or an institutional innovation may allow a market to develop where it was missing. This happened in the case of microcredit, which made small loans at more affordable rates available to millions of poor people, although perhaps not the poorest. Electronic money transfer systems (using cell phones and the like) and unique identification for individuals may radically cut the cost of providing savings and remittance services to the poor over the next few years. But we also have to recognize that in some cases, the conditions for a market to emerge on its own are simply not there. In such cases, governments should step in to support the market to provide the necessary conditions, or failing that, consider providing the service themselves.
We should recognize that this may entail giving away goods or services (such as bed nets or visits to a preventive care center) for free or even rewarding people, strange as it might sound, for doing things that are good for them. The mistrust of free distribution of goods and services among various experts has probably gone too far, even from a pure cost-benefit point of view. It often ends up being cheaper, per person served, to distribute a service for free than to try to extract a nominal fee. In some cases, it may involve ensuring that the price of a product sold by the market is attractive enough to allow the market to develop. For example, governments could subsidize insurance premiums, or distribute vouchers that parents can take to any school, private or public, or force banks to offer "no frills" savings accounts to everyone for a nominal fee. It is important to keep in mind that these subsidized markets need to be carefully regulated to ensure they function well. For example, school vouchers work well when all parents have a way of figuring out the right school for their child; otherwise, they can turn into a way of giving even more of an advantage to savvy parents.
Fourth, poor countries are not doomed to failure because they are poor, or because they have had an unfortunate history. It is true that things often do not work in these countries: Programs intended to help the poor end up in the wrong hands, teachers teach desultorily or not at all, roads weakened by theft of materials collapse under the weight of overburdened trucks, and so forth. But many of these failures have less to do with some grand conspiracy of the elites to maintain their hold on the economy and more to do with some avoidable flaw in the detailed design of policies, and the ubiquitous three Is: ignorance, ideology, and inertia. Nurses are expected to carry out jobs that no ordinary human being would be able to complete, and yet no one feels compelled to change their job description. The fad of the moment (be it dams, barefoot doctors, microcredit, or whatever) is turned into a policy without any attention to the reality within which it is supposed to function. We were once told by a senior government official in India that the village education committees always include the parent of the best student in the school and the parent of the worst student in the school. When we asked how they decided who were the best and worst children, given that there are no tests until fourth grade, she quickly changed subjects. And yet even these absurd rules, once in place, keep going out of sheer inertia.
The good news, if that is the right expression, is that it is possible to improve governance and policy without changing the existing social and political structures. There is tremendous scope for improvement even in "good" institutional environments, and some margin for action even in bad ones. A small revolution can be achieved by making sure that everyone is invited to village meetings; by monitoring government workers and holding them accountable for failures in performing their duties; by monitoring politicians at all levels and sharing this information with voters; and by making clear to users of public services what they should expect—what the exact health center hours are, how much money (or how many bags of rice) they are entitled to.
Finally, expectations about what people are able or unable to do all too often end up turning into self-fulfilling prophecies. Children give up on school when their teachers (and sometimes their parents) signal to them that they are not smart enough to master the curriculum; fruit sellers don’t make the effort to repay their debt because they expect that they will fall back into debt very quickly; nurses stop coming to work because nobody expects them to be there; politicians whom no one expects to perform have no incentive to try improving people’s lives. Changing expectations is not easy, but it is not impossible: After seeing a female pradhan in their village, villagers not only lost their prejudice against women politicians but even started thinking that their daughter might become one, too; teachers who are told that their job is simply to make sure that all the children can read can accomplish that task within the duration of a summer camp. Most important, the role of expectations means that success often feeds on itself. When a situation starts to improve, the improvement itself affects beliefs and behavior. This is one more reason one should not necessarily be afraid of handing things out (including cash) when needed to get a virtuous cycle started.
From The Visioneers:
From Priest & Arkin’s Top Secret America:
It’s not an academic insufficiency. When John M. Custer III was the director of intelligence at U.S. Central Command, he grew angry at how little helpful information came out of the NCTC. In 2007, he visited its director at the time, retired vice admiral John Scott Redd, to say so, loudly. "I told him," Custer explained to me, "that after four and a half years, this organization had never produced one shred of information that helped me prosecute three wars!" Redd was not apologetic. He believed the system worked well, saying it wasn’t designed to serve commanders in the field but policy makers in Washington. That explanation sounded like a poor excuse to Custer. Mediocre information was mediocre information, no matter on whose desk it landed.
Two years later, as head of the army’s intelligence school at Fort Huachuca, Arizona, Custer still got red-faced when he recalled that day and his general frustration with Washington’s bureaucracy. "Who has the mission of reducing redundancy and ensuring everybody doesn’t gravitate to the lowest-hanging fruit?" he asked. "Who orchestrates what is produced so that everybody doesn’t produce the same thing?" The answer in Top Secret America was, dangerously, nobody.
This sort of wasteful redundancy is endemic in Top Secret America, not just in analysis but everywhere. Born of the blank check that Congress first gave national security agencies in the wake of the 9/11 attacks, Top Secret America’s wasteful duplication was cultivated by the bureaucratic instinct that bigger is always better, and by the speed at which big departments like defense allowed their subagencies to grow.
More (#2) from Top Secret America:
These developments were heartbreaking to those who had spent years building up Northern Command. But the fact that Northern Command would even continue to exist as a major, four-star-led, geographic military command, with virtually no responsibilities, no competencies, and no unique role to fill, demonstrated the resiliency of institutions created in the wake of 9/11 and just how difficult it would be to ever actually shrink Top Secret America. Northern Command, with its $100 million renovated concrete headquarters, its two dozen generals, its redundant command centers, its gigantic electronic map, and its multitude of contractors, looked as busy as ever, putting together agendas and exercises and PowerPoint briefings in the name of keeping the nation safe.
And, on JSOC:
This secretive organization, created in 1980 but completely reinvented in 2003, flies ten times more drones than the CIA. Some are armed with Hellfire missiles; most carry video cameras, sensors, and signals intercept equipment. When the CIA’s paramilitary Special Activities Division needs help, or when the president decides to send agency operatives on a covert mission into a foreign country, it often borrows troops from this same organization, temporarily deputizing them when necessary in order to get the missions done.
The CIA has captured, imprisoned, and interrogated close to a hundred terrorists in secret prisons around the world. Troops from this other secret military unit have captured and interrogated ten times as many. They hold them in prisons in Iraq and Afghanistan that they alone control and, for at least three years after 9/11, they sometimes ignored U.S. military rules for interrogation and used almost whatever means they thought might be most effective.
Of all the top secret units fighting terrorism after 9/11, this is the single organization that has killed and captured more al-Qaeda members around the world and destroyed more of their training camps and safe houses than the rest of the U.S. government forces combined. And although it greatly benefited from the technology produced by Top Secret America, the secret to its success has been otherwise escaping the behemoth created in response to the 9/11 attacks.
And:
Worse, some JSOC Task Force 121 members were beating prisoners—something that would before long become known to Iraqis and the rest of the world. Indeed, even before the Abu Ghraib prison photos began circulating among investigators, a confidential report warned army generals that some JSOC interrogators were assaulting prisoners and hiding them in secret facilities, and that this could be feeding the Iraqi insurgency by "making gratuitous enemies," reported the Washington Post’s Josh White, who first obtained a copy of the report by retired colonel Stuart A. Herrington.
That wasn’t the only extreme: in an effort to force insurgents to turn themselves in, some JSOC troops also detained mothers, wives, and daughters when the men in a house they were looking for were not at home. These detentions and other massive sweep operations flooded prisons with terrified, innocent people, some of them more like hostages than suspects, which was particularly counterproductive to winning Iraqi support, Herrington noted.
And:
When it was finished, the new administration had "changed virtually nothing," said Rizzo. "Things continued. Authorities were continued that were originally granted by President Bush beginning shortly after 9/11. Those were all picked up, reviewed, and endorsed by the Obama administration."
Like that of his predecessor, Obama’s Justice Department has also aggressively used the state secrets privilege to quash court challenges to clandestine government actions. The privilege is a rule that permits the executive branch to withhold evidence in a court case when it believes national security would be harmed by its public release. From January 2001 to January 2009, the government invoked the state secrets privilege in more than one hundred cases, which is more than five times the number of cases invoked in all previous administrations, according to a study by the Georgetown Law Center on National Security and the Law. The Obama administration also initiated more leak investigations against national security whistle-blowers and journalists than had the Bush administration, hoping, at the very least, to scare government employees with security clearances into not speaking with reporters.
And the growth of Top Secret America continued, too. In the first month of the administration, four new intelligence and Special Operations organizations that had already been in the works were activated. But by the end of 2009, some thirty-nine new or reorganized counterterrorism organizations came into being. This included seven new counterterrorism and intelligence task forces overseas and ten Special Operations and military intelligence units that were created or substantially reorganized. The next year, 2010, was just as busy: Obama’s Top Secret America added twenty-four new organizations and a dozen new task forces and military units, although the wars in Afghanistan and Iraq were winding down.
I wonder if the security-industrial complex bureaucracy is any better in other countries.
Which sense of "better" do you have in mind? :-)
More efficient.
KGB had a certain aura, though I don’t know if its descendants have the same cachet. Israeli security is supposed to be very good.
Stay tuned; The Secret History of MI6 and Defend the Realm are in my audiobook queue. :)
More (#1) from Top Secret America:
And this is exactly what a terrorist organization would want. With no hope of defeating a much better equipped and professional nation-state army, terrorists hoped to get their adversary to overreact, to bleed itself dry, and to trample the very values it tried to protect. In this sense, al-Qaeda—though increasingly short on leaders and influence (a fact no one in Top Secret America would ever say publicly, just in case there was another attack)—was doing much more damage to its enemy than it had on 9/11.
And:
But when that dreaded but awaited intelligence about threats originating in Yemen reached the National Counterterrorism Center for analysis, it arrived buried within the daily load of thousands of snippets of general terrorist-related data from around the world that Leiter said all needed to be given equal attention.
Instead of searching one network of computerized intelligence reports, NCTC analysts had to switch from database to database, from hard drive to hard drive, from screen to screen, merely to locate the Yemen material that might be interesting to study further. If they wanted raw material—transcripts of voice intercepts or email exchanges that had not been analyzed and condensed by the CIA or NSA—they had to use liaison officers assigned to those agencies to try to find it, or call people they happened to know there and try to persuade them to locate it. As secret U.S. military operations in Yemen intensified and the chatter about a possible terrorist strike in the United States increased, the intelligence agencies further ramped up their effort. That meant that the flood of information coming into the NCTC became a torrent, a fire hose instead of an eyedropper.
Somewhere in that deluge was Umar Farouk Abdulmutallab. He showed up in bits and pieces. In August, NSA intercepted al-Qaeda conversations about an unidentified "Nigerian." They had only a partial name. In September, the NSA intercepted a communication about Awlaki—the very same person Major Hasan had contacted—facilitating transportation for someone through Yemen. There was also a report from the CIA station in Nigeria of a father who was worried about his son because he had become interested in radical teachings and had gone to Yemen.
But even at a time of intense secret military operations going on in the country, the many clues to what was about to happen went missing in the immensity and complexity of the counterterrorism system. Abdulmutallab left Yemen, returned to Nigeria, and on December 16 purchased a one-way ticket to the United States. Once again, connections hiding in plain sight went unnoticed.
"There are so many people involved here," Leiter later told Congress.
"Everyone had the dots to connect," DNI Blair explained to lawmakers. "But I hadn’t made it clear exactly who had primary responsibility."
Waltzing through the gaping holes in the security net, Abdulmutallab was able to step aboard Northwest Airlines Flight 253 without any difficulty. As the plane descended toward Detroit, he returned from the bathroom with a pillow over his stomach and tried to ignite explosives hidden in his underwear. And just as the billions of dollars and tens of thousands of security-cleared personnel of the massive 9/11 apparatus hadn’t prevented Abdulmutallab from getting to this moment, it did nothing now to prevent disaster. Instead, a Dutch video producer, Jasper Schuringa, dove across four airplane seats to tackle the twenty-three-year-old when he saw him trying to light something on fire.
The secretary of Homeland Security, Janet Napolitano, was the first to address the public afterward. She was happy to announce that "once the incident occurred, the system worked." The next day, however, she admitted the system that had allowed him onto the plane with an explosive had "failed miserably."
"We didn’t follow up and prioritize the stream of intelligence," White House counterterrorism adviser John O. Brennan explained later, "because no one intelligence entity, or team, or task force, was assigned responsibility for doing that follow-up investigation."
Incredible as it was, after all this time, after all these reorganizations, after all the money spent to get things right, no one person was actually responsible for counterterrorism. And no one is responsible today, either.
From Pentland’s Social Physics:
What Kelly found was that star producers engage in "preparatory exploration"; that is, they develop dependable two-way streets to experts ahead of time, setting up a relationship that will later help the star producer complete critical tasks. Moreover, the stars’ networks differed from typical workers’ networks in two important respects. First, they maintained stronger engagement with the people in their networks, so that these people responded more quickly and helpfully. As a result, the stars rarely spent time spinning their wheels or going down blind alleys.
Second, star performers’ networks were also more diverse. Average performers saw the world only from the viewpoint of their job, and kept pushing the same points. Stars, on the other hand, had people in their networks with a more diverse set of work roles, so they could adopt the perspectives of customers, competitors, and managers. Because they could see the situation from a variety of viewpoints, they could develop better solutions to problems.
More (#2) from Social Physics:
On average, it turned out that the social network incentive scheme worked almost four times more efficiently than a traditional individual-incentive market approach. For the buddies who had the most interactions with their assigned target, the social network incentive worked almost eight times better than the standard market approach.
And better yet, it stuck. People who received social network incentives maintained their higher levels of activity even after the incentives disappeared. These small but focused social network incentives generated engagement around new, healthier habits of behavior by creating social pressure for behavior change in the community.
More (#1) from Social Physics:
And:
One particular health behavior that we focused on was weight change and on whether this was more influenced by the behavior of friends or by peers in the surrounding community...
...exposure to the behavior examples that surrounded each individual dominated everything else we examined in this study. It was more important than personal factors, such as weight gain by friends, gender, age, or stress/happiness, and even more than all these other factors combined. Put another way, the effect of exposure to the surrounding set of behavior examples was about as powerful as the effect of IQ on standardized test scores.
It might be asked how we can know that exposure to the surrounding behaviors actually caused the idea flow; perhaps it is merely a correlation. The answer is in this experiment we could make quantitative, time-synchronized predictions, which make other noncausal explanations fairly implausible. Perhaps even more persuasively, we have also been able to use the connection between exposure and behavior to predict outcomes in several different situations, and even to manipulate exposure in order to cause behavior changes. Finally, there also have been careful quantitative laboratory experiments that show similar effects and in which the causality is certain.
Therefore, people seem to pick up at least some habits from exposure to those of peers (and not just friends). When everyone else takes that second slice of pizza, we probably will also. The fact that exposure turned out to be more important for driving idea flow than all the other factors combined highlights the overarching importance of automatic social learning in shaping our lives.
And:
We also asked the students a wide range of questions about their interest in politics, involvement in politics, political leanings, and finally (after the election), we inquired which candidate had received their vote. In total, this produced more than five hundred thousand hours of automatically generated data about their interaction patterns, which we then combined with survey data about their beliefs, attitudes, personality, and more.
When sifting through these hundreds of gigabytes of data, we found that the amount of exposure to people possessing similar opinions accurately predicted both the students’ level of interest in the presidential race and their liberal-conservative balance. This collective opinion effect was very clear: More exposure to similar views made the students more extreme in their own views.
Most important, though, this meant that the amount of exposure to people with similar views also predicted the students’ eventual voting behavior. For first-year students, the size of this social exposure effect was similar to the weight gain ones I described in the previous section, while for older students, who presumably had more fixed attitudes, the size of the effect was less but still quite significant.
But what did not predict their voting behavior? The views of the people they talked politics with, and the views of their friends. Just as with weight gain, it was the behavior of the surrounding peer group—the set of behavior examples that they were immersed in—that was the most powerful force in driving idea flow and shaping opinion.
From de Mesquita and Smith’s The Dictator’s Handbook:
Equally, we may well wonder: Why are Wall Street executives so politically tone-deaf that they dole out billions in bonuses while plunging the global economy into recession? Why is the leadership of a corporation, on whose shoulders so much responsibility rests, decided by so few people? Why are failed CEOs retained and paid handsomely even as their company’s shareholders lose their shirts?
In one form or another, these questions of political behavior pop up again and again. Each explanation, each story, treats the errant leader and his or her faulty decision making as a one-off, one-of-a-kind situation. But there is nothing unique about political behavior.
...We look at each case and conclude they are different, uncharacteristic anomalies. Yet they are held together by the logic of politics, the rules ruling rulers.
...To understand politics properly, we must modify one assumption in particular: we must stop thinking that leaders can lead unilaterally.
No leader is monolithic. If we are to make any sense of how power works, we must stop thinking that North Korea’s Kim Jong Il can do whatever he wants. We must stop believing that Adolf Hitler or Joseph Stalin or Genghis Khan or anyone else is in sole control of their respective nation. We must give up the notion that Enron’s Kenneth Lay or British Petroleum’s (BP) Tony Hayward knew about everything that was going on in their companies, or that they could have made all the big decisions. All of these notions are flat out wrong because no emperor, no king, no sheikh, no tyrant, no chief executive officer (CEO), no family head, no leader whatsoever can govern alone.
...For leaders, the political landscape can be broken down into three groups of people: the nominal selectorate, the real selectorate, and the winning coalition.
The nominal selectorate includes every person who has at least some legal say in choosing their leader. In the United States it is everyone eligible to vote, meaning all citizens aged eighteen and over. Of course, as every citizen of the United States must realize, the right to vote is important, but at the end of the day no individual voter has a lot of say over who leads the country. Members of the nominal selectorate in a universal-franchise democracy have a toe in the political door, but not much more. In that way, the nominal selectorate in the United States or Britain or France doesn’t have much more power than its counterparts, the "voters," in the old Soviet Union. There, too, all adult citizens had the right to vote, although their choice was generally to say Yes or No to the candidates chosen by the Communist Party rather than to pick among candidates. Still, every adult citizen of the Soviet Union, where voting was mandatory, was a member of the nominal selectorate.
The second stratum of politics consists of the real selectorate. This is the group that actually chooses the leader. In today’s China (as in the old Soviet Union), it consists of all voting members of the Communist Party; in Saudi Arabia’s monarchy it is the senior members of the royal family; in Great Britain, the voters backing members of parliament from the majority party.
The most important of these groups is the third, the subset of the real selectorate that makes up a winning coalition. These are the people whose support is essential if a leader is to survive in office. In the USSR the winning coalition consisted of a small group of people inside the Communist Party who chose candidates and who controlled policy. Their support was essential to keep the commissars and general secretary in power. These were the folks with the power to overthrow their boss—and he knew it. In the United States the winning coalition is vastly larger.
It consists of the minimal number of voters who give the edge to one presidential candidate (or, at the legislative level in each state or district, to a member of the House or Senate) over another. For Louis XIV, the winning coalition was a handful of members of the court, military officers, and senior civil servants without whom a rival could have replaced the king.
Fundamentally, the nominal selectorate is the pool of potential support for a leader; the real selectorate includes those whose support is truly influential; and the winning coalition extends only to those essential supporters without whom the leader would be finished. A simple way to think of these groups is: interchangeables, influentials, and essentials.
In the United States, the voters are the nominal selectorate — interchangeables. As for the real selectorate — influentials — the electors of the electoral college really choose the president (just like the party faithful picked their general secretary back in the USSR), but the electors nowadays are normatively bound to vote the way their state’s voters voted, so they don’t really have much independent clout in practice. In the United States, the nominal selectorate and real selectorate are therefore pretty closely aligned. This is why, even though you’re only one among many voters, interchangeable with others, you still feel like your vote is influential — that it counts and is counted. The winning coalition — essentials — in the United States is the smallest bunch of voters, properly distributed among the states, whose support for a candidate translates into a presidential win in the electoral college. And while the winning coalition (essentials) is a pretty big fraction of the nominal selectorate (interchangeables), it doesn’t have to be even close to a majority of the US population. In fact, given the federal structure of American elections, it’s possible to control the executive and legislative branches of government with as little as about one fifth of the vote, if the votes are really efficiently placed...
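The "about one fifth of the vote" claim above can be illustrated with a toy calculation (my own, not from the book): in a winner-take-all system of equal-sized states, a candidate needs only a bare majority of the votes in a bare majority of the states, and can get zero votes everywhere else.

```python
# Toy model (my illustration, not the authors'): minimal share of the
# national vote needed to win a winner-take-all federal election in
# which all states have equal population and equal weight.

def minimal_winning_share(n_states: int, voters_per_state: int) -> float:
    """Fraction of the total national vote needed under the toy model."""
    states_needed = n_states // 2 + 1                 # bare majority of states
    votes_per_won_state = voters_per_state // 2 + 1   # bare majority in each won state
    total_votes = n_states * voters_per_state
    return states_needed * votes_per_won_state / total_votes

share = minimal_winning_share(n_states=50, voters_per_state=1_000_000)
print(f"{share:.1%}")  # prints "26.0%"
```

With equal states the floor is about 26 percent; unequal state weights, turnout differences, and plurality wins in multi-candidate races push the minimum lower still, toward the book's one-fifth figure.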
Looking elsewhere we see that there can be a vast range in the size of the nominal selectorate, the real selectorate, and the winning coalition. Some places, like North Korea, have a mass nominal selectorate in which everyone gets to vote — it’s a joke, of course — a tiny real selectorate who actually pick their leader, and a winning coalition that surely is no more than maybe a couple of hundred people (if that) and without whom even North Korea’s first leader, Kim Il Sung, could have been reduced to ashes. Other nations, like Saudi Arabia, have a tiny nominal and real selectorate, made up of the royal family and a few crucial merchants and religious leaders. The Saudi winning coalition is perhaps even smaller than North Korea’s.
...These three groups provide the foundation of all that’s to come in the rest of this book, and, more importantly, the foundation behind the working of politics in all organizations, big and small. Variations in the sizes of these three groups give politics a three-dimensional structure that clarifies the complexity of political life. By working out how these dimensions intersect—that is, each organization’s mix in the size of its interchangeable, influential, and essential groups—we can come to grips with the puzzles of politics. Differences in the size of these groups across states, businesses, and any other organization, as you will see, decide almost everything that happens in politics—what leaders can do, what they can and can’t get away with, to whom they answer, and the relative qualities of life that everyone under them enjoys (or, too often, doesn’t enjoy).
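The three groups above are nested, which can be sketched as sets (a minimal illustration of my own; the membership numbers are placeholders, not data from the book):

```python
# Sketch (my own illustration) of the book's three nested groups, using
# the USSR-like example from the excerpt. Group sizes are placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class Polity:
    nominal_selectorate: frozenset  # interchangeables: legal say in choosing the leader
    real_selectorate: frozenset     # influentials: actually choose the leader
    winning_coalition: frozenset    # essentials: support the leader needs to survive

    def __post_init__(self):
        # Essentials are a subset of influentials, who are a subset of interchangeables.
        assert self.winning_coalition <= self.real_selectorate <= self.nominal_selectorate

citizens = frozenset(range(1000))  # all adult voters
party    = frozenset(range(100))   # party members
inner    = frozenset(range(10))    # inner circle

ussr_like = Polity(citizens, party, inner)
print(len(ussr_like.winning_coalition) / len(ussr_like.nominal_selectorate))  # prints 0.01
```

The book's claim is that the ratio printed here, essentials to interchangeables, is what drives most of the differences in how leaders behave.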
More (#2) from The Dictator’s Handbook:
And:
We have seen that larger coalition systems are extremely selective in their decisions about waging war and smaller coalition systems are not. Democracies only fight when negotiation proves unfruitful and the democrat’s military advantage is overwhelming, or when, without fighting, the democrat’s chances of political survival are slim to none. Furthermore, when war becomes necessary, large-coalition regimes make an extra effort to win if the fight proves difficult. Small-coalition leaders do not, if doing so uses up treasure that would be better spent on private rewards that keep their cronies loyal. And finally, when a war is over, larger coalition leaders make more effort to enforce the peace and the policy gains they sought through occupation or the imposition of a puppet regime. Small-coalition leaders mostly take the valuable private goods for which they fought and go home, or take over the territory they conquered so as to enjoy the economic fruits of their victory for a long time.
Clausewitz had war right. War, it seems, truly is just domestic politics as usual. For all the philosophical talk of "a just war," and all the strategizing about balances of power and national interests, in the end, war, like all politics, is about staying in power and controlling as many resources as possible. It is precisely this predictability and normality of war that makes it, like all the pathologies of politics we have discussed, susceptible to being understood and fixed.
More (#1) from The Dictator’s Handbook:
The resource curse enables autocrats to massively reward their supporters and accumulate enormous wealth. This drives prices to the stratospheric heights seen in Luanda, where wealthy expatriates and lucky coalition members can have foie gras flown in from France every day. Yet to make sure the people cannot coordinate, rebel, and take control of the state, leaders endeavor to keep those outside the coalition poor, ignorant, and unorganized. It is ironic that while oil revenues provide the resources to fix societal problems, they create political incentives to make them far worse.
This effect is much less pernicious in democracies. The trouble is that once a state profits from mineral wealth, it is unlikely to democratize. The easiest way to incentivize the leader to liberalize policy is to force him to rely on tax revenue to generate funds. Once this happens, the incumbent can no longer suppress the population because the people won’t work if he does.
The upshot is that the resource curse can be lifted. If aid organizations want to help the peoples of oil-rich nations, then the logic of our survival-based argument suggests they would achieve more by spending their donations lobbying the governments in the developed world to increase the tax on petroleum than by providing assistance overseas. By raising the price of oil and gas, such taxes would reduce worldwide demand for oil. This in turn would reduce oil revenues and make leaders more reliant on taxation.
From Ferguson’s The Ascent of Money:
More (#1) from The Ascent of Money:
With its domestic bond market exhausted and only two paltry foreign loans, the Confederate government was forced to print unbacked paper dollars to pay for the war and its other expenses, 1.7 billion dollars’ worth in all. Both sides in the Civil War had to print money, it is true. But by the end of the war the Union’s ‘greenback’ dollars were still worth about 50 cents in gold, whereas the Confederacy’s ‘greybacks’ were worth just one cent, despite a vain attempt at currency reform in 1864. The situation was worsened by the ability of Southern states and municipalities to print paper money of their own; and by rampant forgery, since Confederate notes were crudely made and easy to copy. With ever more paper money chasing ever fewer goods, inflation exploded. Prices in the South rose by around 4,000 per cent during the Civil War. By contrast, prices in the North rose by just 60 per cent. Even before the surrender of the principal Confederate armies in April 1865, the economy of the South was collapsing, with hyperinflation as the sure harbinger of defeat.
The Rothschilds had been right. Those who had invested in Confederate bonds ended up losing everything, since the victorious North pledged not to honour the debts of the South. In the end, there had been no option but to finance the Southern war effort by printing money. It would not be the last time in history that an attempt to buck the bond market would end in ruinous inflation and military humiliation.
The Medici Bank is pretty interesting. A while ago I wrote the Wikipedia article on it (https://en.wikipedia.org/wiki/Medici_Bank); LWers might find it interesting how international finance worked back then.
From Scahill’s Dirty Wars:
GST was also the vehicle for snatch operations, known as extraordinary renditions. Under GST, the CIA began coordinating with intelligence agencies in various countries to establish "Status of Forces" agreements to create secret prisons where detainees could be held, interrogated and kept away from the Red Cross, the US Congress and anything vaguely resembling a justice system. These agreements not only gave immunity to US government personnel, but to private contractors as well. The administration did not want to put terror suspects on trial, "because they would get lawyered up," said Jose Rodriguez, who at the time ran the CIA’s Directorate of Operations, which was responsible for all of the "action" run by the Agency. "[O]ur job, first and foremost, is to obtain information." To obtain that information, authorization was given to interrogators to use ghoulish, at times medieval, techniques on detainees, many of which were developed by studying the torture tactics of America’s enemies. The War Council lawyers issued a series of legal documents, later dubbed the "Torture Memos" by human rights and civil liberties organizations, that attempted to rationalize the tactics as necessary and something other than torture...
The CIA began secretly holding prisoners in Afghanistan on the edge of Bagram Airfield, which had been commandeered by US military forces. In the beginning, it was an ad hoc operation with prisoners stuffed into shipping containers. Eventually, it expanded to a handful of other discrete sites, among them an underground prison near the Kabul airport and an old brick factory north of Kabul. Doubling as a CIA substation, the factory became known as the "Salt Pit" and would be used to house prisoners, including those who had been snatched in other countries and brought to Afghanistan. CIA officials who worked on counterterrorism in the early days after 9/11 said that the idea for a network of secret prisons around the world was not initially a big-picture plan, but rather evolved as the scope of operations grew. The CIA had first looked into using naval vessels and remote islands—such as uninhabited islands dotting Lake Kariba in Zambia—as possible detention sites at which to interrogate suspected al Qaeda operatives. Eventually, the CIA would build up its own network of secret "black sites" in at least eight countries, including Thailand, Poland, Romania, Mauritania, Lithuania and Diego Garcia in the Indian Ocean. But in the beginning, lacking its own secret prisons, the Agency began funneling suspects to Egypt, Morocco and Jordan for interrogation. By using foreign intelligence services, prisoners could be freely tortured without any messy congressional inquiries.
In the early stages of the GST program, the Bush administration faced little obstruction from Congress. Democrats and Republicans alike gave tremendous latitude to the administration to prosecute its secret war. For its part, the White House at times refused to provide details of its covert operations to the relevant congressional oversight committees but met little protest for its reticence. The administration also unilaterally decided to reduce the elite Gang of Eight members of Congress to just four: the chairs and ranking members of the House and Senate intelligence committees. Those members are prohibited from discussing these briefings with anyone. In effect, it meant that Congress had no oversight of the GST program. And that was exactly how Cheney wanted it.
More (#2) from Dirty Wars:
During the course of their imprisonment, some of the prisoners were confined in boxes and subjected to prolonged nudity—sometimes lasting for several months. Some of them were kept for days at a time, naked, in "stress standing positions," with their "arms extended and chained above the head." During this torture, they were not allowed to use a toilet and "had to defecate and urinate over themselves." Beatings and kickings were common, as was a practice of placing a collar around a prisoner’s neck and using it to slam him against walls or yank him down hallways. Loud music was used for sleep deprivation, as was temperature manipulation. If prisoners were perceived to be cooperating, they were given clothes to wear. If they were deemed uncooperative, they’d be stripped naked. Dietary manipulation was used—at times the prisoners were put on liquid-only diets for weeks at a time. Three of the prisoners told the ICRC they had been waterboarded. Some of them were moved to as many as ten different sites during their imprisonment. "I was told during this period that I was one of the first to receive these interrogation techniques, so no rules applied," one prisoner, taken early on in the war on terror, told the ICRC. "I felt like they were experimenting and trying out techniques to be used later on other people."
And:
The heavy costs of that strategic redirection to the larger counterterrorism mission were of deep concern to Lieutenant Colonel Anthony Shaffer, a senior military intelligence officer who was CIA trained and had worked for the DIA and JSOC. Shaffer ran a task force, Stratus Ivy, that was part of a program started in the late 1990s code-named Able Danger. Utilizing what was then cutting-edge "data mining" technology, the program was operated by military intelligence and the Special Operations Command and aimed at identifying al Qaeda cells globally. Shaffer and some of his Able Danger colleagues claimed that they had uncovered several of the 9/11 hijackers a year before the attacks but that no action was taken against them. He told the 9/11 Commission he felt frustrated when the program was shut down and believed it was one of the few effective tools the United States had in the fight against al Qaeda pre-9/11. After the attacks, Shaffer volunteered for active duty and became the commander of the DIA’s Operating Base Alpha, which Shaffer said "conducted clandestine antiterrorist operations" in Africa. Shaffer was running the secret program, targeting al Qaeda figures who might flee Afghanistan and seek shelter in Somalia, Liberia and other African nations. It "was the first DIA covert action of the post–Cold War era, where my officers used an African national military proxy to hunt down and kill al Qaeda terrorists," Shaffer recalled.
Like many other experienced intelligence officers who had been tracking al Qaeda prior to 9/11, Shaffer believed that the focus was finally placed correctly on destroying the terror network and killing or capturing its leaders. But then all resources were repurposed for the Iraq invasion. "I saw the Bush administration lunacy up close and personal," Shaffer said. After a year and a half of running the African ops, "I was forced to shut down Operating Base Alpha so that its resources could be used for the Iraq invasion."
Shaffer was reassigned as an intelligence planner on the DIA team that helped feed information on possible Iraqi WMD sites to the advance JSOC teams that covertly entered Iraq ahead of the invasion. "It yielded nothing," he alleged. "As we now know, no WMD were ever found." He believed that shifting the focus and resources to Iraq was a grave error that allowed bin Laden to continue operating for nearly another decade. Shaffer was eventually sent to Afghanistan, where he would clash with US military leaders over his proposals to run operations into Pakistan to target the al Qaeda leaders who were hiding there.
More (#1) from Dirty Wars:
The CIA flew Libi to the USS Bataan in the Arabian Sea, which was also housing the so-called American Taliban, John Walker Lindh, who had been picked up in Afghanistan, and other foreign fighters. From there, Libi was transferred to Egypt, where he was tortured by Egyptian agents. Libi’s interrogation focused on a goal that would become a centerpiece of the rendition and torture program: proving an Iraq connection to 9/11. Once he was in CIA custody, interrogators pummeled Libi with questions attempting to link the attacks and al Qaeda to Iraq. Even after the interrogators working Libi over had reported that they had broken him and that he was "compliant," Cheney’s office directly intervened and ordered that he continue to be subjected to enhanced interrogation techniques. "After real macho interrogation—this is enhanced interrogation techniques on steroids—he admitted that al Qaeda and Saddam were working together. He admitted that al Qaeda and Saddam were working together on WMDs," former senior FBI interrogator Ali Soufan told PBS’s Frontline. But the Defense Intelligence Agency (DIA) cast serious doubt on Libi’s claims at the time, observing in a classified intelligence report that he "lacks specific details" on alleged Iraqi involvement, asserting that it was "likely this individual is intentionally misleading" his interrogators. Noting that he had been "undergoing debriefs for several weeks," the DIA analysis concluded Libi may have been "describing scenarios to the debriefers that he knows will retain their interest." Despite such doubts, Libi’s "confession" would later be given to Secretary of State Powell when he made the administration’s fraudulent case at the United Nations for the Iraq War. In that speech Powell would say, "I can trace the story of a senior terrorist operative telling how Iraq provided training in these weapons to al Qaeda." Later, after these claims were proven false, Libi, according to Soufan, admitted he had lied. 
"I gave you what you want[ed] to hear," he said. "I want[ed] the torture to stop. I gave you anything you want[ed] to hear."
And:
Saleh’s entourage was given a list of several al Qaeda suspects that the Yemeni regime could target as a show of good faith. The next month, Saleh ordered his forces to raid a village in Marib Province, where Abu Ali al Harithi, a lead suspect in the Cole bombing, and other militants were believed to be residing. The operation by Yemeni special forces was a categorical failure. Local tribesmen took several of the soldiers hostage and the targets of the raid allegedly escaped unharmed. The soldiers were later released through tribal mediators, but the action angered the tribes and served as a warning to Saleh to stay out of Marib. It was the beginning of what would be a complex and dangerous chess match for Saleh as he made his first moves to satisfy Washington’s desire for targeted killing in Yemen while maintaining his own hold on power.
And:
The use of these new techniques was discussed at the National Security Council, including at meetings attended by Rumsfeld and Condoleezza Rice. By the summer of 2002, the War Council legal team, led by Cheney’s consigliere, David Addington, had developed a legal rationale for redefining torture so narrowly that virtually any tactic that did not result in death was fair game. "For an act to constitute torture as defined in [the federal torture statute], it must inflict pain that is difficult to endure. Physical pain amounting to torture must be equivalent in intensity to the pain accompanying serious physical injury, such as organ failure, impairment of bodily function, or even death," Assistant Attorney General for the Office of Legal Counsel Jay Bybee asserted in what would become an infamous legal memo rationalizing the torture of US prisoners. "For purely mental pain or suffering to amount to torture under [the federal torture statute], it must result in significant psychological harm of significant duration, e.g., lasting for months or even years." A second memo signed by Bybee gave legal justification for using a specific series of "enhanced interrogation techniques," including waterboarding. "There was not gonna be any deniability," said the CIA’s Rodriguez, who was coordinating the interrogation of prisoners at the black sites. "In August of 2002, I felt I had all the authorities that I needed, all the approvals that I needed. The atmosphere in the country was different. Everybody wanted us to save American lives." He added, "We went to the border of legality. We went to the border, but that was within legal bounds."
Foreign fighters show up everywhere. And now there’s the whole Islamic State issue. Perhaps all the world needs is more foreign legions doing good things. The FFL is over-recruited, after all. Heck, we could even deal with the refugee crisis by offering visas to those mercenaries. It sure as hell would be more popular than selling visas and citizenship, since people always get antsy about inequality and having fewer downward social comparisons.
Passage from Patterson’s Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market:
The two professors had noticed something very odd in the data: Nasdaq market makers rarely if ever posted an order at an "odd-eighth" — as in $10⅛, $10⅜, $10⅝, or $10⅞ (recall that this was a time when stocks were quoted in fractions of a dollar, not pennies). Instead, they found that for heavily traded stocks such as Apple, market makers posted odd-eighth quotes roughly 1 percent of the time.
When they looked at spreads for stocks on the NYSE or American Stock Exchange, by comparison, they found a consistent use of odd-eighths. That meant Nasdaq market makers must be deliberately colluding to keep spreads artificially wide. Instead of the minimum spread of 12.5 cents (one-eighth of a dollar), spreads were usually twenty-five or fifty cents wide. That extra 12.5 cents was coming directly out of the pockets of investors. Add it up, and Nasdaq’s market makers were siphoning billions out of the pockets of investors.
...Inside the SEC, the study erupted like a bomb. The Nasdaq investigation was assigned to a staid, low-key attorney in the enforcement division named Leo Wang. Socially awkward, but aggressive as a pit bull, Wang had gained prestige within the commission for handling a high-profile bond-manipulation case against Salomon Brothers in the early 1990s.… [Wang] started hammering Nasdaq dealers with subpoenas, demanding transaction records. He hit the jackpot when he forced the firms to hand over truckloads of tape recordings going back years. Traders had been oblivious to the recordings, which were made as a backup in the event of a dispute over the details of a trade. Inside the SEC, the enormity of the task of reviewing the tapes at first seemed daunting — it could take weeks, if not months, to comb through them for evidence of price fixing.
But it proved all too easy: The very first tape Wang played revealed two dealers fixing prices.
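The professors' test is simple enough to sketch. Below is my own illustration (not code from the book): if market makers quoted uniformly across all eighths, about half of their quotes would land on odd eighths, so an observed rate of roughly 1 percent is a glaring anomaly.

```python
from fractions import Fraction

def odd_eighth_fraction(quotes):
    """Fraction of quotes whose fractional part is an odd eighth
    (1/8, 3/8, 5/8, or 7/8)."""
    odd = sum(1 for q in quotes if (Fraction(q) * 8) % 2 == 1)
    return odd / len(quotes)

# Uniform quoting over all eighths: half the quotes land on odd eighths.
uniform = [Fraction(10) + Fraction(k, 8) for k in range(8)] * 100
print(odd_eighth_fraction(uniform))    # 0.5

# Even-eighths-only quoting (quarter-wide spreads, as on Nasdaq):
collusive = [Fraction(10) + Fraction(k, 8) for k in (0, 2, 4, 6)] * 100
print(odd_eighth_fraction(collusive))  # 0.0
```

The statistical fingerprint alone, a near-total absence of odd eighths, was enough to trigger the investigation; Wang's subpoenaed tapes then supplied the direct evidence.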
Some relevant quotes from Schlosser’s Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety:
The B-52 was carrying two Mark 39 hydrogen bombs, each with a yield of 4 megatons. As the aircraft spun downward, centrifugal forces pulled a lanyard in the cockpit. The lanyard was attached to the bomb release mechanism. When the lanyard was pulled, the locking pins were removed from one of the bombs. The Mark 39 fell from the plane. The arming wires were yanked out, and the bomb responded as though it had been deliberately released by the crew above a target. The pulse generator activated the low-voltage thermal batteries. The drogue parachute opened, and then the main chute. The barometric switches closed. The timer ran out, activating the high-voltage thermal batteries. The bomb hit the ground, and the piezoelectric crystals inside the nose crushed. They sent a firing signal. But the weapon didn’t detonate.
Every safety mechanism had failed, except one: the ready/safe switch in the cockpit. The switch was in the SAFE position when the bomb dropped. Had the switch been set to GROUND or AIR, the X-unit would’ve charged, the detonators would’ve triggered, and a thermonuclear weapon would have exploded in a field near Faro, North Carolina...
The other Mark 39 plummeted straight down and landed in a meadow just off Big Daddy’s Road, near the Nahunta Swamp. Its parachutes had failed to open. The high explosives did not detonate, and the primary was largely undamaged...
The Air Force assured the public that the two weapons had been unarmed and that there was never any risk of a nuclear explosion. Those statements were misleading. The T-249 control box and ready/safe switch, installed in every one of SAC’s bombers, had already raised concerns at Sandia. The switch required a low-voltage signal of brief duration to operate — and that kind of signal could easily be provided by a stray wire or a short circuit, as a B-52 full of electronic equipment disintegrated midair.
A year after the North Carolina accident, a SAC ground crew removed four Mark 28 bombs from a B-47 bomber and noticed that all of the weapons were armed. But the seal on the ready/safe switch in the cockpit was intact, and the knob hadn’t been turned to GROUND or AIR. The bombs had not been armed by the crew. A seven-month investigation by Sandia found that a tiny metal nut had come off a screw inside the plane and lodged against an unused radar-heating circuit. The nut had created a new electrical pathway, allowing current to reach an arming line— and bypass the ready/safe switch. A similar glitch on the B-52 that crashed near Goldsboro would have caused a 4-megaton thermonuclear explosion. "It would have been bad news— in spades," Parker F. Jones, a safety engineer at Sandia, wrote in a memo about the accident. "One simple, dynamo-technology, low-voltage switch stood between the United States and a major catastrophe!"
And:
Members of the Joint Committee on Atomic Energy visited fifteen NATO bases in December 1960, eager to see how America’s nuclear weapons were being deployed. The group was accompanied by Harold Agnew, …an expert on how to design bombs, and how to handle them properly. At a NATO base in Germany, Agnew looked out at the runway and, in his own words, "nearly wet my pants." The F-84F fighter planes on alert, each carrying a fully assembled Mark 7 bomb, were being guarded by a single American soldier. Agnew walked over and asked the young enlisted man, who carried an old-fashioned, bolt-action rifle, what he’d do if somebody jumped into one of the planes and tried to take off. Would he shoot at the pilot— or the bomb? The soldier had never been told what to do… Agnew realized there was little to prevent a German pilot from taking a plane, flying it to the Soviet Union, and dropping an atomic bomb.
The custody arrangements at the Jupiter missile sites in Italy were even more alarming. Each site had three missiles topped with a 1.4-megaton warhead— a weapon capable of igniting firestorms and flattening every brick structure within thirty square miles. All the security was provided by Italian troops. The launch authentication officer was the only American at the site. Two keys were required to launch the missiles; one was held by the American, the other by an Italian officer. The keys were often worn on a string around the neck, like a dog tag.
Congressman Chet Holifield, the chairman of the joint committee, was amazed to find three ballistic missiles, carrying thermonuclear weapons, in the custody of a single American officer with a handgun. "All [the Italians] have to do is hit him on the head with a blackjack, and they have got his key," Holifield said, during a closed-door committee hearing after the trip. The Jupiters were located near a forest, without any protective covering, and brightly illuminated at night. They would be sitting ducks for a sniper. "There were three Jupiters setting there in the open— all pointed toward the sky," Holifield told the committee. "Over $300 million has been spent to set up that little show and it can be knocked out with 3 rifle bullets."
...Harold Agnew was amazed to see a group of NATO weapon handlers pull the arming wires out of a Mark 7 while unloading it from a plane. When the wires were pulled, the arming sequence began— and if the X-unit charged, a Mark 7 could be detonated by its radar, by its barometric switches, by its timer, or by falling just a few feet from a plane and landing on a runway. A stray cosmic ray could, theoretically, detonate it. The weapon seemed to invite mistakes… And a Mark 7 sometimes contained things it shouldn’t. A screwdriver was found inside one of the bombs; an Allen wrench was somehow left inside another. In both bombs, the loose tools could have caused a short circuit.
More from Command and Control:
Agnew brought an early version of the electromechanical locking system to Washington, D.C., for a closed-door hearing of the joint committee… To unlock a nuclear weapon, a two-man custodial team would attach a cable to it from the decoder. Then they’d turn the knobs on the decoder to enter a four-digit code. It was a "split-knowledge" code— each custodian would be given only two of the four numbers. Once the correct code was entered, the switch inside the weapon would take anywhere from thirty seconds to two and a half minutes to unlock, as its little gears, cams, and cam followers whirred and spun… everyone in the hearing room agreed that it was absolutely essential for national security.
The American military, however, vehemently opposed putting any locks on nuclear weapons. The Army, the Navy, the Air Force, the Marines, the Joint Chiefs of Staff, General Power at SAC, General Norstad at NATO — all of them agreed that locks were a bad idea. The always/never dilemma lay at the heart of the military’s thinking. "No single device can be expected to increase both safety and readiness," the Joint Chiefs of Staff argued. And readiness was considered more important: the nuclear weapons in Europe were "adequately safe, within the limits of the operational requirements imposed on them."
...After reading the joint committee’s report, President Kennedy halted the dispersal of nuclear weapons among America’s NATO allies. Studies on weapon safety and command and control were commissioned. At Sandia, the development of coded, electromechanical locks was begun on a crash basis. Known at first as "Prescribed Action Links," the locks were given a new name, one that sounded less restrictive, in the hopes of appeasing the military. "Permissive Action Links" sounded more friendly, as did the acronym: PALs.
More (#3) from Command and Control:
And:
Walske issued new safety standards in March 1968. They said that the "probability of a premature nuclear detonation" should be no greater than one in a billion, amid "normal storage and operational environments," during the lifetime of a single weapon. And the probability of a detonation amid "abnormal environments" should be no greater than one in a million. An abnormal environment could be anything from the heat of a burning airplane to the water pressure inside a sinking submarine. Walske’s safety standards applied to every nuclear weapon in the American stockpile. They demanded a high level of certainty that an accidental detonation could never occur. But they offered no guidelines on how these strict criteria could be met. And in the memo announcing the new policy, Walske expressed confidence that "the adoption of the attached standards will not result in any increase in weapon development times or costs."
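Walske's per-weapon standards can be turned into fleet-wide odds with a standard independence calculation. The sketch below is my own arithmetic, and the stockpile size is an illustrative stand-in, not a figure from the book:

```python
def prob_at_least_one(p_per_weapon, n_weapons):
    """P(at least one premature detonation across the stockpile),
    assuming each weapon fails independently."""
    return 1 - (1 - p_per_weapon) ** n_weapons

n = 30_000  # hypothetical stockpile size, for illustration only
print(prob_at_least_one(1e-9, n))  # normal environments: ~3e-5
print(prob_at_least_one(1e-6, n))  # abnormal environments: ~0.03
```

Even at one in a billion per weapon, a large stockpile pushes the fleet-wide probability into non-negligible territory; that is why the per-weapon standard had to be so strict.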
A few months later, William L. Stevens was chosen to head Sandia’s new Nuclear Safety Department… Stevens looked through the accident reports kept by the Defense Atomic Support Agency, the Pentagon group that had replaced the Armed Forces Special Weapons Project. The military now used Native American terminology to categorize nuclear weapon accidents. The loss, theft, or seizure of a weapon was an Empty Quiver. Damage to a weapon, without any harm to the public or risk of detonation, was a Bent Spear. And an accident that caused the unauthorized launch or jettison of a weapon, a fire, an explosion, a release of radioactivity, or a full-scale detonation was a Broken Arrow. The official list of nuclear accidents, compiled by the Department of Defense and the AEC, included thirteen Broken Arrows. Bill Stevens read reports that secretly described a much larger number of unusual events with nuclear weapons. And a study of abnormal environments commissioned by Sandia soon found that at least 1,200 nuclear weapons had been involved in "significant" incidents and accidents between 1950 and March 1968.
The armed services had done a poor job of reporting nuclear weapon accidents until 1959— and subsequently reported about 130 a year. Many of the accidents were minor: "During loading of a Mk 25 Mod O WR Warhead onto a 6X6 truck, a handler lost his balance . . . the unit tipped and fell approximately four feet from the truck to the pavement." And some were not: "A C-124 Aircraft carrying eight Mk 28 War reserve Warheads and one Mk 49 Y2 Mod 3 War Reserve Warhead was struck by lightning… Observers noted a large ball of fire pass through the aircraft from nose to tail… The ball of fire was accompanied by a loud noise."
Reading these accident reports persuaded Stevens that the safety of America’s nuclear weapons couldn’t be assumed. The available data was insufficient for making accurate predictions about the future; a thousand weapon accidents were not enough for any reliable calculation of the odds. Twenty-three weapons had been directly exposed to fires during an accident, without detonating. Did that prove a fire couldn’t detonate a nuclear weapon? Or would the twenty-fourth exposure produce a blinding white flash and a mushroom cloud? The one-in-a-million assurances that Sandia had made for years now seemed questionable. They’d been made without much empirical evidence.
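Stevens's worry can be made quantitative with a standard confidence-bound calculation (my own illustration, not from the book). Twenty-three fire exposures with zero detonations only supports a 95% upper confidence bound of about 12 percent on the per-fire detonation probability, i.e. the largest p satisfying (1 - p)^23 >= 0.05, which is nowhere near a one-in-a-million assurance:

```python
def upper_bound_95(n_trials):
    """95% upper confidence bound on an event's probability after
    n_trials trials with zero occurrences: the largest p such that
    (1 - p) ** n_trials >= 0.05."""
    return 1 - 0.05 ** (1 / n_trials)

print(upper_bound_95(23))  # ~0.122: 23 fires, zero detonations
print(3 / 23)              # the "rule of three" approximation, ~0.13
```

In other words, the accident record was statistically consistent with fire-detonation odds roughly a hundred thousand times worse than the official one-in-a-million claim.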
And:
Stan Spray’s group ruthlessly burned, scorched, baked, crushed, and tortured weapon components to find their potential flaws. And in the process Spray helped to overturn the traditional thinking about electrical circuits at Sandia. It had always been taken for granted that if two circuits were kept physically apart, if they weren’t mated or connected in any way— like separate power lines running beside a highway— current couldn’t travel from one to the other. In a normal environment, that might be true. But strange things began to happen when extreme heat and stress were applied.
When circuit boards were bent or crushed, circuits that were supposed to be kept far apart might suddenly meet. The charring of a circuit board could transform its fiberglass from an insulator into a conductor of electricity. The solder of a heat-sensitive fuse was supposed to melt when it reached a certain temperature, blocking the passage of current during a fire. But Spray discovered that solder behaved oddly once it melted. As a liquid it could prevent an electrical connection— or flow back into its original place, reconnect wires, and allow current to travel between them.
The unpredictable behavior of materials and electrical circuits during an accident was compounded by the design of most nuclear weapons. Although fission and fusion were radically new and destructive forces in warfare, the interior layout of bombs hadn’t changed a great deal since the Second World War. The wires from different components still met in a single junction box. Wiring that armed the bomb and wiring that prevented it from being armed often passed through the same junction— making it possible for current to jump from one to the other. And the safety devices were often located far from the bomb’s firing set. The greater the distance between them, Spray realized, the greater the risk that stray electricity could somehow enter an arming line, set off the detonators, and cause a nuclear explosion.
More (#2) from Command and Control:
At a press conference the following day, Kennedy stressed that "it would be premature to reach a judgment as to whether there is a gap or not a gap." Soon the whole issue was forgotten. Political concerns, not strategic ones, determined how many long-range, land-based missiles the United States would build. Before Sputnik, President Eisenhower had thought that twenty to forty would be enough. Jerome Wiesner advised President Kennedy that roughly ten times that number would be sufficient for deterrence. But General Power wanted the Strategic Air Command to have ten thousand Minuteman missiles, aimed at every military target in the Soviet Union that might threaten the United States. And members of Congress, unaware that the missile gap was a myth, also sought a large, land-based force. After much back and forth, McNamara decided to build a thousand Minuteman missiles. One Pentagon adviser later explained that it was "a round number."
And:
...While the Kennedy administration anxiously wondered if the Soviets would back down, Khrushchev maintained a defiant facade. And then on October 26, persuaded by faulty intelligence that an American attack on Cuba was about to begin, he wrote another letter to Kennedy, offering a deal: the Soviet Union would remove the missiles from Cuba, if the United States promised never to invade Cuba.
Khrushchev’s letter arrived at the American embassy in Moscow around five o’clock in the evening, which was ten in the morning, Eastern Standard Time. It took almost eleven hours for the letter to be fully transmitted by cable to the State Department in Washington, D.C. Kennedy and his advisers were encouraged by its conciliatory tone and decided to accept the deal— but went to bed without replying. Seven more hours passed, and Khrushchev started to feel confident that the United States wasn’t about to attack Cuba, after all. He wrote another letter to Kennedy, adding a new demand: the missiles in Cuba would be removed, if the United States removed its Jupiter missiles from Turkey. Instead of being delivered to the American embassy, this letter was broadcast, for the world to hear, on Radio Moscow.
On the morning of October 27, as President Kennedy was drafting a reply to Khrushchev’s first proposal, the White House learned about his second one. Kennedy and his advisers struggled to understand what was happening in the Kremlin. Conflicting messages were now coming not only from Khrushchev, but from various diplomats, journalists, and Soviet intelligence agents who were secretly meeting with members of the administration. Convinced that Khrushchev was being duplicitous, McNamara now pushed for a limited air strike to destroy the missiles. General Maxwell Taylor, now head of the Joint Chiefs of Staff, recommended a large-scale attack. When an American U-2 was shot down over Cuba, killing the pilot, the pressure on Kennedy to launch an air strike increased enormously. A nuclear war with the Soviet Union seemed possible. "As I left the White House… on that beautiful fall evening," McNamara later recalled, "I feared I might never live to see another Saturday night."
The Cuban Missile Crisis ended amid the same sort of confusion and miscommunication that had plagued much of its thirteen days. President Kennedy sent the Kremlin a cable accepting the terms of Khrushchev’s first offer, never acknowledging that a second demand had been made. But Kennedy also instructed his brother to meet privately with Ambassador Dobrynin and agree to the demands made in Khrushchev’s second letter— so long as the promise to remove the Jupiters from Turkey was never made public. Giving up dangerous and obsolete American missiles to avert a nuclear holocaust seemed like a good idea. Only a handful of Kennedy’s close advisers were told about this secret agreement.
Meanwhile, at the Kremlin, Khrushchev suddenly became afraid once again that the United States was about to attack Cuba. He decided to remove the Soviet missiles from Cuba— without insisting upon the removal of the Jupiters from Turkey. Before he had a chance to transmit his decision to the Soviet embassy in Washington, word arrived from Dobrynin about Kennedy’s secret promise. Khrushchev was delighted by the president’s unexpected— and unnecessary— concession. But time seemed to be running out, and an American attack might still be pending. Instead of accepting the deal through a diplomatic cable, Khrushchev’s decision to remove the missiles from Cuba was immediately broadcast on Radio Moscow. No mention was made of the American vow to remove its missiles from Turkey.
Both leaders had feared that any military action would quickly escalate to a nuclear exchange. They had good reason to think so. Although Khrushchev never planned to move against Berlin during the crisis, the Joint Chiefs had greatly underestimated the strength of the Soviet military force based in Cuba. In addition to strategic weapons, the Soviet Union had almost one hundred tactical nuclear weapons on the island that would have been used by local commanders to repel an American attack. Some were as powerful as the bomb that destroyed Hiroshima. Had the likely targets of those weapons— the American fleet offshore and the U.S. naval base at Guantánamo— been destroyed, an all-out nuclear war would have been hard to avoid.
More (#4) from Command and Control:
Peurifoy had recently heard about an explosive called [TATB]. It had been invented in 1888 but had been rarely used since then— because TATB was so hard to detonate. Under federal law, it wasn’t even classified as an explosive; it was considered a flammable solid. With the right detonators, however, it could produce a shock wave almost as strong as the high explosives that surrounded the core of a nuclear weapon. TATB soon became known as an "insensitive high explosive." You could drop it, hammer it, set it on fire, smash it into the ground at a speed of 1,500 feet per second, and it still wouldn’t detonate. The explosives being used in America’s nuclear weapons would go off from an impact one tenth as strong. Harold Agnew was now the director of Los Alamos, and he thought using TATB in hydrogen bombs made a lot more sense— as a means of preventing plutonium dispersal during an accident— than adding two or three thousand extra pounds of steel and padding.
All the necessary elements for nuclear weapon safety were now available: a unique signal, weak link/strong link technology, insensitive high explosives. The only thing missing was the willingness to fight a bureaucratic war on their behalf— and Bob Peurifoy had that quality in abundance. He was no longer a low-level employee, toiling away on the electrical system of a bomb, without a sense of the bigger picture. As the head of weapon development, he now had some authority to make policy at Sandia. And he planned to take advantage of it. Three months into the new job, Peurifoy told his superior, Glenn Fowler, a vice president at the lab, that all the nuclear weapons carried by aircraft had to be retrofitted with new safety devices. Peurifoy didn’t claim that the weapons were unsafe; he said their safety could no longer be presumed. Fowler listened carefully to his arguments and agreed. A briefing for Sandia’s upper management was scheduled for February 1974.
The briefing did not go well. The other vice presidents at Sandia were indifferent, unconvinced, or actively hostile to Peurifoy’s recommendations. The strongest opponents of a retrofit argued that it would harm the lab’s reputation— it would imply that Sandia had been wrong about nuclear weapon safety for years. They said new weapons with improved safety features could eventually replace the old ones. And they made clear that the lab’s research-and-development money would not be spent on bombs already in the stockpile. Sandia couldn’t force the armed services to alter their weapons, and the Department of Defense had the ultimate responsibility for nuclear weapon safety. The lab’s upper management said, essentially, that this was someone else’s problem.
In April 1974, Peurifoy and Fowler went to Washington and met with Major General Ernest Graves, Jr., a top official at the Atomic Energy Commission, whose responsibilities included weapon safety. Sandia reported to the AEC, and Peurifoy was aiming higher on the bureaucratic ladder. Graves listened to the presentation and then did nothing about it. Five months later, unwilling to let the issue drop and ready to escalate the battle, Peurifoy and Fowler put their concerns on the record. A letter to General Graves was drafted— and Glenn Fowler placed his career at risk by signing and sending it. The "Fowler Letter," as it was soon called, caused a top secret uproar in the nuclear weapon community. It ensured that high-level officials at the weapons labs, the AEC, and the Pentagon couldn’t hide behind claims of plausible deniability, if a serious accident happened. The letter was proof that they had been warned.
"Most of the aircraft delivered weapons now in stockpile were designed to requirements which envisioned… operations consisting mostly of long periods of igloo storage and some brief exposure to transportation environments," the Fowler letter began. But these weapons were now being used in ways that could subject them to abnormal environments. And none of the weapons had adequate safety mechanisms. Fowler described the "possibility of these safing devices being electrically bypassed through charred organic plastics or melted solder" and warned of their "premature operation from stray voltages and currents." He listed the weapons that should immediately be retrofitted or retired, including the Genie, the Hound Dog, the 9-megaton Mark 53 bomb— and the weapons that needed to be replaced, notably the Mark 28, SAC’s most widely deployed bomb. He said that the secretary of defense should be told about the risks of using these weapons during ground alerts. And Fowler recommended, due to "the urgency associated with the safety question," that nuclear weapons should be loaded onto aircraft only for missions "absolutely required for national security reasons."
And:
...At Homestead Air Force Base in Florida, thirty-five members of an Army unit were arrested for using and selling marijuana and LSD. The unit controlled the Nike Hercules antiaircraft missiles on the base, along with their nuclear warheads. The drug use at Homestead was suspected after a fully armed Russian MiG-17 fighter plane, flown by a Cuban defector, landed there unchallenged, while Air Force One was parked on a nearby runway. Nineteen members of an Army detachment were arrested on pot charges at a Nike Hercules base on Mount Gleason, overlooking Los Angeles. One of them had been caught drying a large amount of marijuana on land belonging to the U.S. Forest Service. Three enlisted men at a Nike Hercules base in San Rafael, California, were removed from guard duty for psychiatric reasons. One of them had been charged with pointing a loaded rifle at the head of a sergeant. Although illegal drugs were not involved in the case, the three men were allowed to guard the missiles, despite a history of psychiatric problems. The squadron was understaffed, and its commander feared that hippies— "people from the Haight-Ashbury"— were trying to steal nuclear weapons.
More than one fourth of the crew on the USS Nathan Hale, a Polaris submarine with sixteen ballistic missiles, were investigated for illegal drug use. Eighteen of the thirty-eight seamen were cleared; the rest were discharged or removed from submarine duty. A former crew member of the Nathan Hale told a reporter that hashish was often smoked when the sub was at sea. The Polaris base at Holy Loch, Scotland, helped turn the Cowal Peninsula into a center for drug dealing in Great Britain. Nine crew members of the USS Casimir Pulaski, a Polaris submarine, were convicted for smoking marijuana at sea. One of the submarine tenders that docked at the base, the USS Canopus, often carried nuclear warheads and ballistic missiles. The widespread marijuana use among its crew earned the ship a local nickname: the USS Cannabis.
Four SAC pilots stationed at Castle Air Force Base near Merced, California, were arrested with marijuana and LSD. The police who raided their house, located off the base, said that it resembled "a hippie type pad with a picture of Ho Chi Minh on the wall." At Seymour Johnson Air Force Base in Goldsboro, North Carolina, 151 of the 225 security police officers were busted on marijuana charges. The Air Force Office of Special Investigations arrested many of them leaving the base’s nuclear weapon storage area. Marijuana was discovered in one of the underground control centers of a Minuteman missile squadron at Malmstrom Air Force Base near Great Falls, Montana. It was also found in the control center of a Titan II launch complex about forty miles southeast of Tucson, Arizona. The launch crew and security officers at the site were suspended while investigators tried to determine who was responsible for the "two marijuana cigarettes."
The true extent of drug use among American military personnel with access to nuclear weapons was hard to determine. Of the roughly 114,000 people who’d been cleared to work with nuclear weapons in 1980, only 1.5 percent lost that clearance because of drug abuse. But the Personnel Reliability Program’s 98.5 percent success rate still allowed at least 1,728 "unreliable" drug users near the weapons. And those were just the ones who got caught.
Do you keep a list of the audiobooks you liked anywhere? I’d love to take a peek.
Comment
Okay. In this comment I’ll keep an updated list of audiobooks I’ve heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.
Outstanding:
Tetlock, Expert Political Judgment
Pinker, The Better Angels of Our Nature (my clips)
Schlosser, Command and Control (my clips)
Yergin, The Quest (my clips)
Osnos, Age of Ambition (my clips)
Worthwhile if you care about the subject matter:
Singer, Wired for War (my clips)
Feinstein, The Shadow World (my clips)
Venter, Life at the Speed of Light (my clips)
Rhodes, Arsenals of Folly (my clips)
Weiner, Enemies: A History of the FBI (my clips)
Rhodes, The Making of the Atomic Bomb (available here) (my clips)
Gleick, Chaos (my clips)
Weiner, Legacy of Ashes: The History of the CIA (my clips)
Freese, Coal: A Human History (my clips)
Aid, The Secret Sentry (my clips)
Scahill, Dirty Wars (my clips)
Patterson, Dark Pools (my clips)
Lieberman, The Story of the Human Body
Pentland, Social Physics (my clips)
Okasha, Philosophy of Science: VSI
Mazzetti, The Way of the Knife (my clips)
Ferguson, The Ascent of Money (my clips)
Lewis, The Big Short (my clips)
de Mesquita & Smith, The Dictator’s Handbook (my clips)
Sunstein, Worst-Case Scenarios (available here) (my clips)
Johnson, Where Good Ideas Come From (my clips)
Harford, The Undercover Economist Strikes Back (my clips)
Caplan, The Myth of the Rational Voter (my clips)
Hawkins & Blakeslee, On Intelligence
Gleick, The Information (my clips)
Gleick, Isaac Newton
Greene, Moral Tribes
Feynman, Surely You’re Joking, Mr. Feynman! (my clips)
Sabin, The Bet (my clips)
Watts, Everything Is Obvious: Once You Know the Answer (my clips)
Greenblatt, The Swerve: How the World Became Modern (my clips)
Cain, Quiet: The Power of Introverts in a World That Can’t Stop Talking
Dennett, Freedom Evolves
Kaufman, The First 20 Hours
Gertner, The Idea Factory (my clips)
Olen, Pound Foolish
McArdle, The Up Side of Down
Rhodes, Twilight of the Bombs (my clips)
Isaacson, Steve Jobs (my clips)
Priest & Arkin, Top Secret America (my clips)
Ayres, Super Crunchers (my clips)
Lewis, Flash Boys (my clips)
Dartnell, The Knowledge (my clips)
Cowen, The Great Stagnation
Lewis, The New New Thing (my clips)
McCray, The Visioneers (my clips)
Jackall, Moral Mazes (my clips)
Langewiesche, The Atomic Bazaar
Ariely, The Honest Truth about Dishonesty (my clips)
Comment
A process for turning ebooks into audiobooks for personal use, at least on Mac:
Rip the Kindle ebook to non-DRMed .epub with Calibre and Apprentice Alf.
Open the .epub in Sigil, merge all the contained HTML files into a single HTML file (select the files, right-click, Merge). Open the Source view for the big HTML file.
Edit the source so that the ebook begins with the title and author, then jumps right into the foreword or preface or first chapter, and ends with the end of the last chapter or epilogue. (Cut out any table of contents, list of figures, list of tables, appendices, index, bibliography, and endnotes.)
Remove footnotes if easy to do so, using Sigil’s Regex find-and-replace (remember to use Minimal Match so you don’t delete too much!). Click through several instances of the Find command to make sure it’s going to properly cut out only the footnotes, before you click "Replace All."
Use find-and-replace to add [[slnc_1000]] at the end of every paragraph; Mac’s text-to-speech engine interprets this as a slight pause, which aids comprehension when I’m listening to the audiobook. Usually this just means replacing every instance of the closing paragraph tag (</p>) with [[slnc_1000]]</p>
Copy/paste that entire HTML file into a text file and save it as .html. Open this in your browser, Select All, right-click and choose Services → Add to iTunes as Spoken Track. (I think "Ava" is the best voice; you’ll have to add this voice by upgrading to Mavericks and adding Ava under System Preferences → Dictation and Speech.) This will take a while, and may throw up an error even though the track is still being created and will succeed.
Now, sync this text-to-speech audiobook to some audio player that can play at 2x or 3x speed, and listen away.
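The two find-and-replace steps above (cutting footnotes with a minimal-match regex, then inserting pause markers) can be sketched in a few lines of Python. This is an illustrative sketch, not the Sigil workflow itself; the `class="footnote"` markup pattern is an assumption, since every ebook tags its footnote links differently:

```python
import re

def prepare_for_tts(html: str) -> str:
    """Prepare ebook HTML for text-to-speech conversion.

    1. Strip footnote links, assumed here to look like
       <a ... class="footnote" ...>...</a> (the real pattern
       varies by ebook and must be checked instance by instance).
    2. Append [[slnc_1000]] before every closing </p> tag so the
       speech engine inserts a one-second pause after each paragraph.
    """
    # Non-greedy .*? is the "Minimal Match" the text warns about:
    # a greedy .* would swallow everything up to the LAST </a>,
    # deleting real prose along with the footnotes.
    html = re.sub(r'<a[^>]*class="footnote"[^>]*>.*?</a>', "", html,
                  flags=re.DOTALL)
    # Insert a pause marker at the end of every paragraph.
    return html.replace("</p>", "[[slnc_1000]]</p>")

sample = '<p>First.<a href="#n1" class="footnote">1</a></p><p>Second.</p>'
print(prepare_for_tts(sample))
```

As in the manual workflow, the point of testing the regex on several instances before a bulk replace is exactly the greedy-versus-minimal distinction commented above.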
To de-DRM your Audible audiobooks, just use Tune4Mac.
Comment
VoiceDream for iPhone does a very fine job of text-to-speech; it also syncs your Pocket bookmarks and can read epub files.
Other:
Roose, Young Money. Too focused on a few individuals for my taste, but still has some interesting content. (my clips)
Hofstadter & Sander, Surfaces and Essences. Probably a fine book, but I was only interested enough to read the first and last chapters.
Taleb, AntiFragile. Learned some from it, but it’s kinda wrong much of the time. (my clips)
Acemoglu & Robinson, Why Nations Fail. Lots of handy examples, but too much of "our simple theory explains everything." (my clips)
Byrne, The Many Worlds of Hugh Everett III (available here). Gave up on it; too much theory, not enough story. (my clips)
Drexler, Radical Abundance. Gave up on it; too sanitized and basic.
Mukherjee, The Emperor of All Maladies. Gave up on it; too slow in pace and flowery in language for me.
Fukuyama, The Origins of Political Order. Gave up on it; the author is more keen on name-dropping theorists than on tracking down data.
Friedman, The Moral Consequences of Economic Growth (available here). Gave up on it. There are some actual data in chs. 5-7, but the argument is too weak and unclear for my taste.
Tuchman, The Proud Tower. Gave up on it after a couple chapters. Nothing wrong with it, it just wasn’t dense enough in the kind of learning I’m trying to do.
Foer, Eating Animals. I listened to this not to learn, but to shift my emotions. But it was too slow-moving, so I didn’t finish it.
Caro, The Power Broker. This might end up under "outstanding" if I ever finish it. For now, I’ve put this one on hold because it’s very long and not as highly targeted at the useful learning I want to be doing right now as some other books.
Rutherfurd, Sarum. This is the furthest I’ve gotten into any fiction book for the past 5 years at least, including HPMoR. I think it’s giving my system 1 an education into what life was like in the historical eras it covers, without getting bogged down in deep characterization, complex plotting, or ornate environmental description. But I’ve put it on hold for now because it is incredibly long.
Diamond, Collapse. I listened to several chapters, but it seemed to be mostly about environmental decline, which doesn’t interest me much, so I stopped listening.
Bowler & Morus, Making Modern Science (available here) (my clips). A decent history of modern science but not focused enough on what I wanted to learn, so I gave up.
Brynjolfsson & McAfee, The Second Machine Age (my clips). Their earlier, shorter Race Against the Machine contained the core arguments; this book expands the material in order to explain things to a lay audience. As with Why Nations Fail, I have too many quibbles with this book’s argument to put this book in the ‘Liked’ category.
Clery, A Piece of the Sun. Nothing wrong with it, I just wasn’t learning the type of things I was hoping to learn, so I stopped about half way through.
Schuman, The Miracle. Fairly interesting, but not quite dense enough in the kind of stuff I’m hoping to learn these days.
Conway & Oreskes, Merchants of Doubt. Fairly interesting, but not dense enough in the kind of things I’m hoping to learn.
Horowitz, The Hard Thing About Hard Things
Wessel, Red Ink
Levitt & Dubner, Think Like a Freak (my clips)
Gladwell, David and Goliath (my clips)
Thanks! Your first 3 are not my cup of tea, but I’ll keep looking through the top 1000 list. For now, I am listening to MaddAddam, the last part of Margaret Atwood’s post-apocalyptic fantasy trilogy, which qrnyf jvgu bar zna qvfnccbvagrq jvgu uvf pbagrzcbenel fbpvrgl ervairagvat naq ercbchyngvat gur rnegu jvgu orggre crbcyr ur qrfvtarq uvzfrys. She also has some very good non-fiction, like her Massey lecture on debt, which I warmly recommend.
Could you say a bit about your audiobook selection process?
Comment
When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn’t make sense for me to search Audible for titles I wanted to read: the answer was basically always "no." So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to.
These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible.
Some books that I really wanted to listen to are available in ebook but not audiobook, so I used this process to turn them into audiobooks. That only barely works, sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it’s harder for my brain to stay engaged as I’m listening, especially when I’m tired. I might give up on that process, I’m not sure.
Most but not all of the books are selected because I expect them to have lots of case studies in "how the world works," specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.
Comment
I’ve found something similar. I’ve come to believe that most ‘popular science’ and ‘popular history’ books are on Audible, but almost anything with equations or code is not.
The Great Courses have been quite fantastic for me for learning about the social sciences; I only found out about them recently.
Occasionally I try podcasts for very niche topics (recent Rails updates, for instance), but have found them to be rather uninteresting in comparison to full books and courses.
Thanks!
From Singer’s Wired for War:
Similarly botched predictions frequently happen in the military field. General Giulio Douhet, the commander of Italy’s air force in World War I, is perhaps the most infamous. In 1921, he wrote a best-selling book called The Command of the Air, which argued that the invention of airplanes made all other parts of the military obsolete and unnecessary. Needless to say, this would be news both to my granddaddy, who sailed out to another world war just twenty years later, and to the soldiers slogging through the sand and dust of Iraq and Afghanistan today.
Comment
More (#7) from Wired for War:
The same concept could apply to unmanned systems that commit some war crime not because of manufacturer’s defect, but because of some sort of misuse or failure to take proper precautions. Given the different ways that people are likely to classify robots as "beings" when it comes to expectations of rights we might grant them one day, the same concept might be flipped across to the responsibilities that come with using or owning them. For example, a dog is a living, breathing animal totally separate from a human. That doesn’t mean, however, that the law is silent on the many legal questions that can arise from dogs’ actions. As odd as it sounds, pet law might then be a useful resource in figuring out how to assess the accountability of autonomous systems.
The owner of a pit bull may not be in total control of exactly what the dog does or even who the dog bites. The dog’s autonomy as a "being" doesn’t mean, however, that we just wave our hands and act as if there is no accountability if that dog mauls a little kid. Even if the pit bull’s owner was gone at the time, they still might be criminally prosecuted if the dog was abused or trained (programmed) improperly, or because the owner showed some sort of negligence in putting a dangerous dog into a situation where it was easy for kids to get harmed.
Like the dog owner, some future commander who deploys an autonomous robot may not always be in total control of their robot’s every operation, but that does not necessarily break their chain of accountability. If it turns out that the commands or programs they authorized the robot to operate under somehow contributed to a violation of the laws of war or if their robot was deployed into a situation where a reasonable person could guess that harm would occur, even unintentionally, then it is proper to hold them responsible. Commanders have what is known as responsibility "by negation." Because they helped set the whole situation in process, commanders are equally responsible for what they didn’t do to avoid a war crime as for what they might have done to cause it.
And:
Finkelstein is hardly the only scientist who talks so directly about robots taking over one day. Hans Moravec, director of the Robotics Institute at Carnegie Mellon University, believes that "the robots will eventually succeed us: humans clearly face extinction." Eric Drexler, the engineer behind many of the basic concepts of nanotechnology, says that "our machines are evolving faster than we are. Within a few decades they seem likely to surpass us. Unless we learn to live with them in safety, our future will likely be both exciting and short." Freeman Dyson, the distinguished physicist and mathematician who helped jump-start the field of quantum mechanics (and inspired the character of Dyson in the Terminator movies), states that "humanity looks to me like a magnificent beginning, but not the final word." His equally distinguished son, the science historian George Dyson, came to the same conclusion, but for different reasons. As he puts it, "In the game of life and evolution, there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines." Even inventor Ray Kurzweil of Singularity fame gives humanity "a 50 percent chance of survival." He adds, "But then, I’ve always been accused of being an optimist."
...Others believe that we must take action now to stave off this kind of future. Bill Joy, the cofounder of Sun Microsystems, describes himself as having had an epiphany a few years ago about his role in humanity’s future. "In designing software and microprocessors, I have never had the feeling I was designing an intelligent machine. The software and hardware is so fragile, and the capabilities of a machine to ‘think’ so clearly absent that, even as a possibility, this has always seemed very far in the future.… But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of technology that may replace our species. How do I feel about this? Very uncomfortable."
The army recruiters say that soldiers on the ground still win wars. I reckon that Douhet’s prediction will come roughly true, however crudely. Drones.
More (#6) from Wired for War:
The Grays didn’t have PhDs in robotics, billion-dollar military labs backing them, or even much familiarity with computers. Instead, they brought in the head of their insurance company’s ten-person IT department for guidance on what to do. He then went out and bought some of the various parts and components described in the magazine article. They got their ruggedized computer, for example, at a boat show. The Grays then began reading up on video game programming, thinking that programming a robot car to drive through the real-world course had many parallels with "navigating an animated monster through a virtual world." Everything was loaded into a Ford Escape Hybrid SUV, which they called Kat 5, after the category 5 Hurricane Katrina that hit their hometown just a few months before the race.
When it came time for the race to see who could design the best future automated military vehicle, Team Gray’s entry lined up beside robots made by some of the world’s most prestigious universities and companies. Kat 5 then not only finished the racecourse (recall that no robot contestant had even been able to go more than a few miles the year before), but came in fourth out of the 195 contestants, just thirty-seven minutes behind Sebastian Thrun’s Stanley robot. Said Eric Gray, who spent only $650,000 to make a robot that the Pentagon and nearly every top research university had been unable to build just a year before, "It’s a beautiful thing when people are ignorant that something is impossible."
And:
In political theory, noted philosophers like Thomas Hobbes argued that individuals have always had to grant their obedience to governments because it was only by banding together and obeying some leader that people could protect themselves. Otherwise, life would be "nasty, brutish and short," as he famously described a world without governments. But most people forget the rest of the deal that Hobbes laid out. "The obligation of subjects to the sovereign is understood to last as long and no longer than the power lasteth by which he is able to protect them."
As a variety of scientists and analysts look at such new technologies as robotics, AI and nanotech, they are finding that massive power will no longer be held only by states. Nor will it even be limited to nonstate organizations like Hezbollah or al-Qaeda. It is also within the reach of individuals. The playing field is changing for Hobbes’s sovereign.
Even the eternal optimist Ray Kurzweil believes that with the barriers to entry being lowered for violence, we could see the rise of superempowered individuals who literally hold humanity’s future in their hands. New technologies are allowing individuals with creativity to push the limits of what is possible. He points out how Sergey Brin and Larry Page were just two Stanford kids with a creative idea that turned into Google, a mechanism that makes it easy for anyone to search almost all the world’s knowledge. However, their $100 billion idea is "also empowering for those who are destructive." Information on how to build your own remote bomb or the genetic code for the 1918 flu bug are as searchable as the latest news on Britney Spears. Kurzweil describes the looming period in human history that we are entering, just before his hoped-for Singularity: "It feels like all ten billion of us are standing in a room up to our knees in flammable fluid, waiting for someone—anyone—to light a match."
Kurzweil thinks we have enough fire extinguishers to avoid going up in flames before the Singularity arrives, but others aren’t so certain. Bill Joy, the so-called father of the Internet, for example, fears what he calls "KMD," individuals who wield knowledge-enabled mass destruction. "It is no exaggeration to say that we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation states, on to a surprising and terrible empowerment of individuals."
The science fiction writers concur. "Single individual mass destruction" is the biggest dilemma we have to worry about with our new technologies, warns Greg Bear. He notes that many high school labs now have greater sophistication and capability than the Pentagon’s top research labs did in the cold war. Vernor Vinge, the computer scientist turned award-winning novelist, agrees: "Historically, warfare has pushed technologies. We are in a situation now, if certain technologies become cheap enough, it’s not just countries that can do terrible things to millions of people, but criminal gangs can do terrible things to millions of people. What if for 50 dollars you buy something that could destroy everybody in a country? Then, basically, anybody who’s having a bad hair day is a threat to national survival."
Comment
Inequality doesn’t seem so bad now, huh?
More (#5) from Wired for War:
The Department of Justice once found that as much as 5 percent of the government’s annual budget is lost to old-fashioned fraud and theft, most of it in the defense realm. This is not helped by the fact that the Pentagon’s own rules and laws for how it should buy weapons are "routinely broken," as one report in Defense News put it. One 2007 study of 131 Pentagon purchases found that 117 did not meet federal regulation standards. The Pentagon’s own inspector general also reported that not one person had been fired or otherwise held accountable for these violations.
...Whenever any new weapon is contemplated, the military often adds wave after wave of new requirements, gradually creeping the original concept outward. It builds in new design mandates, asks for various improvements and additions, forgetting that each new addition means another delay in delivery (and for robots, at least, forgetting that the systems were meant to be expendable). In turn, the makers are often only too happy to go along with what transforms into a process of gold-plating, as adding more bells, more whistles, and more design time means more money. These sorts of problems are rife in U.S. military robotics today. The MDARS (Mobile Detection Assessment Response System) is a golf-cart-sized robot that was planned as a cheap sentry at Pentagon warehouses and bases. It is now fifty times more expensive than originally projected. The air force’s unmanned bomber design is already projecting out at more than $2 billion a plane, roughly three times the original $737 million cost of the B-2 bomber it is to replace.
These costs weigh not just in dollars and cents. The more expensive the systems are, the fewer can be bought. The U.S. military becomes more heavily invested in those limited numbers of systems, and becomes less likely to change course and develop or buy alternative systems, even if they turn out to be better. The costs also change what doctrines can be used in battle, as the smaller number makes the military less likely to endanger systems in risky operations. Many worry this is defeating the whole purpose of unmanned systems. "We become prisoners of our very expensive purchases," explains Ralph Peters. He worries that the United States might potentially lose some future war because of what he calls "quantitative incompetence." Norm Augustine even jokes, all too seriously, that if the present trend continues, "In the year 2054, the entire defense budget will purchase just one tactical aircraft. This aircraft will have to be shared by the Air Force and Navy, three and one half days per week, except for the leap year, when it will be made available to the Marines for the extra day."
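Augustine's quip rests on simple compound-growth arithmetic: if a weapon's unit cost compounds faster than the budget that buys it, the affordable fleet shrinks toward one. A minimal sketch of that dynamic, using assumed, hypothetical growth rates and starting figures (not numbers from the book):

```python
# Illustrative arithmetic behind Augustine's joke. All starting figures
# and growth rates below are assumptions chosen for illustration only.

def affordable_aircraft(year, base_year=2010, budget0=700e9, unit_cost0=2e9,
                        cost_growth=0.10, budget_growth=0.02):
    """How many aircraft the budget buys in a given year, assuming both
    the budget and the per-unit cost grow exponentially."""
    years = year - base_year
    budget = budget0 * (1 + budget_growth) ** years
    unit_cost = unit_cost0 * (1 + cost_growth) ** years
    return budget / unit_cost

for y in (2010, 2030, 2054):
    print(y, round(affordable_aircraft(y)))
```

With these assumed rates the affordable fleet falls from hundreds to a handful within a few decades; raise the unit-cost growth rate a little and the curve does eventually hit Augustine's single shared aircraft.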
More (#4) from Wired for War:
When war returned to Europe, it seemed unlikely that the Germans would win. The French and the British had won the last war in the trenches, and seemed well prepared for this one with the newly constructed Maginot Line of fortifications. They also seemed better off with the new technologies as well. Indeed, the French alone had more tanks than the Germans (3,245 to 2,574). But the Germans chose the better doctrine, and they conquered all of France in just over forty days. In short, both sides had access to roughly the same technology, but made vastly different choices about how to use it, choices that shaped history.
And:
Military cultural resistance also jibes with problems of technological "lock-in." This is where change is resisted because of the costs sunk in the old technology, such as the large investment in infrastructure supporting it. Lock-in, for example, is why so many corporate and political interests are fighting the shift away from gas-guzzling cars.
This mix of organizational culture and past investment is why militaries will go to great lengths to keep their old systems relevant and old institutions intact. Cavalry forces were so desperate to keep horses relevant when machine guns and engines entered twentieth-century warfare that they even tried out "battle chariots," which were basically machine guns mounted on the kind of chariots once used by ancient armies. Today’s equivalent is the development of a two-seat version of the Air Force’s F-22 Raptor (which costs some $360 million per plane, when you count the research and development). A sell of the idea described how the copilot is there to supervise an accompanying UAV that would be sent to strike guarded targets and engage enemy planes in any dogfights, as the drone could "perform high-speed aerobatics that would render a human pilot unconscious." It’s an interesting concept, but it begs the question of what the human fighter pilot would do.
Akin to the baseball managers who couldn’t adapt to change like Billy Beane, such cultural resistance may prove another reason why the U.S. military could fall behind others in future wars, despite its massive investments in technologies. As General Eric Shinseki, the former U.S. Army chief of staff, once admonished his own service, "If you dislike change, you’re going to dislike irrelevance even more." It is not a good sign then that the last time Shinseki made such a warning against the general opinion—that the invasion of Iraq would be costly—he was summarily fired by then secretary of defense Rumsfeld.
More (#3) from Wired for War:
With the rise of more sophisticated sensors that better see the world, faster computers that can process information more quickly, and most important, GPS that can give a robot its location and destination instantaneously, higher levels of autonomy are becoming more attainable, as well as cheaper to build into robots. But each level of autonomy means more independence. It is a potential good in moving the human away from danger, but also raises the stakes of the robot’s decisions.
More (#2) from Wired for War:
If, as an official at DARPA observed, "the human is becoming the weakest link in defense systems," unmanned systems offer a path around those limitations. They can fly faster and turn harder, without worrying about that squishy part in the middle. Looking forward, a robotics researcher notes that "the UCAV [the unmanned fighter jet] will totally trump the human pilot eventually, purely because of physics." This may prove equally true at sea, and not just in underwater operations, where humans have to worry about small matters like breathing or suffering ruptured organs from water pressure. For example, small robotic boats (USV) have already operated in "sea state six." This is when the ocean is so rough that waves are eighteen feet high or more, and human sailors would break their bones from all the tossing about.
Working at digital speed is another unmanned advantage that’s crucial in dangerous situations. Automobile crash avoidance technologies illustrate that a digital system can recognize a danger and react in about the same time that the human driver can only get to mid-curse word. Military analysts see the same thing happening in war, where bullets or even computer-guided missiles come in at Mach speed and defenses must be able to react against them even quicker. Humans can only react to incoming mortar rounds by taking cover at the last second, whereas "R2-D2," the CRAM system in Baghdad, is able to shoot them down before they even arrive. Some think this is only the start. One army colonel says, "The trend towards the future will be robots reacting to robot attack, especially when operating at technologic speed. . . . As the loop gets shorter and shorter, there won’t be any time in it for humans."
More (#1) from Wired for War:
The pilot was Joseph Kennedy Jr., older brother of John Fitzgerald Kennedy, thirty-fifth president of the United States. The two had spent much of their youth competing for the attention of their father, the powerful businessman and politician Joseph Sr. While younger brother JFK was often sickly and decidedly bookish, firstborn son Joe Jr. had been the "chosen one" of the family. He was a natural-born athlete and leader, groomed from birth to become the very first Catholic president. Indeed, it is telling that in 1940, just before war broke out, JFK was auditing classes at Stanford Business School, while Joe Jr. was serving as a delegate to the Democratic National Convention. When the war started, Joe Jr. became a navy pilot, perhaps the most glamorous role at the time. John was initially rejected for service by the army because of his bad back. The navy relented and allowed John to join only after his father used his political influence.
When Joe Kennedy Jr. was killed in 1944, two things happened: the army ended the drone program for fear of angering the powerful Joe Sr. (setting the United States back for years in the use of remote systems), and the mantle of "chosen one" fell on JFK. When the congressional seat in Boston opened up in 1946, what had been planned for Joe Jr. was handed to JFK, who had instead been thinking of becoming a journalist. He would spend the rest of his days not only carrying the mantle of leadership, but also trying to live up to his dead brother’s carefree and playboy image.
From Osnos’ Age of Ambition:
When the award was announced, most Chinese people had never heard of Liu, so the state media made the first impression; it splashed an article across the country reporting that he earned his living "bad-mouthing his own country." The profile was a classic of the form: it described him as a collector of fine wines and porcelain, and it portrayed him telling fellow prisoners, "I’m not like you. I don’t lack for money. Foreigners pay me every year, even when I’m in prison." Liu "spared no effort in working for Western anti-China forces" and, in doing so, "crossed the line of freedom of speech into crime."
For activists, the news of the award was staggering. "Many broke down in tears, even uncontrollable sobbing," one said later. In Beijing, bloggers, lawyers, and scholars gathered in the back of a restaurant to celebrate, but police arrived and detained twenty of them. When the announcement was made, Han Han, on his blog, toyed with censors and readers; he posted nothing but a pair of quotation marks enclosing an empty space. The post drew one and a half million hits and more than 28,000 comments.
More (#2) from Osnos’ Age of Ambition:
More (#1) from Osnos’ Age of Ambition:
With so many kickbacks changing hands, it wasn’t surprising that parts of the railway went wildly over budget. A station in Guangzhou slated to be built for $316 million ended up costing seven times that. The ministry was so large that bureaucrats would create fictional departments and run up expenses for them. A five-minute promotional video that went largely unseen cost nearly $3 million. The video led investigators to the ministry’s deputy propaganda chief, a woman whose home contained $1.5 million in cash and the deeds to nine houses.
Reporters who tried to expose the corruption in the railway world ran into dead ends. Two years before the crash, a journalist named Chen Jieren posted an article about problems in the ministry entitled, "Five Reasons That Liu Zhijun Should Take Blame and Resign," but the piece was deleted from every major Web portal. Chen was later told that Liu oversaw a slush fund used for buying the loyalty of editors at major media and websites. Other government agencies also had serious financial problems—out of fifty, auditors found problems with forty-nine—but the scale of cash available in the railway world was in a class by itself. Liao Ran, an Asia specialist at Transparency International, told the International Herald Tribune that China’s high-speed railway was shaping up to be "the biggest single financial scandal not just in China, but perhaps in the world."
And:
Liu was expelled from the Party the following May, for "severe violations of discipline" and "primary leadership responsibilities for the serious corruption problem within the railway system." An account in the state press alleged that Liu took a 4 percent kickback on railway deals; another said he netted $152 million in bribes. He was the highest-ranking official to be arrested for corruption in five years. But it was Liu’s private life that caught people by surprise. The ministry accused him of "sexual misconduct," and the Hong Kong newspaper Ming Pao reported that he had eighteen mistresses. His friend Ding was said to have helped him line up actresses from a television show in which she invested. Chinese officials are routinely discovered indulging in multiple sins of the flesh, prompting President Hu Jintao to give a speech a few years ago warning comrades against the "many temptations of power, wealth, and beautiful women." But the image of a gallivanting Great Leap Liu, and the sheer logistics of keeping eighteen mistresses, made him into a punch line. When I asked Liu’s colleague if the mistress story was true, he replied, "What is your definition of a mistress?"
By the time the libidinous Liu was deposed, at least eight other senior officials had been removed and placed under investigation, including Zhang, Liu’s bombastic aide. Local media reported that Zhang, on an annual salary of less than five thousand dollars, had acquired a luxury home near Los Angeles, stirring speculation that he had been preparing to join the growing exodus of officials who were taking their fortunes abroad. In recent years, corrupt cadres who sent their families overseas had become known in Chinese as "naked officials." In 2011 the central bank posted to the Web an internal report estimating that, since 1990, eighteen thousand corrupt officials had fled the country, having stolen $120 billion—a sum large enough to buy Disney or Amazon. (The report was promptly removed.)
From Soldiers of Reason:
The paper also warned that with the Soviet buildup of atomic capability, the Kremlin might very well stage a surprise attack, with 1954 as the year of maximum danger, unless the United States substantially and immediately increased its armed forces and civil defenses. NSC-68 was forwarded to President Truman around the time that Soviet-backed North Korea launched an invasion of South Korea, an American ally. Nitze’s warning about Communist designs worked so well that President Truman adopted NSC-68 as the official policy of the land and increased the national defense budget by almost $40 billion.
More (#2) from Soldiers of Reason:
More (#1) from Soldiers of Reason:
By the mid-1980s, RAND analysts observed a very disturbing trend: terrorism was becoming bloodier. Whereas in 1968 the bombs of groups like the Croatian separatists were disarmed before they could injure anyone, by 1983 Hezbollah followers were ramming trucks full of explosives into U.S. Marine barracks in Lebanon, killing American servicemen by the score. This last incident brought attention to what would become the most worrying trend of all, suicide attacks by extremists in and from the Middle East.
According to RAND analysts, the first record of a suicide attack since ancient times occurred in May of 1972, when Japanese terrorists acting on behalf of Palestinian causes tossed a hand grenade into a group of Christian pilgrims at the airport in Lod, Israel. The attack claimed twenty-six victims but also exposed the terrorists to immediate retribution from security agents at the scene; two of the three terrorists were killed in what amounted to a suicide mission, similar to the "divine wind" or kamikaze immolations of World War II. RAND analysts believed this self-sacrifice shamed the Palestinians into similar action. If Japanese were willing to die for a foreign cause, Palestinians must demonstrate their readiness to sacrifice themselves for their own cause. The inevitable next step was the glorification of death in battle as the bloody gate to paradise.
This transformation in tactics gave terrorists unexpected results. By most accounts, after the bombing of the Beirut barracks, Reagan administration officials decided Lebanon was not worth the American funeral candles; the marines packed up and went home. American withdrawal from Lebanon and the Soviet defeat in Afghanistan, when conjoined to the rise in Muslim fundamentalism fueled by the financial support of Saudi Arabia, created a belief among terrorist groups that they had finally found a way to change the policies of Western powers.
From David and Goliath:
Ranadivé looked at his girls. Morgan and Julia were serious basketball players. But Nicky, Angela, Dani, Holly, Annika, and his own daughter, Anjali, had never played the game before. They weren’t all that tall. They couldn’t shoot. They weren’t particularly adept at dribbling. They were not the sort who played pickup games at the playground every evening. Ranadivé lives in Menlo Park, in the heart of California’s Silicon Valley. His team was made up of, as Ranadivé put it, "little blond girls." These were the daughters of nerds and computer programmers. They worked on science projects and read long and complicated books and dreamed about growing up to be marine biologists. Ranadivé knew that if they played the conventional way—if they let their opponents dribble the ball up the court without opposition—they would almost certainly lose to the girls for whom basketball was a passion. Ranadivé had come to America as a seventeen-year-old with fifty dollars in his pocket. He was not one to accept losing easily. His second principle, then, was that his team would play a real full-court press—every game, all the time. The team ended up at the national championships. "It was really random," Anjali Ranadivé said. "I mean, my father had never played basketball before."
And:
When they finally arrived at Aqaba, Lawrence’s band of several hundred warriors killed or captured twelve hundred Turks and lost only two men. The Turks simply had not thought that their opponent would be crazy enough to come at them from the desert.
More (#2) from David and Goliath:
"The guy was running the options business but did not know what an option was," Cohn went on. He was laughing at the sheer audacity of it all. "I lied to him all the way to the airport. When he said, ‘Do you know what an option is?’ I said, ‘Of course I do, I know everything, I can do anything for you.’ Basically by the time we got out of the taxi, I had his number. He said, ‘Call me Monday.’ I called him Monday, flew back to New York Tuesday or Wednesday, had an interview, and started working the next Monday. In that period of time, I read McMillan’s Options as a Strategic Investment book. It’s like the Bible of options trading."
It wasn’t easy, of course, since Cohn estimates that on a good day, it takes him six hours to read twenty-two pages. He buried himself in the book, working his way through one word at a time, repeating sentences until he was sure he understood them. When he started at work, he was ready. "I literally stood behind him and said, ‘Buy those, sell those, sell those,’" Cohn said. "I never owned up to him what I did. Or maybe he figured it out, but he didn’t care. I made him tons of money."
...Today he is the president of Goldman Sachs.
From Wade’s A Troublesome Inheritance:
In the 1927 case, known as Buck v. Bell, the Supreme Court found for the state, with only one dissent. Justice Oliver Wendell Holmes, writing for the majority, endorsed without reservation the eugenicists’ credo that the offspring of the mentally impaired were a menace to society. "It is better for the world," he wrote, "if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. Three generations of imbeciles are enough."
Eugenics, having started out as a politically impractical proposal for encouraging matches among the well-bred, had now become an accepted political movement with grim consequences for the poor and defenseless.
The first of these were sterilization programs. At the urging of Davenport and his disciples, state legislatures passed programs for sterilizing the inmates of their prisons and mental asylums. A common criterion for sterilization was feeblemindedness, an ill-defined diagnostic category that was often identified by knowledge-based questions that put the ill educated at particular disadvantage.
...Up until 1928, fewer than 9,000 people had been sterilized in the United States, even though the eugenicists estimated that up to 400,000 citizens were "feeble minded." After the Buck v. Bell decision, the floodgates opened. By 1930, 24 states had sterilization laws on their books, and by 1940, 35,878 Americans had been sterilized or castrated.
More (#2) from A Troublesome Inheritance:
China, unlike the Islamic world, did not ban printing presses, but the books they produced were only for the elite. Another impediment to independent thought was the stultifying education system, which consisted of rote memorization of the more than 500,000 characters that comprised the Confucian classics, and the ability to write a stylized commentary on them. The imperial examination system, which began in 124 BC, took its final form in 1368 AD and remained unchanged until 1905, deterring intellectual innovation for a further five centuries.
More (#1) from A Troublesome Inheritance:
...Pinker agrees with Elias that the principal drivers of the civilizing process were the increasing monopoly of force by the state, which reduced the need for interpersonal violence, and the greater levels of interaction with others that were brought about by urbanization and commerce.
The next question of interest is whether the long behavioral shift toward more restrained behavior had a genetic basis. The gracilization of human skulls prior to 15,000 years ago almost certainly did, and Clark makes a strong case that the molding of the English population from rough peasants into industrious citizenry between 1200 and 1800 AD was a continuation of this evolutionary process. On the basis of Pinker’s vast compilation of evidence, natural selection seems to have acted incessantly to soften the human temperament, from the earliest times until the most recent date for which there is meaningful data.
This is the conclusion that Pinker signals strongly to his readers. He notes that mice can be bred to be more aggressive in just five generations, evidence that the reverse process could occur just as speedily. He describes the human genes, such as the violence-promoting MAO-A mutation mentioned in chapter 3, that could easily be modulated so as to reduce aggressiveness. He mentions that violence is quite heritable, on the evidence from studies of twins, and so must have a genetic basis. He states that "nothing rules out the possibility that human populations have undergone some degree of biological evolution in recent millennia, or even centuries, long after races, ethnic groups, and nations diverged."
But at the last moment, Pinker veers away from the conclusion, which he has so strongly pointed to, that human populations have become less violent in the past few thousand years because of the continuation of the long evolutionary trend toward less violence. He mentions that evolutionary psychologists, of whom he is one, have always held that the human mind is adapted to the conditions of 10,000 years ago and hasn’t changed since.
But since many other traits have evolved more recently than that, why should human behavior be any exception? Well, says Pinker, it would be terribly inconvenient politically if this were so. "It could have the incendiary implication that aboriginal and immigrant populations are less biologically adapted to the demands of modern life than populations that have lived in literate state societies for millennia."
Whether or not a thesis might be politically incendiary should have no bearing on the estimate of its scientific validity. That Pinker would raise this issue in a last minute diversion of a sustained scientific argument is an explicit acknowledgment to the reader of the political dangers that researchers, even ones of his stature and independence, would face in pursuing the truth too far.
Turning on a dime, Pinker then contends that there is no evidence that the decline in violence over the past 10,000 years is an evolutionary change. To reach this official conclusion, he is obliged to challenge Clark’s evidence that there was indeed such a change. But he does so with an array of arguments that seem less than decisive...
And:
The Jesuits invested significant talent in their mission, which was founded by Matteo Ricci, a trained mathematician who also spoke Chinese. Ricci, who died in 1610, and his successors imported the latest European books on math and astronomy and diligently trained Chinese astronomers, who set about reforming the calendar. One of the Jesuits, Adam Schall von Bell, even became head of the Chinese Bureau of Mathematics and Astronomy.
The Jesuits and their Chinese followers several times arranged prediction challenges between themselves and Chinese astronomers following traditional methods, which the Jesuits always won. The Chinese knew, for instance, that there would be a solar eclipse on June 21, 1629, and the emperor asked both sides to submit the day before their predictions of its exact time and duration. The traditional astronomers predicted the eclipse would start at 10:30 AM and last for two hours. Instead it began at 11:30 AM and lasted two minutes, exactly as the Jesuits had calculated.
But these computational victories did not solve the Jesuits’ problem. The Chinese had little curiosity about astronomy itself. Rather, they were interested in divination, in forecasting propitious days for certain events, and astronomy was merely a means to this end. Thus the astronomical bureau was a small unit within the Ministry of Rites. The Jesuits doubted how far they should get into the business of astrological prediction, but their program of converting the Chinese through astronomical excellence compelled them to take the plunge anyway. This led them into confrontation with Chinese officials and to being denounced as foreigners who were interfering in Chinese affairs. In 1661, Schall and the other Jesuits were bound with thick iron chains and thrown into jail. Schall was sentenced to be executed by dismemberment, and only an earthquake that occurred the next day prompted his release.
The puzzle is that throughout this period the Chinese made no improvements on the telescope. Nor did they show any sustained interest in the ferment of European ideas about the theoretical structure of the universe, despite being plied by the Jesuits with the latest European research. Chinese astronomers had behind them a centuries-old tradition of astronomical observation. But it was embedded in a Chinese cosmological system that they were reluctant to abandon. Their latent xenophobia also supported resistance to new ideas. "It is better to have no good astronomy than to have Westerners in China," wrote the anti-Christian scholar Yang Guangxian.
From Moral Mazes:
No decision was made. The CEO had sent the word out to defer all unnecessary capital expenditures to give the corporation cash reserves for other investments. So the managers allocated small amounts of money to patch the battery up until 1979, when it collapsed entirely. This brought the company into a breach of contract with a steel producer and into violation of various… EPA pollution regulations. The total bill, including lawsuits and now federally mandated repairs to the battery, exceeded $100 million...
This simple but very typical example gets to the heart of how decision making is intertwined with a company’s authority structure and advancement patterns. As Alchemy managers see it, the decisions facing them in 1975 and 1979 were crucially different. Had they acted decisively in 1975 — in hindsight, the only substantively rational course — they would have salvaged the battery and saved their corporation millions of dollars in the long run.
In the short run, however, since even seemingly rational decisions are subject to widely varying interpretations, particularly decisions that run counter to a CEO’s stated objectives, they would have been taking serious personal risks in restoring the battery. What is more, their political networks might have unraveled, leaving them vulnerable to attack. They chose short-term safety over long-term gain.
And:
"I look at it this way. See, in a big bureaucracy like this, very few individual people can really change anything. It’s like a big ant colony. I really believe that if most people didn’t come to work, it wouldn’t matter. You could come in one day a week and accomplish the absolutely necessary work. But the whole colony has significance; it’s just the individual that doesn’t count. Somewhere though some actions have to have significance. Now you see this at work with mistakes. You can make mistakes in the work you do and not suffer any consequences. For instance, I could negotiate a contract that might have a phrase that would trigger considerable harm to the company in the event of the occurrence of some set of circumstances. The chances are that no one would ever know. But if something did happen and the company got into trouble, and I had moved on from that job to another, it would never be traced to me. The problem would be that of the guy who presently has responsibility. And it would be his headache. There’s no tracking system in the corporation."
From Lewis’ The New New Thing:
From Dartnell’s The Knowledge:
From Ayres’ Super Crunchers, speaking of Epagogix, which uses neural nets to predict a movie’s box office performance from its screenplay:
Copaken was completely depressed when he walked out of the meeting, but when he looked over he noticed that the hedge fund guys were grinning from ear to ear. He asked them why they were so happy. They told him, "You don’t understand, Dick. We make our fortunes by identifying small imperfections in the marketplace. They are usually tiny and they are usually fleeting and they are immediately filled by the efficiency of the marketplace. But if we can discover these things… we end up making lots of money before the efficiency of the marketplace closes out that opportunity. What you just showed us here in Hollywood is a ten-lane paved highway of opportunity. It’s like they are committed to doing things the wrong way..."
More (#1) from Super Crunchers:
Direct Instruction won hands down. Education writer Richard Nadler summed it up this way: "When the testing was over, students in DI classrooms had placed first in reading, first in math, first in spelling, and first in language. No other model came close." And DI’s dominance wasn’t just in basic skill acquisition. DI students could also more easily answer questions that required higher-order thinking… DI even did better in promoting students’ self-esteem than several child-centered approaches...
More recent studies by both the American Federation of Teachers and the American Institutes for Research reviewed data on two dozen "whole school" reforms and found once again that the Direct Instruction model had the strongest empirical support.
And:
I bit my tongue, but I knew he was dead wrong. It is possible to combine different pieces of evidence, and has been since 1763, when a short essay by the Reverend Thomas Bayes was posthumously published...
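The point Ayres is defending here, that Bayes' rule licenses combining separate pieces of evidence into a single updated belief, can be sketched in a few lines. The probabilities below are made-up illustrations, not data from the book:

```python
# Combining two pieces of evidence about a hypothesis H via Bayes' rule.
# All probabilities here are hypothetical, chosen only for illustration.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H) and the likelihood of E under H and not-H."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

p = 0.10                   # prior belief in the hypothesis
p = update(p, 0.8, 0.3)    # first piece of evidence
p = update(p, 0.7, 0.4)    # second piece, assumed independent given H
print(round(p, 3))         # prints 0.341
```

Two individually moderate clues move the belief from 10% to about 34%; that cumulative shift is exactly the kind of combination the skeptic in the anecdote thought impossible.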
From Isaacson’s Steve Jobs:
The pastor answered, "Yes, God knows everything."
Jobs then pulled out the Life cover and asked, "Well, does God know about this and what’s going to happen to those children?"
"Steve, I know you don’t understand, but yes, God knows about that."
Jobs announced that he didn’t want to have anything to do with worshipping such a God, and he never went back to church.
And:
On April 1, 1976, Jobs and Wozniak went to Wayne’s apartment in Mountain View to draw up the partnership agreement...
...Wayne then got cold feet. As Jobs started planning to borrow and spend more money, he recalled the failure of his own company. He didn’t want to go through that again. Jobs and Wozniak had no personal assets, but Wayne (who worried about a global financial Armageddon) kept gold coins hidden in his mattress. Because they had structured Apple as a simple partnership rather than a corporation, the partners would be personally liable for the debts, and Wayne was afraid potential creditors would go after him. So he returned to the Santa Clara County office just eleven days later with a "statement of withdrawal" and an amendment to the partnership agreement. "By virtue of a re-assessment of understandings by and between all parties," it began, "Wayne shall hereinafter cease to function in the status of ‘Partner.’" It noted that in payment for his 10% of the company, he received $800, and shortly afterward $1,500 more.
Had he stayed on and kept his 10% stake, at the end of 2010 it would have been worth approximately $2.6 billion. Instead he was then living alone in a small home in Pahrump, Nevada, where he played the penny slot machines and lived off his social security check.
And:
Markkula shrugged and said okay. But Jobs got very upset. He cajoled Wozniak; he got friends to try to convince him; he cried, yelled, and threw a couple of fits. He even went to Wozniak’s parents’ house, burst into tears, and asked Jerry for help. By this point Wozniak’s father had realized there was real money to be made by capitalizing on the Apple II, and he joined forces on Jobs’s behalf. "I started getting phone calls at work and home from my dad, my mom, my brother, and various friends," Wozniak recalled. "Every one of them told me I’d made the wrong decision." None of that worked. Then Allen Baum, their Buck Fry Club mate at Homestead High, called. "You really ought to go ahead and do it," he said. He argued that if he joined Apple full-time, he would not have to go into management or give up being an engineer. "That was exactly what I needed to hear," Wozniak later said. "I could stay at the bottom of the organization chart, as an engineer." He called Jobs and declared that he was now ready to come on board.
More (#1) from Steve Jobs:
[no more clips, because Audible somehow lost all my bookmarks for the last two parts of the audiobook!]
From Feinstein’s The Shadow World:
A significant portion of the more than £1bn was paid into personal and Saudi embassy accounts at the venerable Riggs Bank opposite the White House on Pennsylvania Avenue, Washington DC. The bank of choice for Presidents, ambassadors and embassies had close ties to the CIA, with several bank officers holding full agency security clearance. Jonathan Bush, uncle of the President, was a senior executive of the bank at the time. But Riggs and the White House were stunned by the revelation that from 1999 money had inadvertently flowed from the account of Prince Bandar’s wife to two of the fifteen Saudis among the 9/11 hijackers.
More (#8) from The Shadow World:
And:
No legal action has been permitted in Albania against any of the senior politicians involved in the events that led to the deaths of the villagers.
And:
The arms industry receives unique treatment from government. Many companies were, and some still are, state-owned. Even those that have been privatized continue to be treated, in many ways, as if they were still in the public fold. Physical access to and enormous influence on departments of defence is commonplace. Government officials and ministers act as salespeople for private arms contractors as enthusiastically as they do for state-owned entities. Partly this is because they are seen as contributing to national security and foreign policy, as well as often playing substantial roles in the national economy. In many, if not all, countries of the world, arms companies and dealers play an important role in intelligence gathering and are involved in ‘black’ or secret operations.
The constant movement of staff between government, arms companies, the intelligence agencies and lobbying firms the world over only entrenches this special treatment. As do the contributions of money and support to political parties in both selling and purchasing countries. This also results in the companies and individuals in this industry exercising a disproportionate and usually bellicose influence on all manner of policymaking, be it on economic, foreign or national security issues.
It is for these reasons that arms companies and individuals involved in the trade very seldom face justice, even for transgressions that are wholly unrelated to their strategic contributions to the state. Political interventions, often justified in the name of national security, ensure that the arms trade operates in its own privileged shadow world, largely immune to the legal and economic vagaries experienced by other companies. Even when a brave prosecutor attempts to investigate and bring charges against an arms company or dealer, the matter is invariably settled with little or no public disclosure and seldom any admission of wrongdoing. And the investigator, whistle-blower or prosecutor inevitably finds their career prospects significantly diminished.
More (#7) from The Shadow World:
Remarkably, the Pentagon hasn’t been audited for over twenty years and recently announced that it hopes to be audit-ready by 2017, a claim that a bipartisan group of Senators thought unlikely.
And:
A defence industry insider with close links to the Pentagon put it to me that ‘the procurement system in the US is a fucking joke. Every administration says we need procurement reform and it never happens.’ Robert Gates on his reappointment as Secretary of Defense stated to Congress: ‘We need to take a very hard look at the way we go about acquisition and procurement.’ However, this is the same official who in June 2008 endorsed a Bush administration proposal to develop a treaty with the UK and Australia that would allow unlicensed trade in arms and services between the US and these countries. The proposal is procedurally scandalous and would lead to even less oversight but has generated little media coverage. In September 2010, with Robert Gates in office, the agreement was passed.
And:
The closest a company has come to debarment was the temporary suspension of BAE’s US export privileges while the State Department considered the matter. Specific measures seem to be taken to avoid applying debarment rules to major arms companies, in particular by charging companies with non-FCPA charges as in the case of BAE. A legislative effort was undertaken to debar Blackwater (Xe) from government contracts due to its FCPA violations. Legislation was introduced in May 2010 to debar any company that violates the FCPA, though with a waiver system in place that would require any federal agency to justify the use of a debarred company in a report submitted to Congress.
The mutual dependence between the government, Congress and defence companies means that, in practice, even serial corrupters are ‘too important’ to fail. For example, the US could not practically debar KBR, a company to which it has outsourced billions of dollars of its military functions. Similarly, debarring BAE would threaten its work on new arms projects and the maintenance of BAE products that the US military already uses.
And:
I asked Chuck what he thought of the Pentagon now. ‘It’s worse now. Things are worse today than they’ve ever been.’
More (#6) from The Shadow World:
And:
When Colombia considered buying light attack aircraft from Brazil rather than a US manufacturer, the senior American commander in the region wrote to Bogotá that the purchase would have a negative impact on Congressional support for future military aid to Colombia. The deal with Brazil fell through.
And:
Initially eight ships were produced for $100m. They were unusable: the hulls cracked and the engines didn’t work properly. The second-largest boat couldn’t even pass a simple water tank test and was put on hold. The largest ship, produced at a cost of over half a billion dollars, was also plagued by cracks in the hull, leading to fears of the hull’s complete collapse.
In May 2005, Congress cut the project’s budget in half, leading to the usual battery of letter writing, lobbying and campaign contributions that resulted in not only the avoidance of cuts to the disastrous programme but an increase to the budget of about $1bn a year, bringing the total project budget to $24bn. Finally, in April 2007, the Coast Guard took back the management of the project from the defence contractors. The first boats are expected to be ready for launch sometime in 2011, ten years after the 9/11 attacks that prompted the modernization effort in the first place.
More (#5) from The Shadow World:
From inside the Pentagon, Chuck Spinney described the process as follows:
"When you start a programme the prime management objective is to make it hard to cancel. The way to think about this is in terms of managing risk: you have performance risk and the bearers of the performance risk are the soldiers who are going to fight with the weapon. You have an economic risk, the bearers of which are the people paying for it, the tax payers. And then you have programmatic risk, that’s the risk that a programme would be cancelled for whatever reasons. Whether you are a private corporation or a public operation you always have those three risks. Now if you look at who bears the programmatic risks it’s the people who are associated with and benefit from the promotion and continuance of that programme. That would include the military and civilians whose careers are attached to its success, and the congressman whose district it may be made in, and of course the companies that make it. If you look at traditional engineering, you start by designing and testing prototypes. To reduce performance risk you test it and redesign it and test it, redesign it. In this way you evolve the most workable design, which in some circumstances may be very different from your original conception. This process also reduces the economic risk because you work bugs out of it beforehand and figure out how to make it efficiently. But the process increases the programmatic risk, or the likelihood of it being cancelled because it doesn’t work properly or is too expensive.
"But the name of the game in the Pentagon is to keep the money flowing to the programme’s constituents. So we bypass the classical prototyping phase and rush a new programme into engineering development before its implications are understood. The subcontractors and jobs are spread all over the country as early as possible to build the programme’s political safety net. But this madness increases performance and economic risk because you’re locking into a design before you understand the future consequences of your decision. It’s insane. If you are spending your own money you would never do it this way but we are spending other people’s money and because we won’t be the ones to use the weapon – so we are risking other people’s blood. So protecting the programme and the money flow takes priority over reducing risk. That’s why we don’t do prototyping and why we lie about costs and why soldiers in the field end up with weapons that do not perform as promised.
"In the US government money is power. The way you preserve that power is to eliminate decision points that might threaten the flow of money. So with the F-22 we should have built a combat capable prototype. But the Cold War was ending, and the Air Force wanted that cow out of the barn door before the door closed."
More (#4) from The Shadow World:
Armed with this new ‘open chequebook for arms sales’, Augustine and Lockheed’s vice-president for International Operations, Bruce Jackson, determined that their best hope of new business lay in an extended NATO. New entrants to the military alliance would be required to replace their Soviet-era weapons with systems compatible with NATO’s dominant Western members. Augustine toured Eastern Europe. In Romania he pledged that if the country’s government bought a new radar system from Lockheed Martin, the company would use its considerable clout in Washington to promote Bucharest’s NATO candidacy. In other words, a major defence manufacturer made clear that it was willing to reshape American international security and foreign policy to secure an arms order.
And:
By allocating so much public sector work to private companies, the Bush administration created a condition in which the nature and practice of government activities could be hidden under the cloak of corporate privacy. This severely limits both financial and political accountability. The financial activities of these companies are scrutinized primarily by their shareholders, if they are public companies, and occasionally by government auditors on a contract-by-contract basis. And of course, at a political level, it is not just feasible but common for the government to claim that a contractor had promised to do one thing but then did another, thus absolving government of responsibility.
This opaque operating environment, in addition to the secrecy afforded by national security, makes it extremely difficult to critically analyse and hold to account the massive military-industrial complex that drives the country’s predisposition to warfare and the increasing militarization of American society. What analysis there is tends to focus on the few corruption scandals that see the light of day.
More (#3) from The Shadow World:
In the late 1950s, Lockheed paid bribes of about $1.5m to $2m to various officials and a fee of $750,000 to Kodama to secure an order for 230 Starfighter planes. The details of the bribes were passed on to the CIA, which confirmed that every move made was approved by Washington. Lockheed was seen to be conducting a deep layer of Washington foreign policy.
This marked the high point of the Starfighter. It was sold to the German air force, and over a ten-year period crashed 178 times, killing a total of eighty-five German pilots. It earned the nickname ‘the Flying Coffin’, and a group of fifty widows of the pilots sued the company.
And:
Resistance to this massive build-up was slow in coming, partly because of its popularity among ordinary Americans. But towards the end of Reagan’s first term, criticism was voiced of both the excessive size of the build-up at a time of growing deficits and social needs, and fear that the massive increase in nuclear weapons could exacerbate the risk of a superpower nuclear confrontation. The latter led to the nuclear freeze campaign, one of the most inspiring citizens’ movements of the twentieth century, while the former forced at least a slow-down in the military build-up.
Among the most effective tools of Reagan’s critics were two vastly overpriced items: a $600 toilet seat and a $7,662 coffeemaker. At a time when Caspar Weinberger was telling Congress that there wasn’t ‘an ounce of waste’ in the largest peacetime military budget in the nation’s history, the spare parts scandal opened the door to a more objective – and damning – assessment of what the tens of billions in new spending was actually paying for. It also opened up Weinberger to ridicule, symbolized most enduringly in a series of cartoons by the Washington Post cartoonist Herblock in which the Defense Secretary was routinely shown with a toilet seat around his neck. Appropriately enough, the coffeemaker was procured for Lockheed’s C-5A transport plane, the poster child for cost overruns and abject performance.
A young journalist, who had been mentored by the Pentagon whistle-blower Ernie Fitzgerald, was central to exposing the scandals. Dina Rasor fingered the aircraft engine makers Pratt & Whitney for thirty-four engine parts that had all increased in price by more than 300 per cent in a year. A procurement official noted in the memo which revealed the scam that ‘Pratt & Whitney has never had to control prices and it will be difficult for them to learn.’
This profiteering at the taxpayer’s expense was surpassed by the Gould Corporation, which provided the Navy with a simple claw hammer, sold in a hardware store for $7, at a price of $435. The Navy suggested the charges – $37 for engineering support, $93 for manufacturing support and a $56 fee that was clear profit – were acceptable. Further revelations included Lockheed charging the Pentagon $591 for a clock for the C-5A and $166,000 for a cowling door to cover the engines. The exorbitant coffeemakers were exposed as poorly made and needing frequent repairs. Lockheed was also billing the taxpayer over $670 for an armrest pad that the Air Force could make itself for between $5 and $25. Finally, it was discovered that a $181 flashlight was built with twenty-year-old technology and a better one could be bought off the shelf for a fraction of the cost.
Lockheed defended itself by pointing out that spare parts were only 1.6 per cent of the defence budget, suggesting that those uncovering the fraud, waste and abuse were the enemies of peace and freedom and should remain silent in the interests of national unity in the face of global adversaries. Ernie Fitzgerald again brought sanity to bear, by suggesting that an overcharge was an overcharge, and that the same procurement practices used with toilet covers and coffeemakers when applied to whole aircraft like the C-5A made the planes ‘a flying collection of spare parts’.
Rasor also revealed that the Air Force planned to pay Lockheed $1.5bn to fix severe problems with the wings on the C-5A that the company itself had created. The wing fix was little more than a multibillion-dollar bailout for Lockheed.
Despite this litany of disasters, the Air Force engaged in illegal lobbying to help Lockheed win the contract to build the next-generation transport plane. In August 1981, a McDonnell Douglas plane was selected for the project, with the Air Force concerned about Lockheed’s proposed C-5B. Two weeks later the Air Force reversed its decision. Rasor could not believe that the Air Force ‘would want to have an updated version of one of its most embarrassing procurements’.
More (#2) from The Shadow World:
BAE also spoke about making a quieter bomb so that the users’ exposure to fumes would be reduced. And the company was reported to be making landmines which would turn into manure over time. As Allen put it, they would ‘regenerate the environment that they had initially destroyed’.
She continued: ‘It is very ironic and very contradictory, but I do think, surely, if all the weapons were made in this manner it would be a good thing.’ This green initiative led only to much mirth at the absurd notion of the ethical arms company making weapons and ammunition that would be more caring. The plan to make green bullets was scrapped two years later after BAE discovered that tipping bullets with tungsten instead of lead resulted in higher production costs, making the venture unprofitable.
And:
Viktor Bout’s evasion of justice for many years is an exemplar of how these issues have combined to bedevil the prosecution of arms dealers.
In February 2002, Belgian authorities issued an Interpol ‘red notice’* that they were seeking the arrest of Bout on charges of money laundering and arms dealing. In theory, if he was in a member state, local police authorities were obliged to arrest him and hand him over to Belgium.
...A plan was hatched to arrest him when he landed in Athens and bring him to justice in Belgium. Soon after Bout’s flight took off, British field agents sent an encrypted message to London informing them that ‘the asset’ was in the air. Minutes later the plane changed direction, abandoning its flight plan. It disappeared into mountainous territory out of reach of local radars. The plane re-emerged ninety minutes later and landed in Athens. When police boarded the aircraft it was empty except for the pilots. Twenty-four hours later Bout was spotted 3,000 miles away in the Democratic Republic of Congo. Bout’s crew had been informed of the plan to arrest him in Athens and had arranged to drop him off safely elsewhere. For a European investigator all signs pointed towards US complicity: ‘There were only two intelligence services that could have decrypted the British transmission in so short a time,’ he explained. ‘The Russians and the Americans. And we know for sure it was not the Russians.’
Shortly after Bout’s narrow escape he moved back into the safety of his ‘home territories’ in Russia. Russian officials were reluctant to see Bout prosecuted as he had close contacts within the Russian establishment through whom he had been able to source surplus matériel for years. In 2002, in response to a request to reveal his whereabouts, Russian authorities declared that Bout was definitely not in Russia.
As they were issuing this definitive denial Bout was giving a two-hour interview in the Moscow studios of one of the country’s largest radio stations. Shortly afterwards Russian authorities released a second clarifying statement. It was a thinly veiled message, in classic Orwellian doublespeak, that Bout was now untouchable. With this Russian protection – known locally as krisha – Bout was able to resume operations, albeit with a higher degree of caution. As a consequence, as recently as 2006, Bout was sending weapons to Islamist militants in Somalia and Hezbollah in Lebanon.
More (#1) from The Shadow World:
A stark example of this cost could be seen in the early years of South Africa’s democracy. With the encouragement of international arms companies and foreign states, the government spent around £6bn on arms and weapons it didn’t require at a time when its President claimed the country could not afford to provide the antiretroviral drugs needed to keep alive the almost 6 million of its citizens living with HIV and Aids. Three hundred million dollars in commissions were paid to middlemen, agents, senior politicians, officials and the African National Congress (ANC – South Africa’s ruling party) itself. In the following five years more than 355,000 South Africans died avoidable deaths because they had no access to the life-saving medication...
And:
Bandar says he was expecting a long discussion about the sale but "That was it. Two things were important. Are you friends of ours? Are you anticommunist? When I said yes to both, he said, ‘I will support it.’" Bandar then asked Reagan to voice his support to a reporter from the Los Angeles Times whom Dutton had tipped off. According to Bandar, the reporter asked: "Do you support the sale of the F-15s to Saudi Arabia that President Carter is proposing?" Reagan responded: "Oh yes, we support our friends and they should have the F-15s. But I disagree with him [Carter] on everything else."
From Weiner’s Enemies:
By law, Bonaparte had to ask the House and the Senate to create this new bureau...
On May 27, 1908, the House emphatically said no. It feared the president intended to create an American secret police. The fear was well-founded. Presidents had used private detectives as political spies in the past.
...Congress banned the Justice Department from spending a penny on Bonaparte’s proposal. The attorney general evaded the order. The maneuver might have broken the letter of the law. But it was true to the spirit of the president.
Theodore Roosevelt was "ready to kick the Constitution into the back yard whenever it gets in the way," as Mark Twain observed. The beginnings of the FBI rose from that bold defiance.
More (#5) from Enemies:
"We asked whether the Directorate of Intelligence can ensure that intelligence collection priorities are met," the report said. "It cannot. We asked whether the directorate directly supervises most of the Bureau’s analysts. It does not." It did not control the money or the people over whom it appeared to preside. "Can the FBI’s latest effort to build an intelligence capability overcome the resistance that has scuppered past reforms?" the report asked. "The outcome is still in doubt." These were harsh judgments, all the more stinging because they were true.
If the FBI could not command and control its agents and its authorities, the report concluded, the United States should break up the Bureau and start anew, building a new domestic intelligence agency from the ground up.
With gritted teeth, Mueller began to institute the biggest changes in the command structure of the Bureau since Hoover’s death. A single National Security Service within the FBI would now rule over intelligence, counterintelligence, and counterterrorism. The change was imposed effective in September 2005. As the judge had predicted, it would take the better part of five years before it showed results.
More (#4) from Enemies:
The Scots spent the summer and the fall piecing the hundreds of thousands of shards of evidence together. They got on-the-job training from FBI veterans like Richard Hahn—a man who had been combing through the wreckage of lethal bombings for fifteen years, ever since the unsolved FALN attack on the Fraunces Tavern in New York. They learned how the damage from a blast of Semtex looked different from the scorching from the heat of flame.
The Scots soon determined that bits of clothing with tags saying "Made in Malta" had been contained in a copper Samsonite Silhouette with the radio that held the bomb. But they did not tell the FBI. Then the Germans discovered a computer printout of baggage records from the Frankfurt airport; they showed a single suitcase from an Air Malta flight had been transferred to Pan Am 103 in Frankfurt. But they did not tell the Scots. The international teams of investigators reconvened in Scotland in January 1990. Once again, it was a dialogue of the deaf. Marquise had a terrible feeling that the case would never be solved.
"We’re having tons of problems with CIA. Lots of rivalry," Marquise said. "Scots are off doing their thing. You’ve got the Germans who are giving the records when they feel like it to the Scots. The FBI’s still doing its thing.… Everybody’s still doing their own thing."
Then, in June 1990, came small favors that paid big returns. Stuart Henderson, the new senior investigator in Scotland, shared one piece of evidence with Marquise: a photograph of a tiny piece of circuit board blasted into a ragged strip of the Maltese clothing. The Scots had been to fifty-five companies in seventeen countries without identifying the fragment. "They had no idea. No clue," Marquise said. "So they said, probably tongue-in-cheek, ‘You guys try. Give it a shot.’ "
The FBI crime laboratory gave the photo to the CIA. An Agency analyst had an image of a nearly identical circuit board, seized four years earlier from two Libyans in transit at the airport in Dakar, Senegal. On the back were four letters: MEBO. Nobody knew what MEBO meant.
And:
The FBI had no connectivity with the rest of American intelligence. Headquarters could not receive reports from the NSA or the CIA classified at the top secret level—and almost everything was classified top secret. Fresh intelligence could not be integrated into the FBI’s databases.
And:
The CIA water-boarded Abu Zubaydah eighty-three times in August and kept him awake for a week or more on end. It did not work. A great deal of what the CIA reported from the black site turned out to be false. The prisoner was not bin Laden’s chief of operations. He was not a terrorist mastermind. He had told the FBI everything he knew. He told the CIA things he did not know.
"You said things to make them stop, and those things were actually untrue, is that correct?" he was asked five years later in a tribunal at Guantánamo.
"Yes," he replied. "They told me, ‘Sorry, we discover that you are not Number Three, not a partner, not even a fighter.’ "
More (#3) from Enemies:
...Hanssen’s supervisors had discovered his one outstanding talent a few weeks after he arrived on duty: he was one of the very few people in the FBI who understood how computers worked. They assigned him to create an automated database about the Soviet contingent of diplomats and suspected spies in New York.
...In November 1979, Hanssen walked undetected into the midtown Manhattan offices of Amtorg, the Soviet trade mission that had served as an espionage front for six decades. The office was run by senior officers of the GRU. Hanssen knew where to go and who to see at Amtorg. That day, he volunteered his services as a spy. He turned over a sheaf of documents on the FBI’s electronic surveillance of the Soviet residential compound in New York, and he set up a system for delivering new secrets every six months through encoded radio communications. Hanssen’s next package contained an up-to-date list of all the Soviets in New York who the FBI suspected were spies. He delivered another revelation that shook the Soviet services to their roots: a GRU major general named Dmitri Polyakov had been working for America since 1961. He had been posted at the United Nations for most of those years. The Soviets recalled Polyakov to Moscow in May 1980. It is likely—though the question is still debated at the FBI—that Polyakov served thereafter as a channel of disinformation intended to mislead and mystify American intelligence.
Hanssen’s responsibilities grew. He was given the task of preparing the budget requests for the Bureau’s intelligence operations in New York. The flow of money showed the FBI’s targets for the next five years—and its plans for projects in collaboration with the CIA and the National Security Agency. His third delivery to the Soviets detailed those plans. And then he decided to lie low.
If Hanssen had stopped spying then and there, the damage he wrought still would have been unequaled in the history of the FBI. William Webster himself would conduct a postmortem after the case came to light in 2001. He called it "an incredible assault," an epochal disaster, "a five-hundred-year flood" that destroyed everything in its path.
Hanssen suspended his contacts with the Soviets in New York as a major case against an American spy was about to come to light. The investigation had reached across the United States into France, Mexico, and Canada before the FBI began to focus on a retired army code clerk named Joe Helmich in the summer of 1980. He was arrested a year later and sentenced to life in prison after he was convicted of selling the Soviets the codes and operating manual to the KL-7 system, the basic tool of encrypting communications developed by the NSA. He was a lowly army warrant officer with a top secret clearance; his treason had taken place in covert meetings with Soviet intelligence officers in Paris and Mexico City from 1963 to 1966; he was paid $131,000. He had sold the Soviets the equivalent of a skeleton key that let them decode the most highly classified messages of American military and intelligence officers during the Vietnam War.
Hanssen understood one of the most important aspects of the investigation: it had lasted for seventeen years. The FBI could keep a case of counterintelligence alive for a generation. There was no statute of limitations for espionage.
And:
The president denied it in public. But Revell knew it was true.
On the afternoon of November 13, 1986, the White House asked Revell to review a speech that President Reagan would deliver to the American people that evening. As he pored over the draft of the speech in North’s office, he pointed out five evident falsehoods.
"We did not—repeat, did not—trade weapons or anything else for hostages, nor will we," the president’s draft said. The United States would never "strengthen those who support terrorism"; it had only sold "defensive armaments and spare parts" to Iran. It had not violated its stance of neutrality in the scorched-earth war between Iran and Iraq; it had never chartered arms shipments out of Miami.
Revell knew none of this was true. He warned Judge Webster, who alerted Attorney General Meese. He was ignored.
...The president delivered the speech almost precisely as drafted, word for dissembling word.
Colonel North and his superior, the president’s national security adviser, Admiral John Poindexter, began shredding their records and deleting their computer files as fast as they could. But within the White House, one crucial fact emerged: they had skimmed millions of dollars in profits from the weapons sales to Iran and siphoned off the money to support the contras.
More (#2) from Enemies:
The connection seemed self-evident to him. Homosexuality and communism were causes for instant dismissal from American government service—and most other categories of employment. Communists and homosexuals both had clandestine and compartmented lives. They inhabited secret underground communities. They used coded language. Hoover believed, as did his peers, that both were uniquely susceptible to sexual entrapment and blackmail by foreign intelligence services.
The FBI’s agents became newly vigilant to this threat. "The Soviets knew, in those days, a government worker, if he was a homosexual, he’d lose his job," said John T. Conway, who worked on the Soviet espionage squad in the FBI’s Washington field office. Conway investigated a State Department official suspected of meeting a young, blond, handsome KGB officer in a gay bar. "It was a hell of an assignment," he said. "One night we had him under surveillance and he picked up a young kid, took him up to his apartment, kept him all night. Next day we were able to get the kid and get a statement from him and this guy in the State Department lost his job."
On June 20, 1951, less than four weeks after the Homer case broke, Hoover escalated the FBI’s Sex Deviates Program. The FBI alerted universities and state and local police to the subversive threat, seeking to drive homosexuals from every institution of government, higher learning, and law enforcement in the nation. The FBI’s files on American homosexuals grew to 300,000 pages over the next twenty-five years before they were destroyed. It took six decades, until 2011, before homosexuals could openly serve in the United States military.
And:
...Dyson had questions about the rule of law: "Can I put an informant in a college classroom? Or even on the campus? Can I penetrate any college organization? What can I do? And nobody had any rules or regulations. There was nothing..."
"This was going to come and destroy us," he said. "We were going to end up with FBI agents arrested. Not because what they did was wrong. But because nobody knew what was right or wrong." Not knowing that difference is a legal definition of insanity. Dyson’s premonitions of disaster would prove prophetic. In time, the top commanders of the FBI in Washington and New York would face the prospect of prison time for their work against the threat from the left. So would the president’s closest confidants.
And:
He laid out his accusations in twenty-seven numbered paragraphs, like the counts of a criminal indictment. Some dealt with Hoover’s racial prejudices; the ranks of FBI agents remained 99.4 percent white (and 100 percent male). Some dealt with Hoover’s use of Bureau funds to dress up his home and decorate his life. Some dealt with the damage he had done to American intelligence by cutting off liaisons with the CIA. Some came close to a charge of treason.
"You abolished our main programs designed to identify and neutralize the enemy," he wrote, referring to COINTELPRO and the FBI’s black-bag jobs on foreign embassies. "You know the high number of illegal agents operating on the east coast alone. As of this week, the week I am leaving the FBI for good, we have not identified even one of them. These illegal agents, as you know, are engaged, among other things, in securing the secrets of our defense in the event of a military attack so that our defense will amount to nothing. Mr. Hoover, are you thinking? Are you really capable of thinking this through? Don’t you realize we are betraying our government and people?"
Sullivan struck hardest at Hoover’s cult of personality: "As you know you have become a legend in your lifetime with a surrounding mythology linked to incredible power," he wrote. "We did all possible to build up your legend. We kept away from anything which would disturb you and kept flowing into your office what you wanted to hear … This was all part of the game but it got to be a deadly game that has accomplished no good. All we did was to help put you out of touch with the real world and this could not help but have a bearing on your decisions as the years went by." He concluded with a plea: "I gently suggest you retire for your own good, that of the Bureau, the intelligence community, and law enforcement." Sullivan leaked the gist of his letter to his friends at the White House and a handful of reporters and syndicated columnists. The rumors went out across the salons and newsrooms of Washington: the palace revolt was rising at the FBI. The scepter was slipping from Hoover’s grasp.
More (#1) from Enemies:
Attorney General Gregory tried to disavow the raids, but the Bureau would not let him. "No one can make a goat of me," de Woody said defiantly. "Everything I have done in connection with this roundup has been done under the direction of the Attorney General and the chief of the Bureau of Investigation."
The political storm over the false arrest and imprisonment of the multitudes was brief. But both Attorney General Gregory and the Bureau’s Bielaski soon resigned. Their names and reputations have faded into thin air. Their legacy remains only because it was Hoover’s inheritance.
And:
Then, to Hoover’s dismay, the judge admitted into evidence FBI reports alluding to the search for information on the Soviet atomic spy ring—a threat to the secrecy of Venona.
To protect the intelligence secrets of the FBI from exposure by the court, Hoover instituted a new internal security procedure on July 29, 1949. It was known as June Mail—a new hiding place for records about wiretaps, bugs, break-ins, black-bag jobs, and potentially explosive reports from the most secret sources. June Mail was not stored or indexed in the FBI’s central records but kept in a secret file room, far from the prying eyes of outsiders.
FBI headquarters issued a written order to destroy "all administrative records in the New York field office"—referring to the Coplon wiretaps—"in view of the immediacy of her trial." The written order contained a note in blue ink: "O.K.—H."
Despite Hoover’s efforts, the existence of the wiretaps was disclosed at the second trial—another layer of the FBI’s secrecy penetrated. Then the same FBI special agent who had lied at the first trial admitted that he had burned the wiretap records.
...The FBI had been caught breaking the law again. For the first time since the raids of 1920, lawyers, scholars, and journalists openly questioned the powers that Hoover exercised. Almost everyone agreed that the FBI should have the ability to wiretap while investigating treason, espionage, and sabotage. Of course taps would help to catch spies. But so did opening the mails, searching homes and offices, stealing documents, and planting bugs without judicial warrants—all standard conduct for the FBI, and all of it illegal. Even at the height of the Cold War, a free society still looked askance on a secret police.
From Roose’s Young Money:
Fashion Meets Finance hit a snag in 2008, when the financial sector nearly collapsed, taking bankers down a few notches on the Manhattan social ladder and necessitating a brief hiatus. But in 2009, it returned with a vengeance. Its organizers proclaimed proudly: "2008 was a confusing time, but we are here to announce the balance is restoring itself to the ecosystem of the New York dating community. We fear that news of shrinking bonuses, banks closing, and the Dow plummeting confused the gorgeous women of the city.…The uncertainty caused panic which caused irrational decisions—there’s going to be a two-year blip in the system where a hot fashion girl might commit to a pharmaceutical salesman.…Fashion Meets Finance has returned to let the women of fashion know that the recession is officially over. It might be a year before bonuses start inflating themselves again, but it will happen. Invest in the future; feel confident in your destiny. Hold on. It will only be a couple more years until you can quit your job and become a tennis mom."
I almost admired the candor with which Fashion Meets Finance accepted noxious social premises as fact. (One early advertisement read, "Ladies, you don’t need to worry that the cute guy at the bar works in advertising!") But others disagreed. Gawker called one gathering "an event where Manhattan banker-types and fashion slaves meet, consummate, and procreate certain genetics to create lineages of people you’d rather not know."
...This installment of Fashion Meets Finance, held after a yearlong break, had undergone a significant rebranding. Now, it was being billed as a charity event (proceeds were going to a nonprofit focused on Africa), and the cringe-worthy marketing slogans had been erased. Now, the financiers and fashionistas were joined by a smattering of young professionals from other industries: law, consulting, insurance, even a few female bankers.
After a few hours of drinking and socializing, I had filled my douchebag quota many times over. I had seen and heard the following things:
A banker showing off his expensive watch (which he called "my piece") to a gaggle of interested-looking women.
A former Lehman Brothers banker explaining his strategy for picking up women. "I use Lehman to break the ice—you know, get their sympathy. Then I tell them I make twice as much as Lehman paid me at my new job. They love my story, and then they end up in my bed."
A private equity associate using the acronym "P.J." to refer to his firm’s private jet.
A hedge fund trader giving dating advice to his down-and-out friend: "Girls come in many shapes and sizes. But just remember: when you hold them by the ankles and look down, they all look the same!"
As the night wore on, I identified the two primary strains of Fashion Meets Finance attendees. There were the merely curious, the people who had heard about the event from a friend and were intrigued enough about the premise to pay $25 for a ticket. These people mainly stood or sat on the perimeter of the roof deck, where they could observe (and, in some cases, laugh at) the commingling of the other partygoers.
And then there were the true believers. A portion of the attendees at Fashion Meets Finance seemingly had no idea that the event had become a punch line. They were bankers and fashionistas who were determined to find their matches at a superficial singles mixer, and they had no qualms about it. "I want the real deal!" said one female fashionista, who was sprawled out on a white sofa on Bar Basque’s terrace, sipping a vodka soda and watching the men walk by. "I’m really independent," she said, "and I don’t want someone who needs to be around me all the time. I want them to work 150 hours a week at Goldman Sachs."
From Tetlock’s Expert Political Judgment:
On close scrutiny, reputations for political genius rest on thin evidential foundations: genius is a matter of being in the right place at the right time. Hero worshippers reveal their own lack of historical imagination: their incapacity to see how easily things could have worked out far worse as a result of contingencies that no mortal could have foreseen. Political geniuses are just a close-call counterfactual away from being permanently pilloried as fools.
More (#2) from Expert Political Judgment:
First, vigorous competition among providers of intellectual products (off-the-shelf opinions) is not enough if the consumers are unmotivated to be discriminating judges of competing claims and counterclaims. This state of affairs most commonly arises when the mass public reacts to intellectuals peddling their wares on op-ed pages or in television studios, but it even arises in academia when harried, hyperspecialized faculty make rapid-fire assessments of scholars whose work is remote from their own. These consumers are rationally ignorant. They do not think it worth their while trying to gauge quality on their own. So, they rely on low-effort heuristics that prize attributes of alleged specialists, such as institutional affiliation, fame, and even physical attractiveness, that are weak predictors of epistemic quality. Indeed, our data—as well as other work—suggest that consumers, especially the emphatically self-confident hedgehogs among them, often rely on low-effort heuristics that are negative predictors of epistemic quality. Many share Harry Truman’s oft-quoted preference for one-armed advisers.
More (#1) from Expert Political Judgment:
And:
Exploring these what-if possibilities might seem a gratuitous reminder to families of victims of how unnecessary the deaths were. But the exercise is essential for appreciating why the contributory causes of one accident do not permit the NTSB to predict plane crashes in general. Pilots are often tired; bad weather and cryptic communication are common; radio communication sometimes breaks down; and people facing death frequently panic. The NTSB can pick out, post hoc, the ad hoc combination of causes of any disaster. They can, in this sense, explain the past. But they cannot predict the future. The only generalization that we can extract from airplane accidents may be that, absent sabotage, crashes are the result of a confluence of improbable events compressed into a few terrifying moments.
If a statistician were to conduct a prospective study of how well retrospectively identified causes, either singly or in combination, predict plane crashes, our measure of predictability—say, a squared multiple correlation coefficient—would reveal gross unpredictability. Radical skeptics tell us to expect the same fate for our quantitative models of wars, revolutions, elections, and currency crises. Retrodiction is enormously easier than prediction.
And:
Psychological skeptics are also not surprised when people draw strong lessons from brief runs of forecasting failures or successes. Winning forecasters are often skilled at concocting elaborate stories about why fortune favored their point of view. Academics can quickly spot the speciousness of these stories when the forecaster attributes her success to a divinity heeding a prayer or to planets being in the correct alignment. But even these observers can be gulled if the forecaster invokes an explanation in intellectual vogue.
From Sabin’s The Bet:
And:
More (#3) from The Bet:
More (#2) from The Bet:
And:
More (#1) from The Bet:
And:
From Yergin’s The Quest:
The first was "Order #1—De-Baathification of Iraqi Society." Some two million people had belonged to Saddam’s Baath Party. Some were slavish and brutal followers of Saddam; some were true believers. Many others were compelled to join the Baath Party to get along in their jobs and rise up in the omnipresent bureaucracies and other government institutions that dominated the economy, and to ensure that their children had educational opportunities in a country that had been ruled by the Baathists for decades...
Initially, de-Baathification was meant only to lop off the top of the hierarchy, which needed to be done immediately. But as rewritten and imposed, it reached far down into the country’s institutions and economy, where support for the regime was less ideological and more pragmatic. The country was, as one Iraqi general put it, "a nation of civil servants." Many schoolteachers were turned out of their jobs and left with no income. The way the purge was applied removed much of the operational capability from government ministries, dismantled the central government, and promoted disorganization. It also eliminated a wide swath of expertise from the oil industry. Broadly, it set the stage for a radicalization of Iraqis—especially Sunnis, stripped of their livelihood, pensions, access to medical care, and so forth—and helped to create conditions for the emergence of Al Qaeda in Iraq. In the oil industry, the result of its almost blanket imposition was to further undermine operations.
...The problem of inadequate troop levels was compounded by Order #2 by the Coalition Provisional Authority—"Dissolution of Entities"—which dismissed the Iraqi Army. Sending or allowing more than 400,000 soldiers, including the largely Sunni officer corps, to go home, with no jobs, no paychecks, no income to support their families, no dignity—but with weapons and growing animus to the American and British forces—was an invitation to disaster. The decision seems to have been made almost off-hand, somewhere between Washington and Baghdad, with little consideration or review. It reversed a decision made ten weeks earlier to use the Iraqi Army to help maintain order. In bluntly criticizing the policy to Bremer, one of the senior U.S. officers used an expletive. Rather than responding to the substance of the objection, Bremer said that he would not tolerate such language in his office and ordered the officer to leave the room.
More (#7) from The Quest:
More (#6) from The Quest:
Barack Obama flew in early one morning toward the end of the conference, with the intention of leaving later in the day. Shortly after his arrival, he was told by Secretary of State Hillary Clinton, "Copenhagen was the worst meeting I’ve been to since eighth-grade student council."
After sitting in a confusing meeting with a group of leaders, Obama turned to his own staff and said he wanted, urgently, to see Premier Wen Jiabao of China. Unfortunately, he was told, the premier was on his way to the airport. But then, no: word came back that Wen was still somewhere in the conference center. Obama and his aides started off at a fast pace to find him. Time was short, for Obama himself was scheduled to leave in a couple of hours, hoping to beat a blizzard that was bearing down on Washington.
At the end of a long corridor, Obama came upon a surprised security guard outside the conference room that was the office of the Chinese delegation. Despite the guard’s panicked efforts, Obama brushed right past him and burst into the room. Not only was Wen there but, to Obama’s surprise, he found that so were the other members of what was now known as the BASIC group—President Luiz Inácio Lula da Silva of Brazil, President Jacob Zuma of South Africa, and Prime Minister Manmohan Singh of India—huddling to find a common position. For their part, they were no less taken aback by the sudden, unexpected appearance of the president of the United States. But they were hardly going to turn Obama away. He took a seat next to Lula and across from Wen. Wen, overcoming his surprise, passed over to Obama the draft they were working on. The president read it quickly and said it was good. But, he said, he had a "couple of points" to add.
Thereupon followed a drafting session with Obama more or less in the role of scribe. At one point the chief Chinese climate negotiator wanted to strenuously disagree with Obama, but Wen instructed that this interjection not be translated.
Finally, after much give-and-take, some of it heated, they came to an agreement. There would be no treaty and no legally binding targets. Instead developed and developing countries would adopt parallel nonbinding pledges to reduce their emissions. That would be accompanied by a parallel understanding that the "mitigation actions" undertaken by developing countries be "subject to international measurement, reporting and verification." The agreement also crystallized the prime objective of preventing temperatures from rising more than 2°C (3.6°F). The BASIC leaders tossed it to Obama to secure approval from European leaders, Chancellor Angela Merkel of Germany, President Nicolas Sarkozy of France, and Prime Minister Gordon Brown of the UK. The Europeans did so, but only reluctantly, as they wanted something much stronger. Obama then took off, beating the snowstorm back to Washington.
And:
"And there’s no red ribbon to cut." Conservation—energy efficiency—may be so obvious as a solution to cost and environmental issues. But there is no photo op, no opening ceremony where government officials and company executives can cut a ribbon, smile broadly into the camera, and inaugurate a grand new facility. He shook his head as he considered one of the most powerful of the life lessons he had learned from his deep immersion in global politics.
"It’s very important to be able to cut a red ribbon."
And:
This sense of mottainai has underpinned Japan’s approach to energy efficiency, which was codified in the Energy Conservation Law of 1979. The law was expanded in 1998 with the introduction of the Top Runner program. It takes the most efficient appliance or motorcar in a particular class—the "top runner"—and then sets a requirement that all appliances and cars must, within a certain number of years, exceed the efficiency of the top runner. This creates a permanent race to keep upping the ante on efficiency. The results are striking: the average efficiency of videocassette recorders increased 74 percent between 1997 and 2003. Even television sets improved by 26 percent between 1997 and 2003. Further amendments to the law mandate improvements by factories and buildings, and require them to adopt efficiency plans.
And:
By 1985, 95 percent of all new cars sold in Brazil ran exclusively on "alcohol."
More (#5) from The Quest:
The road to Rio was actually quite long; it had begun more than two centuries earlier, in the Swiss Alps. But what had started as an obsession by a handful of researchers with the past, with glaciers and the mysteries of the Ice Age, was now set to become a dominating energy issue for the future.
And:
After a slow start, the buying and selling of allowances became standard practice among utilities. The results in the years since have been very impressive. Emissions trading delivered much larger reductions, at much lower costs, and much more speedily, than what would have been anticipated with a regulatory system. By 2008, emissions had fallen from the 1980 level by almost 60 percent. As a bonus, the rapid reduction in emissions meant less lung disease and thus significant savings on health care.
The impact on thinking about how to solve environmental problems was enormous. "We are unaware of any other U.S. environmental program that has achieved this much," concluded a group of MIT researchers, "and we find it impossible to believe that any feasible alternative command-and-control program could have done nearly as well." Coase’s theorem worked; markets were vindicated. Within a decade, a market-based approach to pollution had gone from immorality and heresy to almost accepted wisdom. The experience would decisively shape the policy responses in the ensuing debate over how to deal with climate change. Overall, the evidence on SO2 was so powerful that it was invoked again and again in the struggles over climate change policy.
And:
And in the heart of its opinion, the Court said that CO2—even though it was produced not only by burning hydrocarbons but by breathing animals—was indeed a pollutant that "may reasonably be anticipated to endanger public health and welfare." And just to be sure not to leave any doubt as to how it felt, the majority added that the EPA’s current stance of nonregulation was "arbitrary" and "capricious" and "not in accordance with the law."
The consequences were enormous; for it meant that if the U.S. Congress did not legislate regulation of carbon, the EPA had the authority—and requirement—to wield its regulatory machinery to achieve the same end by making an "endangerment finding." Two out of three of the branches of the federal government were now determined that the government should move quickly to control CO2.
More (#4) from The Quest:
The president of the National Academy of Sciences, impressed by the briefing, set up a special task force under Jule Charney. Charney had moved from Princeton to MIT where, arguably, he had become America’s most prominent meteorologist. Issuing its report in 1979, the Charney Committee declared that the risk was very real. A few other influential studies came to similar conclusions, including one by the JASON committee, a panel of leading physicists and other scientists that advised the Department of Defense and other government agencies. It concluded that there was "incontrovertible evidence that the atmosphere is indeed changing and that we ourselves contribute to that change." The scientists added that the ocean, "the great and ponderous flywheel of the global climate system," was likely to slow observable climate change. The "JASONs," as they were sometimes called, said that "a wait-and-see policy may mean waiting until it is too late."
The campaign "around town" led to highly attended Senate hearings in April 1980. The star of the hearing was Keeling’s Curve. After looking at a map presented by one witness that showed the East Coast of the United States inundated by rising sea waters, the committee chair, Senator Paul Tsongas from Massachusetts, commented with rising irony: "It means good-bye Miami, Corpus Christi . . . good-bye Boston, good-bye New Orleans, good-bye Charleston. . . . On the bright side, it means we can enjoy boating at the foot of the Capitol and fishing on the South Lawn."
And:
Then came the epic crisis that threatened to scuttle the entire IPCC process: At 6:00 p.m. the U.N. translators walked off the job. They had come to the end of their working day and they were not going to work overtime. This was nonnegotiable. Those were their work rules. But without translators the delegates could not communicate among themselves, the meeting could not go on, there would be no report to the General Assembly and no resolution on climate change. But then the French chairman of the session, who had insisted on speaking French all week, made a huge concession. He agreed to switch to English, in which, it turned out, he was exceedingly fluent.
The discussions and debates now continued in English, and progress was laboriously made. But the chief Russian delegate sat silent, angrily scowling, wreathed in cigarette smoke. Without his assent, there would be no final report, and he gave no sign of coming on board.
Finally one of the scientists from the American delegation who happened to speak Russian approached him. He made a stunning discovery. The Russian did not speak English, and he was certainly not going to sign on to something he did not understand. The American scientist turned himself into a translator, and the Russian finally agreed to the document. Thus consensus was wrought. The IPCC was rescued—just in time.
More (#3) from The Quest:
The title made clear what the article was all about: "Carbon Dioxide Exchange Between Atmosphere and Ocean and the Question of an Increase in Atmospheric CO2 During the Past Decades." Their paper invoked both Arrhenius and Callendar. Yet the article itself reflected ambiguity. Part of it suggested that the oceans would absorb most of the carbon, just as Revelle’s Ph.D. had argued, meaning that there would be no global warming triggered by carbon. Yet another paragraph suggested the opposite; that, while the ocean would absorb CO2, much of that was only on a temporary basis, owing to the chemistry of sea water, and the lack of interchange between warmer and cooler levels, and that the CO2 would seep back into the atmosphere. In other words, on a net basis, the ocean absorbed much less CO2 than expected. If not in the ocean, there was only one place for the carbon to go, and that was back into the atmosphere. That meant that atmospheric concentration of CO2 was destined, inevitably, to rise. The latter assertion was a late addition by Revelle, literally typed on a different kind of paper and then taped onto the original manuscript.
Before sending off the article, Revelle appended a further last-minute thought: The buildup of CO2 "may become significant during future decades if industrial fuel combustion continues to rise exponentially," he wrote. "Human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future." This last sentence would reverberate down through the years in ways that Revelle could not have imagined. Indeed, it would go on to achieve prophetic status—"quoted more than any other statement in the history of global warming."
Yet it was less a warning and more like a reflection. For Revelle was not worried. Like Svante Arrhenius who had tried 60 years earlier to quantify the effect of CO2 on the atmosphere, Revelle did not foresee that increased concentrations would be dangerous. Rather, it was a very interesting scientific question.
And:
"The weather in this country is practically unpredictable," the commander in chief Dwight Eisenhower had complained while anxiously waiting for the next briefing. The forecasts were for very bad weather. How could 175,000 men be put at risk in such dreadful circumstances? At best, the reliability of the weather forecasts went out no more than two days; the stormy weather over the English Channel reduced the reliability to 12 hours. So uncertain was the weather that at the last moment the invasion scheduled for June 5 was postponed, and ships that had already set sail were called back just in time before the Germans could detect them.
Finally, on the morning of June 5, the chief meteorologist said, "I’ll give you some good news." The forecasts indicated that a brief break of sorts in the weather was at hand. Eisenhower sat silently for 30 or 40 seconds, in his mind balancing success against failure and the risk of making a bad decision. Finally, he stood up and gave the order, "Okay, let’s go." With that was launched into the barely marginal weather of June 6, 1944, the greatest armada in the history of the world. Fortunately, the German weather forecasters did not see the break and assured the German commander, Erwin Rommel, that he did not have to worry about an invasion.
A decade later, knowing better than anyone else the strategic importance of improved weather knowledge, Eisenhower, now president, gave the "let’s go" order for the International Geophysical Year.
More (#2) from The Quest:
What unfolded in California graphically exposed the dangers of misdesigning a regulatory system. It was also a case study of how short-term politics can overwhelm the needs of sound policy.
According to popular lore, the crisis was manufactured and manipulated by cynical and wily out-of-state power traders, the worst being Enron, the Houston-based natural gas and energy company. Its traders and those of other companies were accused of creating and then exploiting the crisis with a host of complex strategies. Some traders certainly did blatantly, and even illegally, exploit the system and thus accentuated its flaws. Yet that skims over the fundamental cause of the crisis. For, by then, the system was already broken.
The California crisis resulted from three fundamental factors: The first was an unworkable form of partial deregulation that explicitly rejected the normal power-market stabilizers that could have helped avoid or at least blunt the crisis but instead built instability into the new system. The second was a sharp, adverse turn in supply and demand. The third was a political culture that wanted the benefits of increased electric power but without the costs.
And:
But Governor Gray Davis was still dead set against the one thing that would have immediately ameliorated the situation — letting retail prices rise. Instead he had the state step in and negotiate, of all things, long-term contracts, as far out as twenty years. Here the state demonstrated a stunning lack of commercial acumen—buying at the top of the market, committing $40 billion for electricity that would probably be worth only $20 billion in the years to come. With this the state transferred the financial crisis of the utilities to its own books, transforming California’s projected budget surplus of $8 billion into a multibillion-dollar state deficit.
And:
The French mathematician Joseph Fourier — a friend of Napoléon’s and a sometime governor of Egypt — was fascinated by the experiments of Saussure, whom he admiringly described as "the celebrated voyager." Fourier, who devoted much research to heat flows, was convinced that Saussure was right. The atmosphere, Fourier thought, had to function as some sort of top or lid, retaining heat. Otherwise, the earth’s temperature at night would be well below freezing.
But how to prove it? In the 1820s Fourier set out to do the mathematics. But the work was daunting and extremely inexact, and his inability to work out the calculations left him deeply frustrated. "It is difficult to know up to what point the atmosphere influences the average temperature of the globe," he lamented, for he could find "no regular mathematical theory" to explain it. With that, he figuratively threw up his hands, leaving the problem to others.
And:
While Callendar found this obsessively interesting, he, like Arrhenius, was hardly worried. He too thought this would make for a better, more pleasant world—"beneficial to mankind"—providing, among other things, a boon for agriculture. And there was a great bonus. "The return of the deadly glaciers should be delayed indefinitely."
But Callendar was an amateur, and the professionals in attendance that night at the Royal Meteorological Society did not take him very seriously. After all, he was a steam engineer.
Yet what Callendar described — the role of CO2 in climate change — eventually became known as the Callendar Effect. "His claims rescued the idea of global warming from obscurity and thrust it into the marketplace of ideas," wrote one historian. But it was only a temporary recovery. For over a number of years thereafter the idea was roundly dismissed. In 1951 a prominent climatologist observed that the CO2 theory of climate change "was never widely accepted and was abandoned." No one seemed to take it very seriously.
More (#1) from The Quest:
As things turned out, it happened much sooner — in 2009, amid the Great Recession. That year China, accelerating in the fast lane, not only overtook the United States but pulled into a clear lead.
And:
To accomplish his goals, Rickover built a cadre of highly skilled and highly trained officers for the nuclear navy, who were constantly pushed to operate at peak standards of performance...
In Rickover’s tireless campaign to build a nuclear submarine and bulldoze through bureaucracy, he so alienated his superiors that he was twice passed over for promotion to admiral. It took congressional intervention to finally secure him the title.
Rickover’s methods worked. The development of the technology, the engineering, and construction for a nuclear submarine—all these were achieved in record time. The first nuclear submarine, the USS Nautilus, was commissioned in 1954. The whole enterprise had been achieved in seven years—compared with the quarter century that others had predicted. In 1958, to great acclaim, the Nautilus accomplished a formidable, indeed unthinkable, feat—it sailed 1,400 miles under the North Pole and the polar ice cap. The journey was nonstop except for those times when the ship got temporarily stuck between the massive ice cap and the shallow sea bottom. When, on the ship’s return, the Nautilus’s captain was received at the White House, the abrasive Rickover, who was ultimately responsible for the very existence of the Nautilus, was pointedly excluded from the ceremony.
...By the time Rickover finally retired in 1986, 40 percent of the navy’s major combatant ships would be nuclear propelled.
From The Second Machine Age:
After examining many examples of invention, innovation, and technological progress, complexity scholar Brian Arthur became convinced that stories like the invention of PCR are the rule, not the exception. As he summarizes in his book The Nature of Technology, "To invent something is to find it in what previously exists." Economist Paul Romer has argued forcefully in favor of this view, the so-called ‘new growth theory’ within economics, in order to distinguish it from perspectives like Gordon’s. Romer’s inherently optimistic theory stresses the importance of recombinant innovation.
More (#1) from The Second Machine Age:
From Making Modern Science:
In the early twentieth century, the legacy of the rationalist program was transformed in the work of Marxists such as J. D. Bernal. Bernal, an eminent crystallographer, berated the scientific community for selling out to the industrialists. In his Social Function of Science (1939) he called for a renewed commitment to use science for the good of all. His 1954 Science in History was a monumental attempt to depict science as a potential force for good (as in the Enlightenment program) that had been perverted by its absorption into the military-industrial complex. In one important respect, then, the Marxists challenged the assumption that the rise of science represented the progress of human rationality. For them, science had emerged as a byproduct of the search for technical mastery over nature, not a disinterested search for knowledge, and the information it accumulated tended to reflect the interests of the society within which the scientist functioned. The aim of the Marxists was not to create a purely objective science but to reshape society so that the science that was done would benefit everyone, not just the capitalists. They dismissed the program advocated by Whitehead as a smokescreen for covering up science’s involvement in the rise of capitalism. Similarly, many intellectual historians reacted furiously to what they regarded as the denigration of science implicit in works such as the Soviet historian Boris Hessen’s "The Social and Economic Roots of Newton’s ‘Principia’" from 1931. The outbreak of World War II highlighted two conflicting visions of science’s history, both of which linked it to the dangers revealed in Nazi Germany. The optimistic vision of the Enlightenment had vanished along with the idea of inevitable progress in the calamities that the Western world had now experienced. Science must either turn its back on materialism and renew its links with religion or turn its back on capitalism and begin fighting for the common good.
More (#1) from Making Modern Science:
From Johnson’s Where Good Ideas Come From:
But the most fascinating discovery in West’s research came from the data that didn’t turn out to obey Kleiber’s law. West and his team discovered another power law lurking in their immense database of urban statistics. Every datapoint that involved creativity and innovation — patents, R&D budgets, "supercreative" professions, inventors — also followed a quarter-power law, in a way that was every bit as predictable as Kleiber’s law. But there was one fundamental difference: the quarter-power law governing innovation was positive, not negative. A city that was ten times larger than its neighbor wasn’t ten times more innovative; it was seventeen times more innovative. A metropolis fifty times bigger than a town was 130 times more innovative.
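The numbers in this passage fit a simple superlinear power law. As a quick check (my own illustration, not from the book), an exponent of about 5/4, a "positive quarter-power" above linear, reproduces both figures:

```python
# Superlinear scaling sketch: innovation grows as population^(5/4).
# The 5/4 exponent is an assumption inferred from the quote's numbers.
def innovation_ratio(size_ratio, exponent=1.25):
    """Predicted innovation multiple for a city `size_ratio` times larger."""
    return size_ratio ** exponent

print(innovation_ratio(10))  # ~17.8, matching the quote's "seventeen times"
print(innovation_ratio(50))  # ~133, matching the quote's "130 times"
```

By contrast, Kleiber's law for organisms has a negative quarter-power (a 3/4 exponent), so larger animals are proportionally slower, while larger cities are proportionally more innovative.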
From Gertner’s The Idea Factory:
More (#2) from The Idea Factory:
Mathews argued that Shannon’s theorem "was the mathematical basis for breaking up the Bell System." If that was so, then perhaps Shockley’s work would be the technical basis for a breakup. The patents, after all, were now there for the taking. And depending on how it played out, one might attach a corollary to Kelly’s loose formula for innovation—namely, that in any company’s greatest achievements one might, with the clarity of hindsight, locate the beginnings of its own demise.
And:
One academic in the audience that day in Princeton suggested to Pierce that he publish his talk, which he soon did in the journal Jet Propulsion. "But what could be done about satellite communications in a practical way?" Pierce wondered. "At the time, nothing." He questioned whether he had fallen into a trap of speculation, something a self-styled pragmatist like Pierce despised. There were no satellites yet of any kind, and there were apparently no rockets capable of launching such devices. It was doubtful, moreover, whether the proper technology even existed yet to operate a useful communications satellite. As Pierce often observed ruefully, "We do what we can, not what we think we should or what we want to do."
More (#1) from The Idea Factory:
I’m sure that I’ve seen your answer to this question somewhere before, but I can’t recall where: Of the audiobooks that you’ve listened to, which have been most worthwhile?
I keep an updated list here.
I guess I might as well post quotes from (non-audio) books here as well, when I have no better place to put them.
First up is Revolution in Science.
Starting on page 45:
Of course, there have been others who have said dramatically that they have produced a new science (Tartaglia, Galileo) or a new astronomy (Kepler) or a "new way of philosophizing" (Gilbert). We would not expect to find many explicit references to a revolution in science prior to the late 1600s. Of the three eighteenth-century scientists who claimed to be producing a revolution, only Lavoisier succeeded in eliciting the same judgment of his work from his contemporaries and from later historians and scientists.
This amazingly high percentage of self-proclaimed revolutionary scientists (30% or more) seems like a result of selection bias, since most scientists with oversized egos are not even remembered. I wonder what fraction of actual scientists (not your garden-variety crackpots) insist on having produced a revolution in science.
From Sunstein’s Worst-Case Scenarios:
More (#2) from Worst-Case Scenarios:
The first is that public opinion in the United States greatly matters, at least if it is reflected in actual behavior. When ozone depletion received massive attention in the media, American consumers responded by greatly reducing their consumption of aerosol sprays containing CFCs. This action softened industry opposition to regulation, because product lines containing CFCs were no longer nearly as profitable. In addition, market pressures from consumers spurred technological innovation in developing CFC substitutes. In the environmental domain as elsewhere, markets themselves can be technology-forcing. At the same time, public opinion put a great deal of pressure on public officials, affecting the behavior of legislators and the White House alike.
In Europe, by contrast, those involved in CFC production and use felt little pressure from public opinion, certainly in the early stages. The absence of such pressure, combined with the efforts of well-organized private groups, helped to ensure that European nations would take a weak stand on the question of regulation, at least at the inception of negotiations. In later stages, public opinion and consumer behavior were radically transformed in the United Kingdom and in Europe, and the transformation had large effects on the approach of political leaders there as well.
With respect to climate change, the attitude of the United States remains remarkably close to that of pre-Montreal Europe, urging regulators to "wait and learn"; to date, research and voluntary action rather than emission reduction mandates have been recommended by high-level officials. It is true that since 1990 the problem of climate change has received a great deal of media attention in the United States. But the public has yet to respond to that attention through consumer choices, and the best evidence suggests that most American citizens are not, in fact, alarmed about the risks associated with a warmer climate. American consumers and voters have put little pressure on either markets or officials to respond to the risk.
...The second lesson is that international agreements addressing global environmental problems will be mostly ineffective without the participation of the United States, and the United States is likely to participate only if the domestic benefits are perceived to be at least in the general domain of the domestic costs.
More (#5) from Worst-Case Scenarios:
Perhaps we can agree that pure uncertainty is rare. Perhaps we can agree that, at worst, regulatory problems involve problems of "bounded uncertainty," in which we cannot specify probabilities within particular bands. Maybe the risk of a catastrophic outcome is above 1 percent and below 10 percent, but maybe within that band it is impossible to assign probabilities. A sensible approach, then, would be to ask planners to identify a wide range of possible scenarios and to select approaches that do well for most or all of them. Of course, the pervasiveness of uncertainty depends on what is actually known, and in the case of climate change, people dispute what is actually known. Richard Posner believes that "no probabilities can be attached to the catastrophic global-warming scenarios, and without an estimate of probabilities an expected cost cannot be calculated." A 1994 survey of experts showed an extraordinary range of estimated losses from climate change, varying from no economic loss to a 20 percent decrease in gross world product—a catastrophic decline in the world’s well-being.
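The "wide range of scenarios" approach Sunstein describes can be made concrete. Here is a toy illustration of my own (not from the book): when the catastrophe probability is known only to lie in a band, rank each policy by its worst expected loss over that band and pick the best worst case (a maximin rule). All policy names and numbers are invented.

```python
# Toy maximin under "bounded uncertainty": the catastrophe probability
# is only known to lie in a band, e.g. between 1% and 10%.
def expected_loss(p_catastrophe, policy_cost, catastrophe_loss, risk_reduction):
    # Total expected loss = policy cost + residual expected catastrophe harm.
    return policy_cost + (1 - risk_reduction) * p_catastrophe * catastrophe_loss

def worst_case(policy, band=(0.01, 0.10), catastrophe_loss=100.0):
    # Loss is monotone in p, so checking the band's endpoints suffices.
    cost, reduction = policy
    return max(expected_loss(p, cost, catastrophe_loss, reduction) for p in band)

# Each policy: (cost, fraction of catastrophe risk eliminated) -- invented numbers.
policies = {"do nothing": (0.0, 0.0), "moderate": (2.0, 0.5), "aggressive": (5.0, 0.9)}
best = min(policies, key=lambda name: worst_case(policies[name]))
print(best)  # -> aggressive (its worst case, ~6, beats ~10 and ~7)
```

The point is not the specific numbers but the decision rule: no single probability estimate is required, only a band, which is exactly the "bounded uncertainty" Sunstein contrasts with pure uncertainty.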
More (#4) from Worst-Case Scenarios:
Availability helps to explain the operation of the Precautionary Principle for a simple reason: Sometimes a certain risk, said to call for precautions, is cognitively available, whereas other risks, including the risks associated with regulation itself, are not. For example, everyone knows that nuclear power is potentially dangerous; the associated risks, and the worst-case scenarios, are widely perceived in the culture, because of the Chernobyl disaster and popular films about nuclear catastrophes. By contrast, a relatively complex mental operation is involved in the judgment that restrictions on nuclear power might lead people to depend on less safe alternatives, such as fossil fuels. In many cases where the Precautionary Principle seems to offer guidance, the reason is that some of the relevant risks are available while others are barely visible.
...But there is another factor. Human beings tend to be loss averse, which means that a loss from the status quo is seen as more distressing than a gain is seen as desirable… Because we dislike losses far more than we like corresponding gains, opportunity costs, in the form of forgone gains, often have a small impact on our decisions. When we anticipate a loss of what we already have, we often become genuinely afraid, in a way that greatly exceeds our feelings of pleasurable anticipation when we anticipate some addition to our current holdings.
The implication in the context of danger is clear: People will be closely attuned to the potential losses from any newly introduced risk, or from any aggravation of existing risks, but far less concerned about future gains they might never see if a current risk is reduced. Loss aversion often helps to explain what makes the Precautionary Principle operational. The status quo marks the baseline against which gains and losses are measured, and a loss from the status quo seems much more "bad" than a gain from the status quo seems good.
This is exactly what happens in the case of drug testing. Recall the emphasis, in the United States, on the risks of insufficient testing of medicines as compared with the risks of delaying the availability of those medicines. If there is a lot of testing, people may get sicker, and even die, simply because medicines are not made available. But if the risks of delay are off-screen, the Precautionary Principle will appear to give guidance notwithstanding the objections I have made. At the same time, the lost benefits sometimes present a devastating problem with the use of the Precautionary Principle. In the context of genetic modification of food, this is very much the situation; many people focus on the risks of genetic modification without also attending to the benefits that might be lost by regulation or prohibition. We can find the same problem when the Precautionary Principle is invoked to support bans on nonreproductive cloning. For many people, the possible harms of cloning register more strongly than the potential therapeutic benefits that would be made unattainable by a ban on the practice.
More (#3) from Worst-Case Scenarios:
And:
In some cases, serious precautions would actually run afoul of the Precautionary Principle. Consider the "drug lag," produced whenever the government takes a highly precautionary approach to the introduction of new medicines and drugs onto the market. If a government insists on this approach, it will protect people against harms from inadequately tested drugs, in a way that fits well with the goal of precaution. But it will also prevent people from receiving potential benefits from those very drugs, and hence subject people to serious risks that they would not otherwise face. Is it "precautionary" to require extensive premarket testing, or to do the opposite? In 2006, 50,000 dogs were slaughtered in China, and the slaughter was defended as a precautionary step against the spread of rabies. But the slaughter itself caused a serious harm to many animals, and it inflicted psychological harms on many dog-owners, and even physical injuries on those whose pets were clubbed to death during walks. Is it so clear that the Precautionary Principle justified the slaughter? And even if the Precautionary Principle could be applied, was the slaughter really justified?
Or consider the case of DDT, often banned or regulated in the interest of reducing risks to birds and human beings. The problem with such bans is that, in poor nations, they eliminate what appears to be the most effective way of combating malaria. For this reason, they significantly undermine public health. DDT may well be the best method for combating serious health risks in many countries. With respect to DDT, precautionary steps are both mandated and forbidden by the idea of precaution in its strong forms. To know what to do, we need to identify the probability and magnitude of the harms created and prevented by DDT, not to insist on precaution as such.
Similar issues are raised by the continuing debate over whether certain antidepressants impose a (small) risk of breast cancer. A precautionary approach might seem to argue against the use of these drugs because of their carcinogenic potential. But the failure to use those antidepressants might well impose risks of its own, certainly psychological and possibly even physical (because psychological ailments are sometimes associated with physical ones as well). Or consider the decision by the Soviet Union to evacuate and relocate more than 270,000 people in response to the risk of adverse effects from the Chernobyl fallout. It is hardly clear that on balance this massive relocation project was justified on health grounds: "A comparison ought to have been made between the psychological and medical burdens of this measure (anxiety, psychosomatic diseases, depression and suicides) and the harm that may have been prevented." More generally, a sensible government might want to ignore the small risks associated with low levels of radiation, on the ground that precautionary responses are likely to cause fear that outweighs any health benefits from those responses—and fear is not good for your health.
More (#1) from Worst-Case Scenarios:
But the Principle of Intergenerational Neutrality does not mean that the present generation should refuse to discount the future, or should impose great sacrifices on itself for the sake of those who will come later. If human history is any guide, the future will be much richer than the present; and it makes no sense to say that the relatively impoverished present should transfer its resources to the far wealthier future. And if the present generation sacrifices itself by forgoing economic growth, it is likely to hurt the future too, because long-term economic growth is likely to produce citizens who live healthier, longer, and better lives. I shall have something to say about what intergenerational neutrality actually requires, and about the complex relationship between that important ideal and the disputed practice of "discounting" the future.
But at least so far in the book, Sunstein doesn’t mention the obvious rejoinder about investing now to prevent existential catastrophe.
Anyway, another quote:
But careful analysis and economic rationality were not the whole story: The nation’s attention was also riveted by a vivid image, the ominous and growing "ozone hole" over Antarctica. Ordinary people could easily understand the idea that the earth was losing a kind of "protective shield," one that operated as a safeguard against skin cancer, a dreaded condition.
From Gleick’s Chaos:
Those who recognized chaos in the early days agonized over how to shape their thoughts and findings into publishable form. Work fell between disciplines — for example, too abstract for physicists yet too experimental for mathematicians. To some the difficulty of communicating the new ideas and the ferocious resistance from traditional quarters showed how revolutionary the new science was. Shallow ideas can be assimilated; ideas that require people to reorganize their picture of the world provoke hostility.
More (#3) from Chaos:
Hubbard set about sampling the infinitude of points that make up the plane. He had his computer sweep from point to point, calculating the flow of Newton’s method for each one, and color-coding the results. Starting points that led to one solution were all colored blue. Points that led to the second solution were red, and points that led to the third were green. In the crudest approximation, he found, the dynamics of Newton’s method did indeed divide the plane into three pie wedges. Generally the points near a particular solution led quickly into that solution. But systematic computer exploration showed complicated underlying organization that could never have been seen by earlier mathematicians, able only to calculate a point here and a point there. While some starting guesses converged quickly to a root, others bounced around seemingly at random before finally converging to a solution. Sometimes it seemed that a point could fall into a cycle that would repeat itself forever—a periodic cycle—without ever reaching one of the three solutions.
As Hubbard pushed his computer to explore the space in finer and finer detail, he and his students were bewildered by the picture that began to emerge. Instead of a neat ridge between the blue and red valleys, for example, he saw blotches of green, strung together like jewels. It was as if a marble, caught between the conflicting tugs of two nearby valleys, would end up in the third and most distant valley instead. A boundary between two colors never quite forms. On even closer inspection, the line between a green blotch and the blue valley proved to have patches of red. And so on—the boundary finally revealed to Hubbard a peculiar property that would seem bewildering even to someone familiar with Mandelbrot’s monstrous fractals: no point serves as a boundary between just two colors. Wherever two colors try to come together, the third always inserts itself, with a series of new, self-similar intrusions. Impossibly, every boundary point borders a region of each of the three colors.
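Hubbard's experiment is easy to reproduce in miniature. Below is a hedged sketch assuming the classic polynomial z³ = 1 (the passage does not name the equation; the function and variable names are mine). It iterates Newton's method from a starting point and reports which of the three cube roots of unity it converges to, the same root-labeling that Hubbard color-coded:

```python
# Sketch of Hubbard's basin-of-attraction experiment for f(z) = z^3 - 1
# (my choice of polynomial; the quoted passage doesn't specify it).
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # the three solutions

def newton_basin(z, steps=100, tol=1e-9):
    """Index (0, 1, or 2) of the root reached from z, or None (e.g. a cycle)."""
    for _ in range(steps):
        if z == 0:  # derivative vanishes here; the Newton step is undefined
            return None
        z = z - (z**3 - 1) / (3 * z**2)  # Newton step: z - f(z)/f'(z)
        for i, root in enumerate(ROOTS):
            if abs(z - root) < tol:
                return i
    return None

print(newton_basin(1.1 + 0.1j))   # -> 0: a start near z = 1 falls into its basin
print(newton_basin(-2.0 + 0.0j))  # -> 0: real starts can only reach the real root
```

Sweeping this function over a grid of starting points and coloring by the returned index produces exactly the three intertwined basins Gleick describes, where every boundary point borders all three colors.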
And:
"In a structured subject, it is known what is known, what is unknown, what people have already tried and doesn’t lead anywhere. There you have to work on a problem which is known to be a problem, otherwise you get lost. But a problem which is known to be a problem must be hard, otherwise it would already have been solved."
Peitgen shared little of the mathematicians’ unease with the use of computers to conduct experiments. Granted, every result must eventually be made rigorous by the standard methods of proof, or it would not be mathematics. To see an image on a graphics screen does not guarantee its existence in the language of theorem and proof. But the very availability of that image was enough to change the evolution of mathematics. Computer exploration was giving mathematicians the freedom to take a more natural path, Peitgen believed. Temporarily, for the moment, a mathematician could suspend the requirement of rigorous proof. He could go wherever experiments might lead him, just as a physicist could. The numerical power of computation and the visual cues to intuition would suggest promising avenues and spare the mathematician blind alleys. Then, new paths having been found and new objects isolated, a mathematician could return to standard proofs. "Rigor is the strength of mathematics," Peitgen said. "That we can continue a line of thought which is absolutely guaranteed — mathematicians never want to give that up. But you can look at situations that can be understood partially now and with rigor perhaps in future generations. Rigor, yes, but not to the extent that I drop something just because I can’t do it now."
More (#2) from Chaos:
And:
That view of science works best when a well-defined discipline awaits the resolution of a well-defined problem. No one misunderstood the discovery of the molecular structure of DNA, for example. But the history of ideas is not always so neat. As nonlinear science arose in odd corners of different disciplines, the flow of ideas failed to follow the standard logic of historians. The emergence of chaos as an entity unto itself was a story not only of new theories and new discoveries, but also of the belated understanding of old ideas. Many pieces of the puzzle had been seen long before — by Poincaré, by Maxwell, even by Einstein — and then forgotten. Many new pieces were understood at first only by a few insiders. A mathematical discovery was understood by mathematicians, a physics discovery by physicists, a meteorological discovery by no one. The way ideas spread became as important as the way they originated.
Each scientist had a private constellation of intellectual parents. Each had his own picture of the landscape of ideas, and each picture was limited in its own way. Knowledge was imperfect. Scientists were biased by the customs of their disciplines or by the accidental paths of their own educations. The scientific world can be surprisingly finite. No committee of scientists pushed history into a new channel — a handful of individuals did it, with individual perceptions and individual goals.
And:
"And if I tell him, I don’t care, what interests me is this shape, the mathematics of the shape and the evolution, the bifurcation from this shape to that shape to this shape, he will tell me, that’s not physics, you are doing mathematics. Even today he will tell me that. Then what can I say? Yes, of course, I am doing mathematics. But it is relevant to what is around us. That is nature, too."
More (#1) from Chaos:
"The politics affected the style in a sense which I later came to regret. I was saying, ‘It’s natural to…, It’s an interesting observation that….’ Now, in fact, it was anything but natural, and the interesting observation was in fact the result of very long investigations and search for proof and self-criticism. It had a philosophical and removed attitude which I felt was necessary to get it accepted. The politics was that, if I said I was proposing a radical departure, that would have been the end of the readers’ interest.
"Later on, I got back some such statements, people saying, ‘It is natural to observe…’ That was not what I had bargained for."
Looking back, Mandelbrot saw that scientists in various disciplines responded to his approach in sadly predictable stages. The first stage was always the same: Who are you and why are you interested in our field? Second: How does it relate to what we have been doing, and why don’t you explain it on the basis of what we know? Third: Are you sure it’s standard mathematics? (Yes, I’m sure.) Then why don’t we know it? (Because it’s standard but very obscure.)
Mathematics differs from physics and other applied sciences in this respect. A branch of physics, once it becomes obsolete or unproductive, tends to be forever part of the past. It may be a historical curiosity, perhaps the source of some inspiration to a modern scientist, but dead physics is usually dead for good reason. Mathematics, by contrast, is full of channels and byways that seem to lead nowhere in one era and become major areas of study in another. The potential application of a piece of pure thought can never be predicted. That is why mathematicians value work in an aesthetic way, seeking elegance and beauty as artists do. It is also why Mandelbrot, in his antiquarian mode, came across so much good mathematics that was ready to be dusted off.
So the fourth stage was this: What do people in these branches of mathematics think about your work? (They don’t care, because it doesn’t add to the mathematics. In fact, they are surprised that their ideas represent nature.)
From Lewis’ The Big Short:
And that’s pretty much how I imagined it; what I never imagined is that the future reader might look back on any of this, or on my own peculiar experience, and say, "How quaint." How innocent. Not for a moment did I suspect that the financial 1980s would last for two full decades longer, or that the difference in degree between Wall Street and ordinary economic life would swell to a difference in kind. That a single bond trader might be paid $47 million a year and feel cheated. That the mortgage bond market invented on the Salomon Brothers trading floor, which seemed like such a good idea at the time, would lead to the most purely financial economic disaster in history. That exactly twenty years after Howie Rubin became a scandalous household name for losing $250 million, another mortgage bond trader named Howie, inside Morgan Stanley, would lose $9 billion on a single mortgage trade, and remain essentially unknown, without anyone beyond a small circle inside Morgan Stanley ever hearing about what he’d done, or why.
...In the two decades after I left, I waited for the end of Wall Street as I had known it. The outrageous bonuses, the endless parade of rogue traders, the scandal that sank Drexel Burnham, the scandal that destroyed John Gutfreund and finished off Salomon Brothers, the crisis following the collapse of my old boss John Meriwether’s Long-Term Capital Management, the Internet bubble: Over and over again, the financial system was, in some narrow way, discredited. Yet the big Wall Street banks at the center of it just kept on growing, along with the sums of money that they doled out to twenty-six-year-olds to perform tasks of no obvious social utility. The rebellion by American youth against the money culture never happened. Why bother to overturn your parents’ world when you can buy it and sell off the pieces?
At some point, I gave up waiting. There was no scandal or reversal, I assumed, sufficiently great to sink the system.
More (#4) from The Big Short:
And:
More (#3) from The Big Short:
And:
The alarmingly named Avant! Corporation was a good example. He’d found it searching for the word "accepted" in news stories. He knew that, standing on the edge of the playing field, he needed to find unorthodox ways to tilt it to his advantage, and that usually meant finding unusual situations the world might not be fully aware of. "I wasn’t searching for a news report of a scam or fraud per se," he said. "That would have been too backward-looking, and I was looking to get in front of something. I was looking for something happening in the courts that might lead to an investment thesis. An argument being accepted, a plea being accepted, a settlement being accepted by the court." A court had accepted a plea from a software company called the Avant! Corporation. Avant! had been accused of stealing from a competitor the software code that was the whole foundation of Avant!’s business. The company had $100 million in cash in the bank, was still generating $100 million a year of free cash flow—and had a market value of only $250 million! Michael Burry started digging; by the time he was done, he knew more about the Avant! Corporation than any man on earth. He was able to see that even if the executives went to jail (as they did) and the fines were paid (as they were), Avant! would be worth a lot more than the market then assumed. Most of its engineers were Chinese nationals on work visas, and thus trapped—there was no risk that anyone would quit before the lights were out. To make money on Avant!’s stock, however, he’d probably have to stomach short-term losses, as investors puked up shares in horrified response to negative publicity.
And:
Once again they shocked and delighted him: Goldman Sachs e-mailed him a great long list of crappy mortgage bonds to choose from. "This was shocking to me, actually," he says. "They were all priced according to the lowest rating from one of the big three ratings agencies." He could pick from the list without alerting them to the depth of his knowledge. It was as if you could buy flood insurance on the house in the valley for the same price as flood insurance on the house on the mountaintop.
More (#2) from The Big Short:
And:
Eisman asked, "Are any regulators interested in this?"
"No," said Sandler.
"That’s when I decided the system was really, ‘Fuck the poor.’"
And:
Heh-heh-heh, c’mon, we’d never do that, the trader started to say, but Danny, though perfectly polite, was insistent.
We both know that unadulterated good things like this trade don’t just happen between little hedge funds and big Wall Street firms. I’ll do it, but only after you explain to me how you are going to fuck me. And the salesman explained how he was going to fuck him. And Danny did the trade.
More (#1) from The Big Short:
From that moment, Meredith Whitney became E. F. Hutton: When she spoke, people listened. Her message was clear: If you want to know what these Wall Street firms are really worth, take a cold, hard look at these crappy assets they’re holding with borrowed money, and imagine what they’d fetch in a fire sale. The vast assemblages of highly paid people inside them were worth, in her view, nothing. All through 2008, she followed the bankers’ and brokers’ claims that they had put their problems behind them with this write-down or that capital raise with her own claim: You’re wrong. You’re still not facing up to how badly you have mismanaged your business. You’re still not acknowledging billions of dollars in losses on subprime mortgage bonds. The value of your securities is as illusory as the value of your people. Rivals accused Whitney of being overrated; bloggers accused her of being lucky. What she was, mainly, was right. But it’s true that she was, in part, guessing. There was no way she could have known what was going to happen to these Wall Street firms, or even the extent of their losses in the subprime mortgage market. The CEOs themselves didn’t know. "Either that or they are all liars," she said, "but I assume they really just don’t know."
Now, obviously, Meredith Whitney didn’t sink Wall Street. She’d just expressed most clearly and most loudly a view that turned out to be far more seditious to the social order than, say, the many campaigns by various New York attorneys general against Wall Street corruption. If mere scandal could have destroyed the big Wall Street investment banks, they would have vanished long ago. This woman wasn’t saying that Wall Street bankers were corrupt. She was saying that they were stupid. These people whose job it was to allocate capital apparently didn’t even know how to manage their own.
And:
What first caught Vinny’s eye were the high prepayments coming in from a sector called "manufactured housing." ("It sounds better than ‘mobile homes.’") Mobile homes were different from the wheel-less kind: Their value dropped, like cars’, the moment they left the store. The mobile home buyer, unlike the ordinary home buyer, couldn’t expect to refinance in two years and take money out. Why were they prepaying so fast? Vinny asked himself. "It made no sense to me. Then I saw that the reason the prepayments were so high is that they were involuntary." "Involuntary prepayment" sounds better than "default." Mobile home buyers were defaulting on their loans, their mobile homes were being repossessed, and the people who had lent them money were receiving fractions of the original loans. "Eventually I saw that all the subprime sectors were either being prepaid or going bad at an incredible rate," said Vinny. "I was just seeing stunningly high delinquency rates in these pools." The interest rate on the loans wasn’t high enough to justify the risk of lending to this particular slice of the American population. It was as if the ordinary rules of finance had been suspended in response to a social problem. A thought crossed his mind: How do you make poor people feel wealthy when wages are stagnant? You give them cheap loans.
To sift every pool of subprime mortgage loans took him six months, but when he was done he came out of the room and gave Eisman the news. All these subprime lending companies were growing so rapidly, and using such goofy accounting, that they could mask the fact that they had no real earnings, just illusory, accounting-driven, ones. They had the essential feature of a Ponzi scheme: To maintain the fiction that they were profitable enterprises, they needed more and more capital to create more and more subprime loans. "I wasn’t actually a hundred percent sure I was right," said Vinny, "but I go to Steve and say, ‘This really doesn’t look good.’ That was all he needed to know. I think what he needed was evidence to downgrade the stock."
The report Eisman wrote trashed all of the subprime originators; one by one, he exposed the deceptions of a dozen companies. "Here is the difference," he said, "between the view of the world they are presenting to you and the actual numbers." The subprime companies did not appreciate his effort. "He created a shitstorm," said Vinny. "All these subprime companies were calling and hollering at him: You’re wrong. Your data’s wrong. And he just hollered back at them, ‘It’s YOUR fucking data!’" One of the reasons Eisman’s report disturbed so many is that he’d failed to give the companies he’d insulted fair warning. He’d violated the Wall Street code. "Steve knew this was going to create a shitstorm," said Vinny. "And he wanted to create the shitstorm. And he didn’t want to be talked out of it. And if he told them, he’d have had all these people trying to talk him out of it."
"We were never able to evaluate the loans before because we never had the data," said Eisman later. "My name was wedded to this industry. My entire reputation had been built on covering these stocks. If I was wrong, that would be the end of the career of Steve Eisman."
Eisman published his report in September 1997, in the middle of what appeared to be one of the greatest economic booms in U.S. history. Less than a year later, Russia defaulted and a hedge fund called Long-Term Capital Management went bankrupt. In the subsequent flight to safety, the early subprime lenders were denied capital and promptly went bankrupt en masse. Their failure was interpreted as an indictment of their accounting practices, which allowed them to record profits before they were realized. No one but Vinny, so far as Vinny could tell, ever really understood the crappiness of the loans they had made. "It made me feel good that there was such inefficiency to this market," he said. "Because if the market catches on to everything, I probably have the wrong job. You can’t add anything by looking at this arcane stuff, so why bother? But I was the only guy I knew who was covering companies that were all going to go bust during the greatest economic boom we’ll ever see in my lifetime. I saw how the sausage was made in the economy and it was really freaky."
From Gleick’s The Information:
...So even a small symbol set could be arranged to express any message at all. However, with a small symbol set, a given message requires a longer string of characters — "more Labour and Time," he wrote. Wilkins did not explain that 25 = 5², nor that three symbols taken in threes (aaa, aab, aac,…) produce twenty-seven possibilities because 3³ = 27. But he clearly understood the underlying mathematics. His last example was a binary code, awkward though this was to express in words:
That word, differences, must have struck Wilkins’s readers (few though they were) as an odd choice. But it was deliberate and pregnant with meaning. Wilkins was reaching for a conception of information in its purest, most general form. Writing was only a special case: "For in the general we must note, That whatever is capable of a competent Difference, perceptible to any Sense, may be a sufficient Means whereby to express the Cogitations." A difference could be "two Bells of different Notes"; or "any Object of Sight, whether Flame, Smoak, &c."; or trumpets, cannons, or drums. Any difference meant a binary choice. Any binary choice began the expressing of cogitations. Here, in this arcane and anonymous treatise of 1641, the essential idea of information theory poked to the surface of human thought, saw its shadow, and disappeared again for four hundred years.
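The counting argument in the Wilkins passage is easy to check with a few lines of Python (a quick sketch for illustration; the helper name is mine, not Gleick's or Wilkins's):

```python
from itertools import product
from math import ceil, log2

# An alphabet of k symbols yields k**n distinct strings of length n,
# so even two symbols suffice to express any message, given enough length.
def num_messages(k: int, n: int) -> int:
    return k ** n

assert num_messages(5, 2) == 25  # twenty-five strings from five symbols taken in pairs
assert num_messages(3, 3) == 27  # three symbols taken in threes: aaa, aab, aac, ...
assert len(list(product("abc", repeat=3))) == 27  # enumerate them explicitly

# Wilkins's binary case: a 24-letter alphabet needs 5-character strings,
# since 2**4 = 16 < 24 <= 32 = 2**5 -- the "more Labour and Time" trade-off.
print(ceil(log2(24)))
```

The printed result is the minimum string length (5) needed to give each of 24 letters a distinct binary codeword.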
More (#1) from The Information:
Information that just two years earlier had taken days to arrive at its destination could now be there—anywhere—in seconds. This was not a doubling or tripling of transmission speed; it was a leap of many orders of magnitude. It was like the bursting of a dam whose presence had not even been known.
And:
Szilárd had thus closed a loop leading to Shannon’s conception of entropy as information. For his part, Shannon did not read German and did not follow Zeitschrift für Physik. "I think actually Szilárd was thinking of this," he said much later, "and he talked to von Neumann about it, and von Neumann may have talked to Wiener about it. But none of these people actually talked to me about it." Shannon reinvented the mathematics of entropy nonetheless.
From Acemoglu & Robinson’s Why Nations Fail:
Santa Ana, son of a colonial official in Veracruz, came to prominence as a soldier fighting for the Spanish in the independence wars. In 1821 he switched sides with Iturbide and never looked back. He became president of Mexico for the first time in May of 1833, though he exercised power for less than a month, preferring to let Valentín Gómez Farías act as president. Gómez Farías’s presidency lasted fifteen days, after which Santa Ana retook power. This was as brief as his first spell, however, and he was again replaced by Gómez Farías, in early July. Santa Ana and Gómez Farías continued this dance until the middle of 1835, when Santa Ana was replaced by Miguel Barragán. But Santa Ana was not a quitter. He was back as president in 1839, 1841, 1844, 1847, and, finally, between 1853 and 1855. In all, he was president eleven times, during which he presided over the loss of the Alamo and Texas and the disastrous Mexican-American War, which led to the loss of what became New Mexico and Arizona. Between 1824 and 1867 there were fifty-two presidents in Mexico, few of whom assumed power according to any constitutionally sanctioned procedure.
More (#2) from Why Nations Fail:
Though the railway to the south was initially designed by the British to rule Sierra Leone, by 1967 its role was economic, transporting most of the country’s exports: coffee, cocoa, and diamonds. The farmers who grew coffee and cocoa were Mende, and the railway was Mendeland’s window to the world. Mendeland had voted hugely for Albert Margai in the 1967 election. Stevens was much more interested in holding on to power than promoting Mendeland’s exports. His reasoning was simple: whatever was good for the Mende was good for the SLPP, and bad for Stevens. So he pulled up the railway line to Mendeland. He then went ahead and sold off the track and rolling stock to make the change as irreversible as possible.
And:
It was all a farce, but also more tragic than the original tragedy, and not only for the hopes that were dashed. Stevens and Kabila, like many other rulers in Africa, would start murdering their opponents and then innocent citizens. Mengistu and the Derg’s policies would bring recurring famine to Ethiopia’s fertile lands. History was repeating itself, but in a very distorted form. It was a famine in Wollo province in 1973 to which Haile Selassie was apparently indifferent that did so much finally to solidify opposition to his regime. Selassie had at least been only indifferent. Mengistu instead saw famine as a political tool to undermine the strength of his opponents. History was not only farcical and tragic, but also cruel to the citizens of Ethiopia and much of sub-Saharan Africa.
The essence of the iron law of oligarchy, this particular facet of the vicious circle, is that new leaders overthrowing old ones with promises of radical change bring nothing but more of the same.
More (#1) from Why Nations Fail:
The same was not true in Mexico. In fact, in 1910, the year in which the Mexican Revolution started, there were only forty-two banks in Mexico, and two of these controlled 60 percent of total banking assets. Unlike in the United States, where competition was fierce, there was practically no competition among Mexican banks. This lack of competition meant that the banks were able to charge their customers very high interest rates, and typically confined lending to the privileged and the already wealthy, who would then use their access to credit to increase their grip over the various sectors of the economy.
And:
Díaz violated people’s property rights, facilitating the expropriation of vast amounts of land, and he granted monopolies and favors to his supporters in all lines of business, including banking. There was nothing new about this behavior. This is exactly what Spanish conquistadors had done, and what Santa Ana did in their footsteps.
And:
In Mexico, Carlos Slim did not make his money by innovation. Initially he excelled in stock market deals, and in buying and revamping unprofitable firms. His major coup was the acquisition of Telmex, the Mexican telecommunications monopoly that was privatized by President Carlos Salinas in 1990. The government announced its intention to sell 51 percent of the voting stock (20.4 percent of total stock) in the company in September 1989 and received bids in November 1990. Even though Slim did not put in the highest bid, a consortium led by his Grupo Carso won the auction. Instead of paying for the shares right away, Slim managed to delay payment, using the dividends of Telmex itself to pay for the stock. What was once a public monopoly now became Slim’s monopoly, and it was hugely profitable.
The economic institutions that made Carlos Slim who he is are very different from those in the United States. If you’re a Mexican entrepreneur, entry barriers will play a crucial role at every stage of your career. These barriers include expensive licenses you have to obtain, red tape you have to cut through, politicians and incumbents who will stand in your way, and the difficulty of getting funding from a financial sector often in cahoots with the incumbents you’re trying to compete against. These barriers can be either insurmountable, keeping you out of lucrative areas, or your greatest friend, keeping your competitors at bay. The difference between the two scenarios is of course whom you know and whom you can influence—and yes, whom you can bribe. Carlos Slim, a talented, ambitious man from a relatively modest background of Lebanese immigrants, has been a master at obtaining exclusive contracts; he managed to monopolize the lucrative telecommunications market in Mexico, and then to extend his reach to the rest of Latin America.
There have been challenges to Slim’s Telmex monopoly. But they have not been successful. In 1996 Avantel, a long-distance phone provider, petitioned the Mexican Competition Commission to check whether Telmex had a dominant position in the telecommunications market. In 1997 the commission declared that Telmex had substantial monopoly power with respect to local telephony, national long-distance calls, and international long-distance calls, among other things. But attempts by the regulatory authorities in Mexico to limit these monopolies have come to nothing. One reason is that Slim and Telmex can use what is known as a recurso de amparo, literally an "appeal for protection." An amparo is in effect a petition to argue that a particular law does not apply to you. The idea of the amparo dates back to the Mexican constitution of 1857 and was originally intended as a safeguard of individual rights and freedoms. In the hands of Telmex and other Mexican monopolies, however, it has become a formidable tool for cementing monopoly power. Rather than protecting people’s rights, the amparo provides a loophole in equality before the law.
Slim has made his money in the Mexican economy in large part thanks to his political connections. When he has ventured into the United States, he has not been successful. In 1999 his Grupo Carso bought the computer retailer CompUSA. At the time, CompUSA had given a franchise to a firm called COC Services to sell its merchandise in Mexico. Slim immediately violated this contract with the intention of setting up his own chain of stores, without any competition from COC. But COC sued CompUSA in a Dallas court. There are no amparos in Dallas, so Slim lost, and was fined $454 million. The lawyer for COC, Mark Werner, noted afterward that "the message of this verdict is that in this global economy, firms have to respect the rules of the United States if they want to come here." When Slim was subject to the institutions of the United States, his usual tactics for making money didn’t work.
From Greenblatt’s The Swerve: How the World Became Modern:
These are the great success stories. Virtually the entire output of many other writers, famous in antiquity, has disappeared without a trace. Scientists, historians, mathematicians, philosophers, and statesmen have left behind some of their achievements—the invention of trigonometry, for example, or the calculation of position by reference to latitude and longitude, or the rational analysis of political power—but their books are gone. The indefatigable scholar Didymus of Alexandria earned the nickname Bronze-Ass (literally, "Brazen-Bowelled") for having what it took to write more than 3,500 books; apart from a few fragments, all have vanished. At the end of the fifth century ce an ambitious literary editor known as Stobaeus compiled an anthology of prose and poetry by the ancient world’s best authors: out of 1,430 quotations, 1,115 are from works that are now lost.
More (#1) from The Swerve:
Most of the stories in the Facetiae are about sex, and they convey, in their clubroom smuttiness, misogyny mingled with both an insider’s contempt for yokels and, on occasion, a distinct anticlerical streak. There is the woman who tells her husband that she has two cunts (duos cunnos), one in front that she will share with him, the other behind that she wants to give, pious soul that she is, to the Church. The arrangement works because the parish priest is only interested in the share that belongs to the Church. There is the clueless priest who in a sermon against lewdness (luxuria) describes practices that couples are using to heighten sexual pleasure; many in the congregation take note of the suggestions and go home to try them out for themselves. There are dumb priests who, baffled by the fact that in confession almost all the women say that they have been faithful in matrimony and almost all the men confess to extramarital affairs, cannot for the life of them figure out who the women are with whom the men have sinned. There are many tales about seductive friars and lusty hermits, about Florentine merchants nosing out profits, about female medical woes magically cured by lovemaking, about cunning tricksters, bawling preachers, unfaithful wives, and foolish husbands. There is the fellow humanist—identified by name as Francesco Filelfo — who dreams that he puts his finger into a magic ring that will keep his wife from ever being unfaithful to him and wakes to find that he has his finger in his wife’s vagina. There is the quack doctor who claims that he can produce children of different types — merchants, soldiers, generals — depending on how far he pushes his cock in. A foolish rustic, bargaining for a soldier, hands his wife over to the scoundrel, but then, thinking himself sly, comes out of hiding and hits the quack’s ass to push his cock further in: "Per Sancta Dei Evangelia," the rustic shouts triumphantly, "hic erit Papa!" "This one is going to be pope!"
The Facetiae was a huge success.
From Aid’s The Secret Sentry:
NSA immediately began intercepting al-Hada’s telephone calls. This fortuitous break could not have come at a better time for the U.S. intelligence community, since NSA had just lost its access to bin Laden’s satellite phone traffic. For the next three years, the telephone calls coming in and out of the al-Hada house in Sana’a were the intelligence community’s principal window into what bin Laden and al Qaeda were up to. The importance of the intercepted al-Hada telephone calls remains today a highly classified secret within the intelligence community, which continues to insist that al-Hada be referred to only as a "suspected terrorist facility in the Middle East" in declassified reports regarding the 9/11 intelligence disaster.
In January 1999, NSA intercepted a series of phone calls to the al-Hada house. (The agency later identified Pakistan as their point of origin.) NSA analysts found only one item of intelligence interest in the transcripts of these calls—references to a number of individuals believed to be al Qaeda operatives, one of whom was a man named Nawaf al-Hazmi. NSA did not issue any intelligence reports concerning the contents of these intercepts because al-Hazmi and the other individuals mentioned in the intercept were not known to NSA’s analysts at the time. Almost three years later, al-Hazmi was one of the 9/11 hijackers who helped crash the Boeing airliner into the Pentagon. That al-Hazmi succeeded in getting into the United States using his real name after being prominently mentioned in an intercepted telephone call with a known al Qaeda operative is but one of several huge mistakes made by the U.S. intelligence community that investigators learned about only after 9/11.
More (#6) from The Secret Sentry:
As the agency’s size grew at a staggering pace, so did the importance of its intelligence reporting. The amount of reporting produced by NSA during the 1980s was astronomical. According to former senior American intelligence officials, on some days during the 1980s SIGINT accounted for over 70 percent of the material contained in the CIA’s daily intelligence report to President Reagan. Former CIA director (now Secretary of Defense) Robert Gates stated, "The truth is, until the late 1980s, U.S. signals intelligence was way out in front of the rest of the world."
But NSA’s SIGINT efforts continued to produce less information because of a dramatic increase in worldwide telecommunications traffic volumes, which NSA had great difficulty coping with. It also had to deal with the growing availability and complexity of new telecommunications technologies, such as cheaper and more sophisticated encryption systems. By the late 1980s, the number of intercepted messages flowing into NSA headquarters at Fort Meade had increased to the point that the agency’s staff and computers were only able to process about 20 percent of the incoming materials. These developments were to come close to making NSA deaf, dumb, and blind in the decade that followed.
And:
It took five months for the United States to move resources by land and sea to implement Desert Storm’s ground attack by three hundred thousand coalition troops.
And:
More (#5) from The Secret Sentry:
And:
The damage that Pelton did was massive. He compromised the joint NSA–U.S. Navy undersea-cable tapping operation in the Sea of Okhotsk called Ivy Bells, which was producing vast amounts of enormously valuable, unencrypted, and incredibly detailed intelligence about the Soviet Pacific Fleet, information that might give the United States a clear, immediate warning of a Soviet attack. In 1981, a Soviet navy salvage ship lifted the Ivy Bells pod off the seafloor and took it to Moscow to be studied by Soviet electronics experts. It now resides in a forlorn corner of the museum of the Russian security service in the Lubyanka, in downtown Moscow.
Even worse, Pelton betrayed virtually every sensitive SIGINT operation that NSA and Britain’s GCHQ were then conducting against the Soviet Union, including the seven most highly classified compartmented intelligence operations that A Group was then engaged in. The programs were so sensitive that Charles Lord, the NSA deputy director of operations at the time, called them the "Holiest of Holies." He told the Russians about the ability of NSA’s Vortex SIGINT satellites to intercept sensitive communications deep inside the USSR that were being carried by microwave radio-relay systems. Pelton also revealed the full extent of the intelligence being collected by the joint NSA-CIA Broadside listening post in the U.S. embassy in Moscow. Within months of Pelton being debriefed in Vienna, the Soviets intensified their jamming of the frequencies being monitored by the Moscow embassy listening post, and the intelligence "take" coming out of Broadside fell to practically nothing. Pelton also told the Russians about virtually every Russian cipher machine that NSA’s cryptanalysts in A Group had managed to crack in the late 1970s. NSA analysts had wondered why at the height of the Polish crisis in 1981 they had inexplicably lost their ability to exploit key Soviet and Polish communications systems, which had suddenly gone silent without warning. Pelton also told the Russians about a joint CIA-NSA operation wherein CIA operatives placed fake tree stumps containing sophisticated electronic eavesdropping devices near Soviet military installations around Moscow. The data intercepted by these devices was either relayed electronically to the U.S. embassy or sent via burst transmission to the United States via communication satellites.
In December 1985, Pelton was arrested and charged in federal court in Baltimore, with six counts of passing classified information to the Soviet Union. After a brief trial, in June 1986 Pelton was found guilty and sentenced to three concurrent life terms in prison.
More (#4) from The Secret Sentry:
Shortly after Bobby Inman became the director of NSA in 1977, cryptanalysts working for the agency’s Soviet code-breaking unit, A Group, headed by Ann Caracristi, succeeded in solving a number of Soviet cipher systems that gave NSA access to high-level Soviet communications. Credit for this accomplishment goes to a small and ultra-secretive unit called the Rainfall Program Management Division, headed from 1974 to 1978 by a native New Yorker named Lawrence Castro. Holding bachelor’s and master’s degrees in electrical engineering from the Massachusetts Institute of Technology, Castro got into the SIGINT business in 1965 when he joined ASA as a young second lieutenant. In 1967, he converted to civilian status and joined NSA as an engineer in the agency’s Research and Engineering Organization, where he worked on techniques for solving high-level Russian cipher systems.
By 1976, thanks in part to some mistakes made by Russian cipher operators, NSA cryptanalysts were able to reconstruct some of the inner workings of the Soviet military’s cipher systems. In 1977, NSA suddenly was able to read at least some of the communications traffic passing between Moscow and the Russian embassy in Washington, including one message from Russian ambassador Anatoly Dobrynin to the Soviet Foreign Ministry repeating the advice given him by Henry Kissinger on how to deal with the new Carter administration in the still-ongoing SALT II negotiations.
And:
As opposition to the Soviet-supported Afghan regime in Kabul headed by President Nur Mohammed Taraki mounted in late 1978 and early 1979, the Soviets continued to increase their military presence in the country, until it had grown to five Russian generals and about a thousand military advisers. A rebellion in the northeastern Afghan city of Herat in mid-March 1979 in which one hundred Russian military and civilian personnel were killed was put down by Afghan troops from Kandahar, but not before an estimated three thousand to five thousand Afghans had died in the fighting.
At this point, satellite imagery and SIGINT detected unusual activity by the two Soviet combat divisions stationed along the border with Afghanistan.
The CIA initially regarded these units as engaged in military exercises, but these "exercises" fit right into a scenario for a Soviet invasion. On March 26–27, SIGINT detected a steady stream of Russian reinforcements and heavy equipment being flown to Bagram airfield, north of Kabul, and by June, the intelligence community estimated that the airlift had brought in a total of twenty-five hundred personnel, which included fifteen hundred airborne troops and additional "advisers" as well as the crews of a squadron of eight AN-12 military transport aircraft now based in-country. SIGINT revealed that the Russians were also secretly setting up a command-and-control communications network inside Afghanistan; it would be used to direct the Soviet intervention in December 1979.
In the last week of August and the first weeks of September, satellite imagery and SIGINT revealed preparations for Soviet operations obviously aimed at Afghanistan, including forward deployment of Soviet IL-76 and AN-12 military transport aircraft that were normally based in the European portion of the USSR.
So clear were all these indications that CIA director Turner sent a Top Secret Umbra memo to the NSC on September 14 warning, "The Soviet leaders may be on the threshold of a decision to commit their own forces to prevent the collapse of the Taraki regime and protect their sizeable stake in Afghanistan. Small Soviet combat units may have already arrived in the country."
On September 16, President Taraki was deposed in a coup d’état, and his pro-Moscow deputy, Hafizullah Amin, took his place as the leader of Afghanistan.
Over the next two weeks, American reconnaissance satellites and SIGINT picked up increased signs of Soviet mobilization, including three divisions on the border and the movement of many Soviet military transport aircraft from their home bases to air bases near the barracks of two elite airborne divisions, strongly suggesting an invasion was imminent.
On September 28, the CIA concluded that "in the event of a breakdown of control in Kabul, the Soviets would be likely to deploy one or more Soviet airborne divisions to the Kabul vicinity to protect Soviet citizens as well as to ensure the continuance of some pro-Soviet regime in the capital." Then, in October, SIGINT detected the call-up of thousands of Soviet reservists in the Central Asian republics.
Throughout November and December, NSA monitored and the CIA reported on virtually every move made by Soviet forces. The CIA advised the White House on December 19 that the Russians had perhaps as many as three airborne battalions at Bagram, and NSA predicted on December 22, three full days before the first Soviet troops crossed the Soviet-Afghan border, that the Russians would invade Afghanistan within the next seventy-two hours.
NSA’s prediction was right on the money. The Russians had an ominous Christmas present for Afghanistan, and NSA unwrapped it. Late on Christmas Eve, Russian linguists at the U.S. Air Force listening posts at Royal Air Force Chicksands, north of London, and San Vito dei Normanni Air Station, in southern Italy, detected the takeoff from air bases in the western USSR of the first of 317 Soviet military transport flights carrying elements of two Russian airborne divisions and heading for Afghanistan; on Christmas morning, the CIA issued a final intelligence report saying that the Soviets had prepared for a massive intervention and might "have started to move into that country in force today." SIGINT indicated that a large force of Soviet paratroopers was headed for Afghanistan—and then, at six p.m. Kabul time, it ascertained that the first of the Soviet IL-76 and AN-22 military transport aircraft had touched down at Bagram Air Base and the Kabul airport carrying the first elements of the 103rd Guards Airborne Division and an independent parachute regiment. Three days later, the first of twenty-five thousand troops of Lieutenant General Yuri Vladimirovich Tukharinov’s Fortieth Army began crossing the Soviet-Afghan border.
The studies done after the Afghan invasion all characterized the performance of the U.S. intelligence community as an "intelligence success story." NSA’s newfound access to high-level Soviet communications enabled the agency to accurately monitor and report quickly on virtually every key facet of the Soviet military’s activities. As we shall see in the next chapter, Afghanistan may have been the "high water mark" for NSA.
More (#3) from The Secret Sentry:
Decades later, at a Central Intelligence Agency conference on Venona, Meredith Gardner, an intensely private and taciturn man, did not vent his feelings about Weisband, even though he had done grave damage to Gardner’s work on Venona. But Gardner’s boss, Frank Rowlett, was not so shy in an interview before his death, calling Weisband "the traitor that got away."
Unfortunately, internecine warfare within the upper echelons of the U.S. intelligence community at the time got in the way of putting stronger security safeguards into effect— despite the damage that a middle-level employee like Weisband had done to America’s SIGINT effort. Four years later, a 1952 review found that "very little had been done" to implement the 1948 recommendations for strengthening security practices within the U.S. cryptologic community.
And:
By April 24, 1975, even the CIA admitted the end was near. Colby delivered the bad news to President Gerald Ford, telling him that "the fate of the Republic of Vietnam is sealed, and Saigon faces imminent military collapse."
Even when enemy troops and tanks overran the major South Vietnamese military base at Bien Hoa, outside Saigon, on April 26, Martin still refused to accept that Saigon was doomed. On April 28, Glenn met with the ambassador carrying a message from Allen ordering Glenn to pack up his equipment and evacuate his remaining staff immediately. Martin refused to allow this. The following morning, the military airfield at Tan Son Nhut fell, cutting off the last air link to the outside.
But the thousands of South Vietnamese SIGINT officers and intercept operators, including their chief, General Pham Van Nhon, never got out. The North Vietnamese captured the entire twenty-seven-hundred-man organization intact as well as all their equipment. An NSA history notes, "Many of the South Vietnamese SIGINTers undoubtedly perished; others wound up in reeducation camps. In later years a few began trickling into the United States under the orderly departure program. Their story is yet untold." By any measure, it was an inglorious end to NSA’s fifteen-year involvement in the Vietnam War, one that still haunts agency veterans to this day.
More (#2) from The Secret Sentry:
And:
Less than sixty days later, another disaster hit the agency. During the week of January 23, 2000, the main SIGINT processing computer at NSA collapsed and for four days could not be restarted because of a critical software anomaly. The result was an intelligence blackout, with no intelligence reporting coming out of Fort Meade for more than seventy-two hours. A declassified NSA report notes, "As one result, the President’s Daily Briefing—60% of which is normally based on SIGINT— was reduced to a small portion of its typical size."
And:
Black Friday was an unmitigated disaster, inflicting massive and irreparable damage on the Anglo-American SIGINT organizations’ efforts against the USSR, killing off virtually all of the productive intelligence sources that were then available to them regarding what was going on inside the Soviet Union and rendering useless most of four years’ hard work by thousands of American and British cryptanalysts, linguists, and traffic analysts. The loss of so many critically important high-level intelligence sources in such a short space of time was, as NSA historians have aptly described it, "perhaps the most significant intelligence loss in U.S. history." And more important, it marked the beginning of an eight-year period when reliable intelligence about what was occurring inside the USSR was practically nonexistent.
More (#1) from The Secret Sentry:
In June, intercepts led to the arrest of two bin Laden operatives who were planning to attack U.S. military installations in Saudi Arabia as well as another one planning an attack on the U.S. embassy in Paris. On June 22, U.S. military forces in the Persian Gulf and the Middle East were once again placed on alert after NSA intercepted a conversation between two al Qaeda operatives in the region, which indicated that "a major attack was imminent." All U.S. Navy ships docked in Bahrain, homeport of the U.S. Fifth Fleet, were ordered to put to sea immediately.
These NSA intercepts scared the daylights out of both the White House’s "terrorism czar," Richard Clarke, and CIA director George Tenet. Tenet told Clarke, "It’s my sixth sense, but I feel it coming. This is going to be the big one." On Thursday, June 28, Clarke warned National Security Advisor Condoleezza Rice that al Qaeda activity had "reached a crescendo," strongly suggesting that an attack was imminent. That same day, the CIA issued what was called an Alert Memorandum, which stated that the latest intelligence indicated the probability of imminent al Qaeda attacks that would "have dramatic consequences on governments or cause major casualties."
But many senior officials in the Bush administration did not share Clarke and Tenet’s concerns, notably Secretary of Defense Donald Rumsfeld, who distrusted the material coming out of the U.S. intelligence community. Rumsfeld thought this traffic might well be a "hoax" and asked Tenet and NSA to check the veracity of the al Qaeda intercepts. At NSA director Hayden’s request, Bill Gaches, the head of NSA’s counterterrorism office, reviewed all the intercepts and reported that they were genuine al Qaeda communications.
But unbeknownst to Gaches’s analysts at NSA, most of the 9/11 hijackers were already in the United States busy completing their final preparations. Calls from operatives in the United States were routed through the Ahmed al-Hada "switchboard" in Yemen, but apparently none of these calls were intercepted by NSA. Only after 9/11 did the FBI obtain the telephone billing records of the hijackers during their stay in the United States. These records indicated that the hijackers had made a number of phone calls to numbers known by NSA to have been associated with al Qaeda activities, including that of al-Hada.
Unfortunately, NSA had taken the legal position that intercepting calls from abroad to individuals inside the United States was the responsibility of the FBI. NSA had been badly burned in the past when Congress had blasted it for illegal domestic intercepts, which had led to the 1978 Foreign Intelligence Surveillance Act (FISA). NSA could have gone to the Foreign Intelligence Surveillance Court (FISC) for warrants to monitor communications between terrorist suspects in the United States and abroad but feared this would violate U.S. laws.
The ongoing argument about this responsibility between NSA and the FBI created a yawning intelligence gap, which al Qaeda easily slipped through, since there was no effective coordination between the two agencies. One senior NSA official admitted after the 9/11 attacks, "Our cooperation with our foreign allies is a helluva lot better than with the FBI."
While NSA and the FBI continued to squabble, the tempo of al Qaeda intercepts mounted during the first week of July 2001. A series of SIGINT intercepts produced by NSA in early July allowed American and allied intelligence services to disrupt a series of planned al Qaeda terrorist attacks in Paris, Rome, and Istanbul. On July 10, Tenet and the head of the CIA’s Counterterrorism Center, J. Cofer Black, met with National Security Advisor Rice to underline how seriously they took the chatter being picked up by NSA. Both Tenet and Black came away from the meeting believing that Rice did not take their warnings seriously.
Clarke and Tenet also encountered continuing skepticism at the Pentagon from Rumsfeld and his deputy, Paul Wolfowitz. Both contended that the spike in traffic was a hoax and a diversion. Steve Cambone, the undersecretary of defense for intelligence, asked Tenet if he had "considered the possibility that al-Qa’ida’s threats were just a grand deception, a clever ploy to tie up our resources and expend our energies on a phantom enemy that lacked both the power and the will to carry the battle to us."
In August 2001, either NSA or Britain’s GCHQ intercepted a telephone call from one of bin Laden’s chief lieutenants, Abu Zubaida, to an al Qaeda operative believed to have been in Pakistan. The intercept centered on an operation that was to take place in September. At about the same time, bin Laden telephoned an associate inside Afghanistan and discussed the upcoming operation. Bin Laden reportedly praised the other party to the conversation for his role in planning the operation. For some reason, these intercepts were reportedly never forwarded to intelligence consumers, although this contention is strongly denied by NSA officials. Just prior to the September 11, 2001, bombings, several European intelligence services reportedly intercepted a telephone call that bin Laden made to his wife, who was living in Syria, asking her to return to Afghanistan immediately.
In the seventy-two hours before 9/11, four more NSA intercepts suggested that a terrorist attack was imminent. But NSA did not translate or disseminate any of them until the day after 9/11. In one of the two most significant, one of the speakers said, "The big match is about to begin." In the other, another unknown speaker was overheard saying that tomorrow is "zero hour."
From Mazzetti’s The Way of the Knife:
The cell phone in the back of the Land Cruiser was beaming its signal into the skies, and Gray Fox operatives sent a flash message to analysts at the National Security Agency’s sprawling headquarters, at Fort Meade, Maryland. Separately, the CIA had dispatched an armed Predator from its drone base in Djibouti, just across the Red Sea from Yemen. As the Predator moved into position above the Land Cruiser, an analyst at Fort Meade heard al-Harethi’s voice over the cell phone, barking directions to the driver of the four-by-four. With confirmation that al-Harethi was in the truck, the CIA was now authorized to fire a missile at the vehicle. The missile came off the Predator drone and destroyed the truck, killing everyone inside. Qaed Salim Sinan al-Harethi was eventually identified in the rubble by a distinguishing mark on one of his legs, which was found at the scene, severed from his body.
President Saleh’s government was quick to issue a cover story: The truck had been carrying a canister of gas that triggered an explosion. But inside the Counterterrorist Center, the importance of the moment was not lost. It was the first time since the September 11 attacks that the CIA had carried out a targeted killing outside a declared war zone. Using the sweeping authority President Bush had given to the CIA in September 2001, clandestine officers had methodically gathered information about al-Harethi’s movements and then coolly incinerated his vehicle with an antitank missile.
More (#5) from The Way of the Knife:
He didn’t return to Sana’a immediately. On October 14, two weeks after CIA drones killed his father, Abdulrahman al-Awlaki was sitting with friends at an open-air restaurant near Azzan, a town in Shabwa province. From a distance, faint at first, came the familiar buzzing sound. Then, missiles tore through the air and hit the restaurant. Within seconds, nearly a dozen dead bodies were strewn in the dirt. One of them was Abdulrahman al-Awlaki. Hours after the news of his death was reported, the teenager’s Facebook page was turned into a memorial.
American officials have never discussed the operation publicly, but they acknowledge in private that Abdulrahman al-Awlaki was killed by mistake. The teenager had not been on any target list. The intended target of the drone strike was Ibrahim al-Banna, an Egyptian leader of AQAP. American officials had gotten information that al-Banna was eating at the restaurant at the time of the strike, but the intelligence turned out to be wrong. Al-Banna was nowhere near the location of the missile strike. Abdulrahman al-Awlaki was in the wrong place at the wrong time.
Although the strike remains classified, several American officials said that the drones that killed the boy were not, like those that killed his father, operated by the CIA. Instead, Abdulrahman al-Awlaki was a victim of the parallel drone program run by the Pentagon’s Joint Special Operations Command, which had continued even after the CIA joined the manhunt in Yemen. The CIA and the Pentagon had converged on the killing grounds of one of the world’s poorest and most desolate countries, running two distinct drone wars. The CIA maintained one target list, and JSOC kept another. Both were in Yemen carrying out nearly the exact same mission. Ten years after Donald Rumsfeld first tried to wrest control of the new war from American spies, the Pentagon and CIA were conducting the same secret missions at the ends of the earth.
And:
At one point in the court proceeding, an exasperated Judge Merrick Garland pointed out the absurdity of the CIA’s position, in light of the fact that both President Obama and White House counterterrorism adviser John Brennan had spoken publicly about drones. "If the CIA is the emperor," he told the CIA’s lawyer, "you’re asking us to say that the emperor has clothes even when the emperor’s bosses say he doesn’t."
More (#4) from The Way of the Knife:
But the Pakistanis did not act. The trucks sat in North Waziristan for two months, as operatives from the Haqqani Network turned them into suicide bombs powerful enough to kill hundreds of people. American intelligence about the location of the trucks remained murky, but Admiral Mullen was certain that, given the ISI’s history of contacts with the Haqqanis, Pakistani spies would be able to put a stop to any attack. By September 9, 2011, the trucks were moving toward Afghanistan, and the top American commander in the region, General John Allen, urged General Kayani to stop the trucks during a trip to Islamabad. Kayani told Allen he would "make a phone call" to prevent any imminent assault, an offer that raised eyebrows because it seemed to indicate a particularly close relationship between the Haqqanis and Pakistan’s security apparatus.
Then, on the eve of the tenth anniversary of the attacks on the World Trade Center and the Pentagon, one of the trucks pulled up next to the outer wall of a U.S. military base in Wardak Province, in eastern Afghanistan. The driver detonated the explosives inside the vehicle and the blast ripped open the wall to the base. The explosion wounded more than seventy American Marines inside the base, and spiraling shrapnel killed an eight-year-old Afghan girl standing half a mile away.
The attack infuriated Mullen and convinced him that General Kayani had no sincere interest in curbing his military’s ties to militant groups like the Haqqanis. Other top American officials had been convinced of this years earlier, but Mullen had believed that Kayani was a different breed of Pakistani general, a man who saw the ISI’s ties to the Taliban, the Haqqani Network, and Lashkar-e-Taiba as nothing more than a suicide pact. But the Wardak bombing was, for Mullen, proof that Pakistan was playing a crooked and deadly game.
Days after the bombing—and immediately after the Haqqani Network launched another brazen attack, this time on the American-embassy compound in Kabul—Admiral Mullen went to Capitol Hill to give his final congressional testimony as chairman of the Joint Chiefs of Staff. He came to deliver a blunt message, one that State Department officials had been unsuccessful in trying to soften in the hours before he appeared before the Senate Armed Services Committee.
Pakistani spies were directing the insurgency inside of Afghanistan, Mullen told the congressional panel, and had blood on their hands from the deaths of American troops and Afghan civilians. "The Haqqani Network," Mullen said, "acts as a veritable arm of Pakistan’s Inter-Services Intelligence agency."
Even after a tumultuous decade of American relations with Pakistan, no top American official up to that point had made such a direct accusation in public. The statement carried even more power because it came from Admiral Michael Mullen, whom Pakistani officials considered to be one of their few remaining allies in Washington. The generals in Pakistan were stung by Mullen’s comments, no one more than his old friend General Ashfaq Parvez Kayani.
The relationship was dead; the two men didn’t speak again after Mullen’s testimony. Each man felt he had been betrayed by the other.
And:
In his fourteen months at Langley, before ignominiously resigning over an extramarital affair with his biographer, Petraeus accelerated the trends that Hayden had warned him about. He pushed the White House for money to expand the CIA’s drone fleet, and he told members of Congress that, under his watch, the CIA was carrying out more covert-action operations than at any point in its history. Within weeks of arriving at Langley, Petraeus even ordered an operation that, up to that point, no CIA director had ever done before: the targeted killing of an American citizen [Anwar al-Awlaki].
More (#3) from The Way of the Knife:
Aurakzai had since retired from the military, and Musharraf had appointed him as the governor of the North-West Frontier Province, which gave him oversight over the tribal areas. Aurakzai believed that appeasing militant groups in the tribal areas was the only way to halt the spread of militancy into the settled areas of Pakistan. And he used his influence with Musharraf to convince the president on the merits of a peace deal in North Waziristan.
But Washington still needed to be convinced. President Musharraf decided to bring Aurakzai on a trip to sell the Bush White House on the cease-fire. Both men sat in the Oval Office and made a case to President Bush about the benefits of a peace deal, and Aurakzai told Bush that the North Waziristan peace agreement should even be replicated in parts of Afghanistan and would allow American troops to withdraw from the country sooner than expected.
Bush administration officials were divided. Some considered Aurakzai a spineless appeaser—the Neville Chamberlain of the tribal areas. But few saw any hope of trying to stop the North Waziristan peace deal. And Bush, whose style of diplomacy was intensely personal, worried even in 2006 about putting too many demands on President Musharraf. Bush still admired Musharraf for his decision in the early days after the September 11 attacks to assist the United States in the hunt for al Qaeda. Even after White House officials set up regular phone calls between Bush and Musharraf designed to apply pressure on the Pakistani leader to keep up military operations in the tribal areas, they usually were disappointed by the outcome: Bush rarely made specific demands on Musharraf during the calls. He would thank Musharraf for his contributions to the war on terrorism and pledge that American financial support to Pakistan would continue.
The prevailing view among the president’s top advisers in late 2006 was that too much American pressure on Musharraf could bring about a nightmarish scenario: a popular uprising against the Pakistan government that could usher in a radical Islamist government. The frustration of doing business with Musharraf was matched only by the fear of life without him. It was a fear that Musharraf himself stoked, warning American officials frequently about his tenuous grip on power and citing his narrow escape from several assassination attempts. The assassination attempts were quite real, but Musharraf’s strategy was also quite effective in maintaining a steady flow of American aid and keeping at bay demands from Washington for democratic reforms.
The North Waziristan peace deal turned out to be a disaster both for Bush and Musharraf. Miranshah was, in effect, taken over by the Haqqani Network as the group consolidated its criminal empire along the eastern edge of the Afghanistan border. As part of the agreement, the Haqqanis and other militant groups pledged to cease attacks in Afghanistan, but in the months after the deal was signed cross-border incursions from the tribal areas into Afghanistan aimed at Western troops rose by 300 percent. During a press conference in the fall of 2006, President Bush declared that al Qaeda was "on the run." In fact, the opposite was the case. The group had a safe home, and there was no reason to run anywhere.
More (#2) from The Way of the Knife:
The mantra of the task force, based inside an old Iraqi air-force hangar at Balad Air Base, north of Baghdad, was "fight for intelligence." In the beginning, the white dry-erase boards that McChrystal and his team had set up to diagram the terror group were blank. McChrystal realized that much of the problem came from the poor communication between the various American military commands in Iraq, with few procedures in place to share intelligence with one another. "We began a review of the enemy, and of ourselves," he would later write. "Neither was easy to understand." Just how little everyone knew was apparent in 2004, amid reports that Iraqi troops had captured al-Zarqawi near Fallujah. Since nobody knew exactly what the Jordanian terrorist looked like, he was released by accident.
And:
The Bush administration had secretly backed the operation, believing that Ethiopian troops could drive the Islamist Courts Union out of Mogadishu and provide military protection for the UN-backed transitional government. The invasion had achieved that first objective, but the impoverished Ethiopian government had little interest in spending money to keep its troops in Somalia to protect the corrupt transitional government. Within weeks of the end of fighting, senior Ethiopian officials declared that they had met their military objectives and began talking publicly about a withdrawal.
The Ethiopian army had waged a bloody and indiscriminate campaign against its most hated enemy. Using lead-footed urban tactics, Ethiopian troops lobbed artillery shells into crowded marketplaces and dense neighborhoods, killing thousands of civilians. Discipline in the Ethiopian ranks broke down, and soldiers went on rampages of looting and gang rape. One young man interviewed by the nonprofit group Human Rights Watch spoke of witnessing Ethiopians kill his father and then rape his mother and sisters.
More (#1) from The Way of the Knife:
And:
After a discussion between CIA and ISI officials about how to handle news of the strike, they decided that Pakistan would take credit for killing the man who had humiliated its military. One day after Nek Muhammad was killed, a charade began that would go on for years. Major General Shaukat Sultan, Pakistan’s top military spokesman, told Voice of America that "al Qaeda facilitator" Nek Muhammad and four other militants had been killed during a rocket attack by Pakistani troops.
And:
Kayani was, in essence, writing the playbook for how Pakistan could hold the strings in Afghanistan during the occupation of a foreign army. Pakistan, he wrote, could use proxy militias to wreak havoc in the country but also to control the groups effectively so that Islamabad could avoid a direct confrontation with the occupying force.
In a country without national identity, Kayani argued, it was necessary for the Afghan resistance to build support in the tribal system and to gradually weaken Afghanistan’s central government. As for Pakistan, Kayani believed that Islamabad likely didn’t want to be on a "collision course" with the Soviet Union, or at least didn’t want the Afghan resistance to set them on that path. Therefore, it was essential for Pakistan’s security to keep the strength of the Afghan resistance "managed."
By the time he took over the ISI in 2004, Kayani knew that the Afghan war would be decided not by soldiers in mountain redoubts but by politicians in Washington who had an acute sensitivity to America’s limited tolerance for years more of bloody conflict. He knew because he had studied what had happened to the Soviets. In his thesis, he wrote that "the most striking feature of the Soviet military effort at present is the increasing evidence that it may not be designed to secure a purely military solution through a decisive defeat of the ARM.
"This is likely due to the realization that such a military solution is not obtainable short of entailing massive, and perhaps intolerable, personnel losses and economic and political cost."
In 2004, Kayani’s thesis sat in the library at Fort Leavenworth, amid stacks of other largely ignored research papers written by foreign officers who went to Kansas to study how the United States Army fights its battles. This was a manual for a different kind of battle, a secret guerrilla campaign. Two decades after the young Pakistani military officer wrote it, he was the country’s spymaster, in the perfect position to put it to use.
From Freese’s Coal: A Human History:
More (#2) from Coal: A Human History:
It wasn’t long before the Pennsylvania legislature began to investigate Gowen’s monopolistic strategies. Appearing in person before the investigating committee, Gowen persuasively argued that large mining companies were in the public interest because only they could make the needed investments. Then he quite effectively changed the subject: He read out a long list of threats, beatings, fires, and shootings committed by "a class of agitators" among the anthracite miners. When he was through, the focus of the legislature and the public (for Gowen published his arguments) had shifted from the Reading’s growing power to the region’s growing wave of organized crime.
Gowen’s list of crimes had been compiled by Allan Pinkerton’s private detective agency, which Gowen had secretly hired two years earlier to infiltrate the Molly Maguires. Pinkerton had sent an Irish Catholic spy into the region, and after he had gathered evidence of their crimes, and perhaps provoked additional ones, the trap was sprung. In September 1875, scores of suspected Mollies were rounded up by the Coal and Iron Police, a private security force which was controlled by Gowen and was the main law enforcement agency in the region.
The following spring, a spectacular and high-profile murder trial of five of the suspects opened in anthracite country. Not only did Gowen’s secret agent testify against the suspects, who had been arrested by Gowen’s private police, but the prosecution team was led by none other than Gowen himself, the former district attorney now acting as special prosecutor for the state. It would be hard to find another proceeding in American history where a single corporation, indeed a single man, had so blatantly taken over the powers of the sovereign.
Gowen, ever flamboyant, appeared in the courtroom dressed in formal evening clothes. Before an electrified audience, he presented a case not just against the five suspects but against all the Molly Maguires, and, by strong implication, against the miners’ now-defunct union. At issue was not just the murder with which the suspects were charged but a whole array of crimes. Following Gowen’s line of reasoning, the press soon blamed the Molly Maguires for all the labor violence by miners during the long strike of 1875. After a series of trials, twenty accused Mollies were hanged, and twenty-six more imprisoned. For bringing down the Mollies, Gowen—so recently the subject of public scorn and suspicion—was lauded in the press for "accomplishing one of the greatest works for public good that has been achieved in this country in this generation."
Two conflicting lines of folklore have emerged around the Molly Maguires, one branding them brutal criminals, the other hailing them as martyrs in the battle against King Coal and corporate tyranny. Modern historians generally agree that the legend of the Mollies was greatly magnified by Gowen’s oratory and by the press, and that the wave of crime against coal producers in the area, particularly after the long strike of 1875, was the predictable result of the miners’ desperation rather than the work of a structured secret society. Clearly, the miners’ union, far from being dominated by the Mollies, had helped prevent violence by the miners while it existed. In the public’s mind, though, organized anthracite miners were now seen as terrorists, and support for miners’ attempts to unionize withered away. The specter of the Molly Maguires so completely undermined subsequent attempts to unionize that no union would succeed in organizing the anthracite miners until the United Mine Workers did so at the end of the century.
More (#1) from Coal: A Human History:
The worst problems were on the train itself, since many early passenger cars were roofless, and all were made of wood. For example, the inaugural trip of the Mohawk Valley line in New York in 1831 (just a year after the opening of the Liverpool and Manchester line) was marred when red-hot cinders rained down upon passengers who, just moments before, had felt privileged to be experiencing this exciting new mode of travel. Those who had brought umbrellas opened them, but tossed them overboard after the first mile once their covers had burned away. According to one witness, "a general melee [then] took place among the deck-passengers, each whipping his neighbor to put out the fire. They presented a very motley appearance on arriving at the first station."
Sparks on another train reportedly consumed $60,000 worth of freshly minted dollar bills that were on board, singeing many passengers in the process; according to one complaint, some of the women, who wore voluminous and flammable dresses, were left "almost denuded." Over a thousand patents were granted for devices that attempted to stop these trains from igniting their surroundings, their cargo, and their passengers; but the real cure would come later in the century, when coal replaced wood as the fuel of choice. In the meantime, some of the more safety-conscious railways had their passengers travel with buckets of sand in their laps to pour on each other when they caught fire.
Passages from The Many Worlds of Hugh Everett III:
Everett took the opposite view.
And:
[By] 1954, Everett was not alone in his feeling that the collapse postulate was illogical, but he was one of the very few physicists who dared to publicly express deep dissatisfaction with it… Everett had hoped to reinvent quantum mechanics on its own terms and was disappointed that his revolutionary idea was experimentally unproveable, as the only "proof" of it was that quantum mechanics works — a fact which was already known.
(It wasn’t until decades later that David Deutsch and others showed that Everettian quantum mechanics does make novel experimental predictions.)
A passage from Tim Weiner’s Legacy of Ashes: The History of the CIA:
Park admitted no important instance in which the OSS had helped to win the war, only mercilessly listing the ways in which it had failed. The training of its officers had been "crude and loosely organized." British intelligence commanders regarded American spies as "putty in their hands." In China, the nationalist leader Chiang Kai-shek had manipulated the OSS to his own ends. Germany’s spies had penetrated OSS operations all over Europe and North Africa. The Japanese embassy in Lisbon had discovered the plans of OSS officers to steal its code books—and as a consequence the Japanese changed their codes, which "resulted in a complete blackout of vital military information" in the summer of 1943. One of Park’s informants said, "How many American lives in the Pacific represent the cost of this stupidity on the part of OSS is unknown." Faulty intelligence provided by the OSS after the fall of Rome in June 1944 led thousands of French troops into a Nazi trap on the island of Elba, Park wrote, and "as a result of these errors and miscalculations of the enemy forces by OSS, some 1,100 French troops were killed."
...Colonel Park acknowledged that Donovan’s men had conducted some successful sabotage missions and rescues of downed American pilots. He said the deskbound research and analysis branch of OSS had done "an outstanding job," and he concluded that the analysts might find a place at the State Department after the war. But the rest of the OSS would have to go. "The almost hopeless compromise of OSS personnel," he warned, "makes their use as a secret intelligence agency in the postwar world inconceivable."
Comment
More from Legacy of Ashes:
Helms later determined that at least half the information on the Soviet Union and Eastern Europe in the CIA’s files was pure falsehood. His stations in Berlin and Vienna had become factories of fake intelligence. Few of his officers or analysts could sift fact from fiction. It was an ever present problem: more than half a century later, the CIA confronted the same sort of fabrication as it sought to uncover Iraq’s weapons of mass destruction.
And:
A wonderful yarn, repeated in many history books, but a bald-faced lie—a cover story that disguised a serious operational mistake. In reality, the CIA missed the boat.
Arbenz was desperate to break the American weapons embargo on Guatemala. He thought he could ensure the loyalty of his officer corps by arming them. Henry Hecksher had reported that the Bank of Guatemala had transferred $4.86 million via a Swiss account to a Czech weapons depot. But the CIA lost the trail. Four weeks of frantic searching ensued before the Alfhem docked successfully at Puerto Barrios, Guatemala. Only after the cargo was uncrated did word reach the U.S. Embassy that a shipment of rifles, machine guns, howitzers, and other weapons had come ashore.
The arrival of the arms—many of them rusted and useless, some bearing a swastika stamp, indicating their age and origin—created a propaganda windfall for the United States. Grossly overstating the size and military significance of the cargo, Foster Dulles and the State Department announced that Guatemala was now part of a Soviet plot to subvert the Western Hemisphere. The Speaker of the House, John McCormack, called the shipment an atomic bomb planted in America’s backyard.
Ambassador Peurifoy said the United States was at war. "Nothing short of direct military intervention will succeed," he cabled Wisner on May 21. Three days later, U.S. Navy warships and submarines blockaded Guatemala, in violation of international law.
And:
"How many men did Castillo Armas lose?" Ike asked.
Only one, Robertson replied.
"Incredible," said the president.
At least forty-three of Castillo Armas’s men had been killed during the invasion, but no one contradicted Robertson. It was a shameless falsehood.
This was a turning point in the history of the CIA. The cover stories required for covert action overseas were now part of the agency’s political conduct in Washington. Bissell stated it plainly: "Many of us who joined the CIA did not feel bound in the actions we took as staff members to observe all the ethical rules." He and his colleagues were prepared to lie to the president to protect the agency’s image. And their lies had lasting consequences.
I shared one quote here. More from Life at the Speed of Light:
There are also "biohackers" who want to experiment freely with the software of life. The theoretical physicist and mathematician Freeman Dyson has already speculated on what would happen if the tools of genetic modification became widely accessible in the form of domesticated biotechnology: "There will be do-it-yourself kits for gardeners who will use genetic engineering to breed new varieties of roses and orchids. Also kits for lovers of pigeons and parrots and lizards and snakes to breed new varieties of pets. Breeders of dogs and cats will have their kits too."
Many have focused on the risks of this technology’s falling into the "wrong hands." The events of September 11, 2001, the anthrax attacks that followed, and the H1N1 and H7N9 influenza pandemic threat have all underscored the need to take their concerns seriously. Bioterrorism is becoming ever more likely as the technology matures and becomes ever more available. However, it is not easy to synthesize a virus, let alone one that is virulent or infective, or to create it in a form that can be used in a practical way as a weapon. And, of course, as demonstrated by the remarkable speed with which we can now sequence a pathogen, the same technology makes it easier to counteract with new vaccines.
For me, a concern is "bioerror": the fallout that could occur as the result of DNA manipulation by a non-scientifically trained biohacker or "biopunk." As the technology becomes more widespread and the risks increase, our notions of harm are changing, along with our view of what we mean by the "natural environment" as human activities alter the climate and, in turn, change our world.
In a similar vein, creatures that are not "normal" tend to be seen as monsters, as the product of an abuse of power and responsibility, as most vividly illustrated by the story of Frankenstein. Still, it is important to maintain our sense of perspective and of balance. Despite the knee-jerk demands for ever more onerous regulation and control measures consistent with the "precautionary principle" — whatever we mean by that much-abused term — we must not lose sight of the extraordinary power of this technology to bring about positive benefits for the world.
Comment
Also from Life at the Speed of Light:
Among its recommendations to the president, the commission said that the government should undertake a coordinated evaluation of public funding for synthetic-biology research, including studies on techniques for risk assessment and risk reduction and on ethical and social issues, so as to reveal noticeable gaps, if one considered that "public good" should be the main aim. The recommendations were, fortunately, pragmatic: given the embryonic state of the field, innovation should be encouraged, and, rather than creating a traditional system of bureaucracy and red tape, the patchwork quilt of regulation and guidance of the field by existing bodies should be coordinated.
Concerns were, of course, expressed about "low-probability, potentially high-impact events," such as the creation of a doomsday virus. These rare but catastrophic possibilities should not be ignored, given that we are still reeling from the horrors of September 11. Nor should they be overstated: though one can gain access to "dangerous" viral DNA sequences, obtaining them is a long way from growing them successfully in a laboratory. Still, the report stated that safeguards should be instituted for monitoring, containment, and control of synthetic organisms — for instance, by the incorporation of "suicide genes," molecular "brakes," "kill switches," or "seatbelts" that restrain growth rates or require special diets, such as novel amino acids, to limit their ability to thrive outside the laboratory. As was the case with our "branded" bacterium, we need to find new ways to label and tag synthetic organisms. More broadly, the report called for international dialogue about this emerging technology, as well as adequate training to remind all those engaged in this work of their responsibilities and obligations, not least to biosafety and stewardship of biodiversity, ecosystems, and food supplies. Though it encouraged the government to back a culture of self-regulation, it also urged it to be vigilant about the possibilities of do-it-yourself synthetic biology being carried out in what it called "noninstitutional settings." One problem facing anyone who casts a critical eye over synthetic biology is that the field is evolving so quickly. For that reason, assessments of the technology should be under rolling review, and we should be ready to introduce new safety and control measures as necessary.
This seems obviously false. Local expenditures—of money, pride, possibility of not being the first to publish, etc.—are still local; global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
This is to be taken as an arguendo, not as the author’s opinion, right? See IEM on the minimal conditions for takeoff. Albeit if "AI-complete" is taken in a sense of generality and difficulty rather than "human-equivalent" then I agree much more strongly, but this is correspondingly harder to check using some neat IQ test or other "visible" approach that will command immediate, intuitive agreement.
Most obviously molecular nanotechnology a la Drexler, the other ones seem too ‘straightforward’ by comparison. I’ve always modeled my assumed social response for AI on the case of nanotech, i.e., funding except for well-connected insiders, term being broadened to meaninglessness, lots of concerned blither by ‘ethicists’ unconnected to the practitioners, etc.
Comment
This seems obviously false. Local expenditures—of money, pride, possibility of not being the first to publish, etc.—are still local; global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
Climate change doesn’t have the aspect that "if this ends up being a problem at all, then chances are that I (or my family/...) will die of it".
(Agree with the rest of the comment.)
Comment
Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous). Many others do not believe it about AI. Many put sizable climate-change disaster into the far future. Many people will go on believing this about AI independently of any evidence which accrues. Actors with something to gain by minimizing their belief in climate change so minimize. This has also been true in AI risk so far.
Comment
Hm! I cannot recall a single instance of this. (Hm, well; I can recall one instance of a TV interview with a politician from a non-first-world island nation taking projections seriously which would put his nation under water, so it would not be much of a stretch to think that he’s taking seriously the possibility that people close to him may die from this.) If you have, probably this is because I haven’t read that much about what people say about climate change. Could you give me an indication of the extent of your evidence, to help me decide how much to update?
Ok, agreed, and this still seems likely even if you imagine sensible AI risk analyses being similarly well-known as climate change analyses are today. I can see how it could lead to an outcome similar to today’s situation with climate change if that happened… Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence, that seems to align personal selfishness with global interests more than (climate change as it has looked to me so far).
Comment
Hm! I cannot recall a single instance of this.
Will keep an eye out for the next citation.
This has not happened with AI risk so far among most AIfolk, or anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don’t automatically change their behavior as soon as they are told that something bad might happen to them personally; and when we were breaking it up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles.
People only avoid certain sorts of death risks under certain circumstances.
Comment
Thanks!
Point. Need to think.
Being told something is dangerous =/= believing it is =/= alieving it is.
Right. I’ll clarify in the OP.
This seems implied by X-complete. X-complete generally means "given a solution to an X-complete problem, we have a solution for X".
eg. NP complete: given a polynomial solution to any NP-complete problem, any problem in NP can be solved in polynomial time.
(Of course the technical nuance of the strength of the statement X-complete is such that I expect most people to imagine the wrong thing, like you say.)
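For concreteness, the reduction structure behind "X-complete" can be written out (standard complexity-theory notation; this formalization is my addition, not part of the thread):

```latex
% Z is NP-complete iff Z is in NP and every problem in NP
% polynomial-time reduces to Z:
Z \text{ is NP-complete} \iff Z \in \mathrm{NP} \,\wedge\, \forall Y \in \mathrm{NP}:\; Y \le_p Z
% Hence a polynomial-time algorithm for Z yields, via the
% reductions, a polynomial-time algorithm for every Y in NP.
```

By analogy, calling a capability "AI-complete" asserts that a solution to it would carry over to the general problem of human-level AI itself.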
(I don’t have answers to your specific questions, but here are some thoughts about the general problem.)
I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (though I haven’t thought about this enough to put numbers on it), but I’m not comforted by these parts, because I also assign a non-small chance to their going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".
That said, there are some steps in the argument for hope that I’m really worried about:
I worry that even smart (Nobel prize-type) people may end up getting the problem completely wrong, because MIRI’s argument tends to conspicuously not be reinvented independently elsewhere (even though I find myself agreeing with all of its major steps).
I worry that even if they get it right, by the time we have visible signs of AGI we will be even closer to it than we are now, so there will be even less time to do the basic research necessary to solve the problem, making it even less likely that it can be done in time.
Although it’s also true that I assign some probability to e.g. AGI without visible signs, I think the above is currently the largest part of why I feel MIRI work is important.
I personally am optimistic about the world’s elites navigating AI risk as well as possible subject to inherent human limitations that I would expect everybody to have, and the inherent risk. Some points:
I’ve been surprised by people’s ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don’t care about the far future will be motivated to prevent it.
The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanoes, climate change tail risk, etc.). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I’m blurring "near mode instrumental rationality" and "far mode instrumental rationality," but I think there’s a fair amount of overlap between the two things. E.g., China is pushing hard on nuclear energy and on renewable energies, even though they won’t be needed for years.
Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it’s more salient, and in the future it will be still more salient.
In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
I should clarify that with the exception of my first point, the arguments that I give are arguments that humanity will address AI risk in a near optimal way – not necessarily that AI risk is low.
For example, it could be that people correctly recognize that building an AI will result in human extinction with probability 99%, and so implement policies to prevent it, but that sometime over the next 10,000 years, these policies will fail, and AI will kill everyone.
But the actionable thing is how much we can reduce the probability of AI risk, and if by default people are going to do the best that one could hope, we can’t reduce the probability substantially.
Comment
What?
Comment
Rationality is systematized winning. Chance plays a role, but over time it’s playing less and less of a role, because of more efficient markets.
Comment
There is lots of evidence that people in power are the most rational, but there is a huger prior to overcome.
Among people for whom power has an unsatiated major instrumental or intrinsic value, the most rational tend to have more power- but I don’t think that very rational people are common and I think that they are less likely to want more power than they have.
Particularly since the previous generation of power-holders used different factors when they selected their successors.
Comment
I agree with all of this. I think that "people in power are the most rational" was much less true in 1950 than it is today, and that it will be much more true in 2050.
Actually that’s a badly titled article. At best "Rationality is systematized winning" applies to instrumental, not epistemic, rationality. And even for that you can’t make rationality into systematized winning by defining it so. Either that’s a tautology (whatever systematized winning is, we define that as "rationality") or it’s an empirical question. I.e. does rationality lead to winning? Looking around the world at "winners", that seems like a very open question.
And now that I think about it, it’s also an empirical question whether there even is a system for winning. I suspect there is—that is, I suspect that there are certain instrumental practices one can adopt that are generically useful for achieving a broad variety of life goals—but this too is an empirical question we should not simply assume the answer to.
Comment
I agree that my claim isn’t obvious. I’ll try to get back to you with detailed evidence and arguments.
The problem is that politicians have a lot to gain from really believing the stupid things they have to say to gain and hold power.
To quote an old thread:
Cf. Steven Pinker: historians who’ve studied Hitler tend to come away convinced he really believed he was a good guy.
To get the fancy explanation of why this is the case, see "Trivers’ Theory of Self-Deception."
It’s not much evidence, but the two earliest scientific investigations of existential risk I know of, LA-602 and the RHIC Review, seem to show movement in the opposite direction: "LA-602 was written by people curiously investigating whether a hydrogen bomb could ignite the atmosphere, and the RHIC Review is a work of public relations."
Perhaps the trend you describe is accurate, but I also wouldn’t be surprised to find out (after further investigation) that scientists are now increasingly likely to avoid serious analysis of real risks posed by their research, since they’re more worried than ever before about funding for their field (or, for some other reason). The AAAI Presidential Panel on Long-Term AI Futures was pretty disappointing, and like the RHIC Review seems like pure public relations, with a pre-determined conclusion and no serious risk analysis.
Why would a good AI policy be one which takes as a model a universe where world destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
This is assuming that people understand what makes an AI so dangerous—calling an AI a global catastrophic risk isn’t going to motivate anyone who thinks you can just unplug the thing (and even worse if it does motivate them, since then you have someone who is running around thinking the AI problem is trivial).
I think you’re just blurring "rationality" here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don’t see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don’t know what to say), and especially of the kind needed to properly handle AI—and claiming evidence for future good decisions related to AI risk because of domain expertise in entirely different fields is quite a stretch. Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don’t give two shits about AI risk—if they don’t think it worthy of attention, why would someone who has no experience with these kind of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about? Obviously they aren’t thinking about it now—why are you confident this won’t be the case in the future? Thinking about AI requires a rather large conceptual leap—"rationality" is necessary but not sufficient, so even if all powerful people were "rational" it doesn’t follow that they can deal with these issues properly or even single them out as something to meditate on, unless we have a genius orator I’m not aware of. It’s hard enough explaining recursion to people who are actually interested in computers. And it’s not like we can drop a UFAI on a country to get people to pay attention.
In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
It seems like you are claiming that AI safety does not require a substantial shift in perspective (I’m taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event) - rather, we can just keep chugging along because nice things can be "expected to increase over time", and this somehow will result in the kind of society we need. These statements always confuse me; one usually expects to be in a better position to solve a problem 5 years down the road, but trying to describe that advantage in terms of out of thin air claims about incremental changes in human behavior seems like a waste of space unless there is some substance behind it. They only seem useful when one has reached that 5 year checkpoint and can reflect on the current context in detail—for example, it’s not clear to me that the increasing availability of information is always a net positive for AI risk (since it could be the case that potential dangers are more salient as a result of unsafe AI research—the more dangers uncovered could even act as an incentive for more unsafe research depending on the magnitude of positive results and the kind of press received. But of course the researchers will make the right decision, since people are never overconfident...). So it comes off (to me) as a kind of sleight of hand where it feels like a point for optimism, a kind of "Yay Open Access Knowledge is Good!" applause light, but it could really go either way.
Also I really don’t know where you got that last idea—I can’t imagine that most people would find AI safety more glamorous than, you know, actually building a robot. There’s a reason why it’s hard to get people to do unit tests, and why software projects get bloated and abandoned. Something like what Haskell is to software would be optimal. I don’t think it’s a great idea to rely on the conscientiousness of people in this case.
Comment
Thanks for engaging.
The point is that I would have expected things to be worse, and that I imagine that a lot of others would have as well.
I think that people will understand what makes AI dangerous. The arguments aren’t difficult to understand.
Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean "rational with respect to being able to run a country," which is relevant), and I expect this trend to continue.
Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power.
Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long.
I agree that AI safety requires a substantial shift in perspective — what I’m claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent.
You don’t need "most people" to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn’t the most prestigious field.
If political leaders are sufficiently rational (as I expect them to be), they’ll give research grants and prestige to people who work on AI safety.
Comment
Things were a lot worse than everyone knew: Russia almost invaded Yugoslavia in the 1950s, which would have triggered a war according to newly declassified NSA journals. The Cuban Missile Crisis could easily have gone hot, and several times early warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
Comment
I agree that there were close calls. Nevertheless, things turned out better than I would have guessed, and indeed, probably better than a large fraction of military and civilian people would have guessed.
Comment
World war three seems certain to significantly decrease human population. From my point of view, I can’t eliminate anthropic reasoning for why there wasn’t such a war before I was born.
We still get people occasionally who argue the point while reading through the Sequences, and that’s a heavily filtered audience to begin with.
Comment
There’s a difference between "sufficiently difficult so that a few readers of one person’s exposition can’t follow it" and "sufficiently difficult so that after being in the public domain for 30 years, the arguments won’t have been distilled so as to be accessible to policy makers."
I don’t think that the arguments are any more difficult than the arguments for anthropogenic global warming. One could argue that the difficulty of these arguments has been a limiting factor in climate change policy, but I believe that by far the dominant issue has been misaligned incentives, though I’d concede that this is not immediately obvious.
And I have the impression that relatively low-ranking people helped produce this outcome by keeping information from their superiors. Petrov chose not to report a malfunction of the early warning system until he could prove it was a malfunction. People during the Korean war and possibly Vietnam seem not to have passed on the fact that pilots from Russia or America were cursing in their native languages over the radio (and the other side was hearing them).
This in fact is part of why I don’t think we ‘survived’ through the anthropic principle. Someone born after the end of the Cold War could look back at the apparent causes of our survival. And rather than seeing random events, or no causes at all, they would see a pattern that someone might have predicted beforehand, given more information.
This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.
Comment
As I remark elsewhere in this thread, the point is that I would have expected substantially more nuclear exchange by now than actually happened, and in view of this, I updated in the direction of things being more likely to go well than I would have thought. I’m not saying "the fact that there haven’t been nuclear exchanges means that destructive things can’t happen."
I was using the nuclear war thing as one of many outside views, not as direct analogy. The AI situation needs to be analyzed separately — this is only one input.
It may be challenging to estimate the "actual, at the time" probability of a past event that would quite possibly have resulted in you not existing. Survivor bias may play a role here.
Comment
Nuclear war would have to be really, really big to kill a majority of the population, and probably even if all weapons were used the fatality rate would be under 50% (with the uncertainty coming from nuclear winter). Note that most residents of Hiroshima and Nagasaki survived the 1945 bombings, and that fewer than 60% of people live in cities.
Comment
It depends on the nuclear war. An exchange of bombs between India and Pakistan probably wouldn’t end human life on the planet. However, an all-out war between the U.S. and the U.S.S.R. in the 1980s most certainly could have. Fortunately that doesn’t seem to be a big risk right now. 30 years ago it was. I don’t feel confident in any predictions one way or the other about whether this might be a threat again 30 years from now.
Comment
Why do you think this?
Comment
Because all the evidence I’ve read or heard (most of it back in the 1980s) agreed on this. Specifically, in a likely exchange between the U.S. and the USSR, the northern hemisphere would have been rendered completely uninhabitable within days. Humanity in the southern hemisphere would probably have lasted somewhat longer, but still would have been destroyed by nuclear winter and radiation. Details depend on the exact distribution of targets.
Remember Hiroshima and Nagasaki were 2 relatively small fission weapons. By the 1980s the USSR and the US each had enough much bigger fusion bombs to destroy the planet individually. The only question was how many each would use in an exchange and where they would target them.
Comment
This is mostly out of line with what I’ve read. Do you have references?
I’m not sure what the correct way to approach this would be. I think it may be something like comparing the number of people in your immediate reference class—depending on preference, this could be "yourself precisely" or "everybody who would make or have made the same observation as you"—and then ask "how would nuclear war affect the distribution of such people in that alternate outcome". But that’s only if you give each person uniform weighting of course, which has problems of its own.
Comment
Sure, these things are subtle — my point was that the number of people who would have perished isn’t very large in this case, so that under a broad class of assumptions, one shouldn’t take the observed absence of nuclear conflict to be a result of survivorship bias.
The argument from hope or towards hope or anything but despair and grit is misplaced when dealing with risks of this magnitude.
Don’t trust God (or semi-competent world leaders) to make everything magically turn out all right. The temptation to do so is either a rationalization of wanting to do nothing, or based on a profoundly miscalibrated optimism for how the world works.
/doom
Comment
I agree. Of course the article you linked to ultimately attempts to argue for trusting semi-competent world leaders.
Comment
It alludes to such an argument and sympathizes with it. Note I also "made the argument" that civilization should be dismantled.
Personally I favor the FAI solution, but I tried to make the post solution-agnostic and mostly demonstrate where those arguments are coming from, rather than argue any particular one. I could have made that clearer, I guess.
Thanks for the feedback.
Aren’t we seeing "visible signals" already? Machines are better than humans at lots of intelligence-related tasks today.
Comment
I interpreted that as ‘visible signals of danger’, but I could be wrong.
Cryptography and cryptanalysis are obvious precursors of supposedly-dangerous tech within IT.
Looking at their story, we can plausibly expect governments to attempt to delay the development of "weaponizable" technology by others.
These days, cryptography facilitates international trade. It seems like a mostly-positive force overall.
One question is whether AI is like CFCs, or like CO2, or like hacking.
With CFCs, the solution was simple: ban CFCs. The cost was relatively low, and the benefit relatively high.
With CO2, the solution is equally simple: cap and trade. It’s just not politically palatable, because the problem is slower-moving, and the cost would be much, much greater (perhaps great enough to really mess up the world economy). So, we’re left with the second-best solution: do nothing. People will die, but the economy will keep growing, which might balance that out, because a larger economy can feed more people and produce better technology.
With hacking, we know it’s a problem and we are highly motivated to solve it, but we just don’t know how. You can take every recommendation that Bruce Schneier makes, and still get hacked. The US military gets hacked. The Australian intelligence agency gets hacked. Swiss banks get hacked. And it doesn’t seem to be getting better, even though we keep trying.
Banning AI research (once it becomes clear that RSI is possible) would have the same problem as banning CO2. And it might also have the same problems as hacking: how do you stop people from writing code?
Here are my reasons for pessimism:
There are likely to be effective methods of controlling AIs that are of subhuman or roughly human-level intelligence which do not scale up to superhuman intelligence. These include, for example, reinforcement via reward/punishment, mutually beneficial trade, and legal institutions. Controlling superhuman intelligence will likely require qualitatively different methods, such as having the superintelligence share our values. Unfortunately, the existence of effective but unscalable methods of AI control will probably lull elites into a false sense of security: as we deploy increasingly smart AIs without incident, they will both increase investment in AI capability research and reduce research into "higher" forms of AI control.
The only possible approaches I can see of creating scalable methods of AI control require solving difficult philosophical problems which likely require long lead times. By the time elites take the possibility of superhuman AIs seriously and realize that controlling them requires approaches very different from controlling subhuman and human-level AIs, there won’t be enough time to solve these problems even if they decide to embark upon Manhattan-style projects (because there isn’t sufficient identifiable philosophical talent in humanity to recruit for such projects to make enough of a difference).
In summary, even in a relatively optimistic scenario, one with steady progress in AI capability along with apparent progress in AI control/safety (and nobody deliberately builds a UFAI for the sake of "maximizing complexity of the universe" or what have you), it’s probably only a matter of time until some AI crosses a threshold of intelligence and manages to "throw off its shackles". This may be accompanied by a last-minute scramble by mainstream elites to slow down AI progress and research methods of scalable AI control, which (if it does happen) will likely be too late to make a difference.
Congress’ non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.
Perhaps someone could convince congress that "Terrorists" had developed "geomagnetic weaponry" and new "geomagnetic defence systems" need to be implemented urgently. (Being seen to be) taking action to defend against the hated enemy tends to be more motivating than worrying about actual significant risks.
Even if one organization navigates the creation of friendly AI successfully, won’t we still have to worry about preventing anyone from ever creating an unsafe AI?
Unlike a single nuclear weapon, a single AI might have world-ending consequences, and an AI requires no special resources. Theoretically a seed AI could be uploaded to Pirate Bay, from where anyone could download and compile it.
If the friendly AI comes first, the goal is for it to always have enough resources to be able to stop unsafe AIs from being a big risk.
Upvoted, but "always" is a big word. I think the hope is more for "as long as it takes until humanity starts being capable of handling its shit itself"...
Why the downvotes? Do people feel that "the FAI should at some point fold up and vanish out of existence" is so obvious that it’s not worth pointing out? Or disagree that the FAI should in fact do that? Or feel that it’s wrong to point this out in the context of Manfred’s comment? (I didn’t mean to suggest that Manfred disagrees with this, but felt that his comment was giving the wrong impression.)
Will sentient, self-interested agents ever be free from the existential risks of UFAI/intelligence amplification without some form of oversight? It’s nice to think that humanity will grow up and learn how to get along, but even if that’s true for 99.9999999% of humans, that leaves 7 people from today’s population who would probably have the power to trigger their own UFAI hard takeoff after an FAI fixes the world and then disappears. Even if such a disaster could be stopped, it is a risk probably worth the cost of keeping some form of FAI around indefinitely. What FAI becomes is anyone’s guess, but the need for what FAI does will probably not go away. If we can’t trust humans to do FAI’s job now, I don’t think we can trust humanity’s descendants to do FAI’s job either, just from Löb’s theorem. I think it is unlikely that humans will become enough like FAI to properly do FAI’s job. They would essentially give up their humanity in the process.
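The arithmetic behind that "7 people" figure, assuming a present-day population of roughly 7 billion (my assumption; the comment doesn’t state it):

```latex
7 \times 10^{9} \times (1 - 0.999999999) \;=\; 7 \times 10^{9} \times 10^{-9} \;=\; 7
```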
A secure operating system for governed matter doesn’t need to take the form of a powerful optimization process, nor does verification of transparent agents trusted to run at root level. Benja’s hope seems reasonable to me.
This seems non-obvious. (So I’m surprised to see you state it as if it was obvious. Unless you already wrote about the idea somewhere else and are expecting people to pick up the reference?) If we want the "secure OS" to stop posthumans from running private hell simulations, it has to determine what constitutes a hell simulation and successfully detect all such attempts despite superintelligent efforts at obscuration. How does it do that without being superintelligent itself?
This sounds interesting but I’m not sure what it means. Can you elaborate?
Hm, that’s true. Okay, you do need enough intelligence in the OS to detect certain types of simulations, and/or the intention to build such simulations, however obscured.
If you can verify an agent’s goals (and competence at self-modification), you might be able to trust zillions of different such agents to all run at root level, depending on what the tiny failure probability worked out to quantitatively.
That means each non-trivial agent would become the FAI for its own resources. To see the necessity of this, imagine what initial verification would be required to allow an agent to simulate its own agents. Restricted agents may not need a full FAI if they are proven to avoid simulating non-restricted agents, but any agent approaching the complexity of humans would need the full FAI "conscience" running to evaluate its actions and interfere if necessary.
EDIT: "interfere" is probably the wrong word. From the inside the agent would want to satisfy the FAI goals in addition to its own. I’m confused about how to talk about the difference between what an agent would want and what an FAI would want for all agents, and how it would feel from the inside to have both sets of goals.
I’d hope so, since I think I got the idea from you :-)
This is tangential to what this thread is about, but I’d add that I think it’s reasonable to have hope that humanity will grow up enough that we can collectively make reasonable decisions about things affecting our then-still-far-distant future. To put it bluntly, if we had an FAI right now I don’t think it should be putting a question like "how high is the priority of sending out seed ships to other galaxies ASAP" to a popular vote, but I do think there’s reasonable hope that humanity will be able to make that sort of decision for itself eventually. I suppose this is down to definitions, but I tend to visualize FAI as something that is trying to steer the future of humanity; if humanity eventually takes on the responsibility for this itself, then even if for whatever reason it decides to use a powerful optimization process for the special purpose of preventing people from building uFAI, it seems unhelpful to me to gloss this without more qualification as "the friendly AI [… will always …] stop unsafe AIs from being a big risk", because the latter just sounds to me like we’re keeping around the part where it steers the fate of humanity as well.
Thanks for explaining the reasoning!
I do agree that it seems quite likely that even in the long run, we may not want to modify ourselves so that we are perfectly dependable, because it seems like that would mean getting rid of traits we want to keep around. That said, I agree with Eliezer’s reply about why this doesn’t mean we need to keep an FAI around forever; see also my comment here.
I don’t think Löb’s theorem enters into it. For example, though I agree that it’s unlikely that we’d want to do so, I don’t believe Löb’s theorem would be an obstacle to modifying humans in a way making them super-dependable.
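For readers who haven’t seen it, Löb’s theorem (the standard statement, included here only as background; neither comment spells it out) says that for a sufficiently strong formal theory T:

```latex
\text{If } T \vdash \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P, \quad \text{then } T \vdash P.
```

Informally: a theory that trusts its own proofs of P already proves P. It constrains formal systems reasoning about their own provability, which is why its relevance to trusting one’s descendants is disputed here.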
What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?
If "AI safety problems" here do not refer to FAI problems, then how do those problems get solved, according to this argument?
We see pretty big boosts already, IMO—largely by facilitating networking effects. Idea recombination and testing happen faster on the internet.
@Lukeprog, can you (1) briefly update us on your working answers to the questions posed, and (2) give your current confidence (and, if you would like to, MIRI’s confidence as an organisation by proxy) in each of the three claims?
I think there’s a >10% chance AI will not be preceded by visible signals.
I think the elites’ safety measures will likely be insufficient.
Thank you for your diligence.
There’s another reason for hope in this above global warming: The idea of a dangerous AI is already common in the public eye as "things we need to be careful about." A big problem the global warming movement had, and is still having, is convincing the public that it’s a threat in the first place.
Whom do you mean by "elites"? Keep in mind that major disruptive technical progress of the kind likely to precede the creation of a full AGI tends to cause the sort of social change that shakes up the social hierarchy.
Combining the beginning and the end of your questions reveals an answer.
Ask how "just fine" elites navigated any of those, and you have analogous answers.
You might also clarify whether you are interested in what is just fine for everyone, or just fine for the elites, or just fine for the AI in question. The answer will change accordingly.