Unbounded Scales, Huge Jury Awards, & Futurism

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism

"Psychophysics," despite the name, is the respectable field that links physical effects to sensory effects. If you dump acoustic energy into air—make noise—then how loud does that sound to a person, as a function of acoustic energy? How much more acoustic energy do you have to pump into the air, before the noise sounds twice as loud to a human listener? It’s not twice as much; more like eight times as much.

Acoustic energy and photons are straightforward to measure. When you want to find out how loud an acoustic stimulus sounds, how bright a light source appears, you usually ask the listener or watcher. This can be done using a bounded scale from "very quiet" to "very loud," or "very dim" to "very bright." You can also use an unbounded scale, whose zero is "not audible at all" or "not visible at all," but which increases from there without limit.

When you use an unbounded scale, the observer is typically presented with a constant stimulus, the modulus, which is given a fixed rating. For example, a sound that is assigned a loudness of 10. Then the observer can indicate a sound twice as loud as the modulus by writing 20. And this has proven to be a fairly reliable technique.

But what happens if you give subjects an unbounded scale, but no modulus? Zero to infinity, with no reference point for a fixed value? Then they make up their own modulus, of course. The ratios between stimuli will continue to correlate reliably between subjects. Subject A says that sound X has a loudness of 10 and sound Y has a loudness of 15. If subject B says that sound X has a loudness of 100, then it’s a good guess that subject B will assign loudness in the vicinity of 150 to sound Y. But if you don’t know what subject C is using as their modulus—their scaling factor—then there’s no way to guess what subject C will say for sound X. It could be 1. It could be 1,000.
For a subject rating a single sound, on an unbounded scale, without a fixed standard of comparison, nearly all the variance is due to the arbitrary choice of modulus, rather than the sound itself.

"Hm," you think to yourself, "this sounds an awful lot like juries deliberating on punitive damages. No wonder there’s so much variance!" An interesting analogy, but how would you go about demonstrating it experimentally? Kahneman et al. presented 867 jury-eligible subjects with descriptions of legal cases (e.g., a child whose clothes caught on fire) and asked them to either
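The modulus effect can be sketched in a few lines of simulation. This is a hypothetical illustration, not data from the study: the moduli and the 1.5x loudness ratio are invented to match the post's X/Y example.

```python
import random

random.seed(0)

# Each "subject" rates two sounds on an unbounded scale, using a
# privately chosen modulus (scaling factor). Sound Y is 1.5x as loud
# as sound X, as in the post's example (10 vs. 15, or 100 vs. ~150).
TRUE_RATIO = 1.5

def rate_sounds(modulus):
    """Return (rating_x, rating_y) for one subject with a given modulus."""
    rating_x = modulus * 1.0          # sound X: relative loudness 1.0
    rating_y = modulus * TRUE_RATIO   # sound Y: 1.5x as loud as X
    return rating_x, rating_y

# Moduli vary over orders of magnitude from subject to subject.
for m in [random.choice([1, 10, 100, 1000]) for _ in range(5)]:
    x, y = rate_sounds(m)
    print(f"modulus={m:>4}: X={x:>7.1f}  Y={y:>7.1f}  ratio={y / x:.2f}")
# The Y/X ratio is 1.50 for every subject, but the raw rating for X
# alone tells you almost nothing: it is dominated by the modulus.
```

Within-subject ratios are stable; absolute ratings are not. That is the sense in which nearly all the variance of a single rating comes from the arbitrary modulus.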

Comment

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=coai7gX2S28rJ6ZY4

If you are asked to estimate a number that is a product (or sum) of many numbers, and you have good estimates for all of those numbers but one, the variance in that last, poorly estimated number will dominate the variance of your answer. It just takes one.
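This comment's point can be sketched with a toy estimate (a hypothetical illustration; the factor counts and spreads are invented): a product of three tightly known factors and one factor known only to within an order of magnitude.

```python
import random

random.seed(1)

def sample_estimate():
    """One draw of an estimate that multiplies four uncertain factors."""
    tight = [random.uniform(0.9, 1.1) for _ in range(3)]  # each known to +/-10%
    loose = 10 ** random.uniform(-0.5, 0.5)               # known to ~1 order of magnitude
    est = loose
    for t in tight:
        est *= t
    return est

samples = [sample_estimate() for _ in range(10_000)]
print(f"min={min(samples):.3f}  max={max(samples):.3f}")
# The final answers span roughly an order of magnitude -- almost all of
# that spread comes from the single loose factor, not the three tight ones.
```

The three tight factors together can only move the answer by about +/-30%; the one loose factor moves it by a factor of ten. One bad term is enough.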

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=FWxwCmhq9wpKvbRbk

I strongly encourage any AI worker who hasn’t already done so to read Ian McDonald’s ‘River of Gods’. He’s pretty positive (in timescale terms...) on AI; his answer to the question "How long will it be until we have human-level AI?" is 2047 AD, and it’s a totally gob-smacking, brilliant read.

Comment

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=CgiTLyKaQurkZrHWP

Is this the one you mean: River of Gods?

If so: it’s a novel… and it includes aliens… I admit I haven’t read it, but I’m skeptical as to how much you could deduce from it about the likelihood of AI...

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=8gYYkWG7E5v49r8Zk

Derrida must have written a thousand essays on how an author trying to be very precise about how language could possibly work winds up in an infinite loop, clarifying a final point that amounts, in effect, to starting over.

If you take AI as just such a project, this alone contributes a lot to an indefinite timeline, quite apart from the modulus problem.

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=9atmKKeJdmdBNdDsW

I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, "How long will it be until we have human-level AI?" The responses I’ve seen to this are all over the map. On one memorable occasion, a mainstream AI guy said to me, "Five hundred years." (!!)

Did you ask any of them how long they felt it would take to develop other "futuristic" technologies? (in other words, their rank ordering of technological changes).

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=bCKZ3XLkrPYg5Ljcj

The damages experiment, as described here, seems not to nail things down enough to say that what’s going on is that damages are expressions of outrage on a scale with arbitrary modulus. Here’s one alternative explanation that seems consistent with everything you’ve said: subjects vary considerably in their assessment of how effective a given level of damages is in deterring malfeasance, and that assessment influences (in the obvious way) their assessment of damages.

(I should add that I find the arbitrary-modulus explanation more plausible.)
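The alternative model in this comment can also be sketched (a hypothetical illustration; the outrage value and the belief distribution are invented): even if every subject feels identical outrage, spread in their beliefs about what level of damages deters produces enormous variance in dollar awards, so raw variance alone cannot distinguish the two explanations.

```python
import random

random.seed(2)

# Alternative model: all subjects share the same outrage, but differ
# in how large a penalty they believe is needed to deter malfeasance.
OUTRAGE = 7.0  # identical for every subject (bounded-scale units)

def award(deterrence_belief):
    """Dollar award = shared outrage scaled by a subject's deterrence belief."""
    return OUTRAGE * deterrence_belief

# Beliefs about effective deterrence span three orders of magnitude.
beliefs = [10 ** random.uniform(3, 6) for _ in range(1000)]
awards = [award(b) for b in beliefs]
print(f"min=${min(awards):,.0f}  max=${max(awards):,.0f}")
# Awards vary over ~3 orders of magnitude even though outrage is fixed,
# mimicking the arbitrary-modulus prediction for between-subject variance.
```

Distinguishing the models would need something beyond variance in single awards, such as whether each subject's *ratios* of awards across different cases stay stable (as the modulus account predicts).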

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=TJJjuyFkYhriXsBcG

Interesting, but without the dollar values adjusted for inflation, I feel like the point of that part of the data is lost on me, although I get the idea.

Edit: It only went up to $0.84, so I guess it doesn’t matter that much (I used the Inflation Calculator).

https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism?commentId=ruo2u8HvYaQmfumpa

"Assign a dollar value to punitive damages": does this correlate with the income of the people who responded? It seems plausible that people who earn more would assign a higher monetary punishment for bodily harm.