[LINK]s: Who says Watson is only a narrow AI?

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai

OK, so it covers only a few human occupations so far.

But the list is steadily growing.

Now, connect it with a self-driving AI, and your cab's e-driver can make small talk, advise on a suspicious skin lesion, evaluate your investment portfolio, and help you fix an issue with your smartphone, all while cheaply and efficiently getting you to your destination.

How long until it can evaluate verbal or written customer requirements and write better routine software than your average programmer?

Comment

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=7ayMsK3GTLrW6ReRc

I’ve spent a bit of time trying to understand what Watson does, and couldn’t find a clear answer. I’d really appreciate a concise technical explanation.

What I got so far is that it runs a ton of different algorithms and combines the results with some sort of probabilistic reasoning to bet on the most likely correct answer. Is that roughly correct? And what are those algorithms, then?
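If that picture is right, the combination step might look like a weighted merge of per-algorithm evidence scores. Here is a toy sketch of that idea; all candidate answers, scorer scores, and weights below are invented for illustration and are not Watson's actual algorithms or values:

```python
# Toy sketch of a "many scorers, one probabilistic combination" pipeline:
# several independent scorers assign evidence scores to each candidate
# answer, and a logistic combination with (made-up) weights turns them
# into a single confidence. Watson's real scorers and weights differ.
import math

def combine(scores, weights, bias=-1.0):
    """Logistic combination of per-scorer evidence into one confidence."""
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical evidence scores per candidate from three different scorers
# (e.g., passage match, answer-type check, popularity) on one question.
candidates = {
    "Toronto": [0.2, 0.1, 0.9],
    "Chicago": [0.8, 0.7, 0.6],
}
weights = [1.5, 2.0, 0.5]  # in a real system, learned from training questions

confidences = {c: combine(s, weights) for c, s in candidates.items()}
best = max(confidences, key=confidences.get)  # bet on the highest confidence
```

The published descriptions suggest the real system learns such weights from prior questions; this sketch just hard-codes them to show the shape of the computation.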

Comment

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=E2PYJAopA5RnMXL5y

Did you see this summary? (The actual description of the system starts on page 9.)

EDIT: Also, the list of papers citing that article may provide papers with further detail. For example, that list contained Question analysis: How Watson reads a clue, which goes into considerably more detail about the question analysis stage.

Comment

Thanks for that "How Watson reads a clue" paper, that made it much clearer for me.

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=7rG3ZmkRh5pHwmmRt

write better routine software than your average programmer

The average programmer already emulates the Watson algorithm: search Google for answers to "how do I" (sort a list, create a new Qt window, rotate a cube in OpenGL) and slap together any likely-looking chunks of code in a language that might compile. It's even been automated already.

The only problem is, of course, GIGO.

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=ccRQ5Csje7LcqRHLi

I say Watson is only a narrow AI.

Comment

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=xZtFDHqojMZd6atmn

Well that’s settled then.

(Incidentally, I think everyone agrees it’s narrow now, but it does make "joining up" narrow AI sound more plausible than before.)

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=KhQ9Q8Gqp68axfKp6

:)

But isn’t it getting too wide too quickly?

Anyway, I am guessing that by your definition the difference between a narrow and a general AI is not the number of problem-solving or reasoning tasks where it is as good as or better than humans (even if that is the vast majority of such tasks), but rather a "general, flexible learning ability that would let them tackle entirely new domains", i.e. being vastly better than the average single human being, who generally sucks at adapting to new domains.

Comment

the number of problem-solving or reasoning tasks where it is as good as or better than humans

What, four?

Comment

Sigh. Charitable reading is not a strong suit of this place. Not four. How about 100? 1000? Would that be enough?

Comment

Sorry, the silliness was too tempting. But however you want to count the number of things that go into what Watson does, it really is a small portion of the things humans can do.

Comment

What exponent is on the number of things that humans can do, generalized to the degree of "drive cars"?

Comment

Hm, tough question. One way to get a quick lower bound might be to ask: what's something that uses the same general skills as driving cars, but is very different, and in how many ways is it different? So if knitting uses the same spatial skills, and we say it's different from driving a car in about 10 ways (where what we consider a "way" sets our scale), then there are at least 2^10 things that use the same skills as, but are different from, knitting and driving cars (among other bad assumptions, this assumes that two things being alike in some way is binary and transitive). If there are 10 domains (everything is approximately 10) like "spatial skills and spatial planning" or "basic motor coordination", then the lower bound would be more like 2^100.
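Spelling out the arithmetic in that estimate (the 10s are, as noted above, made-up scale choices, not measured quantities):

```python
# Back-of-envelope bound: if tasks sharing a skill set differ in `ways`
# binary ways, there are 2**ways distinct tasks at that scale; with
# `domains` independent skill domains, the bound compounds multiplicatively
# in the exponent.
def task_lower_bound(ways_per_domain, domains):
    return 2 ** (ways_per_domain * domains)

one_domain = task_lower_bound(10, 1)    # 2^10 = 1024 tasks
ten_domains = task_lower_bound(10, 10)  # 2^100 tasks
```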

Comment

If all of the things that humans can do are required for knitting and driving cars, then there are two things that humans can do, generalized to that level. If an AI could learn the hard way to drive and to knit, it would be able to do everything a human could do. I estimate that controlling vehicles is about four different skills by that definition (road vehicles, fixed-wing aircraft, rotary-wing aircraft, and reaction-mass spacecraft), but knitting, crocheting, and sewing are the same skill, and there are probably only two or three different skills that cover all of athletics (an AI that could learn to play football would probably be able to learn curling, but it might not be able to learn gymnastics or swimming).

I think that our existing AIs haven't demonstrated that they can learn to do anything the hard way. I could be wrong, because I don't have any deep insight into whether existing AIs learn or are created with full knowledge.

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=xaQ6kPnBjdQCanc22

What is the point of debating whether Watson should be called "general" or "narrow"? Do you think people who call Watson "narrow" are wrong in some substantial way (e.g., have wrong expectations or plans)? If so how?

Comment

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=bCuNsL8rqHM9paArc

Fair question. On this forum narrow AI = not really intelligent and benign, while general AI = potentially smarter than humans, ready to FOOM and dangerous. My point was that Watson might be some day providing an example of a smarter-than-human but benign (not FOOMable) AI, depending on how it is designed.

Comment

I think I understand your point, and have preemptively written a response at http://lesswrong.com/lw/bob/reframing_the_problem_of_ai_progress/. (In short, if Watson becomes smarter-than-human in many domains, it seems inevitable that the technological progress involved will be useful for building FOOMable AIs, even if Watson isn’t itself FOOMable.) If this doesn’t address your point, then I’ve probably misunderstood it, in which case maybe you can restate it in more detail?

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=ZsHoZ3jcG9BXsCDo3

Weren’t expert systems good at those kinds of things for several decades?

Comment

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=kvvFWsAHomQKiWzcx

Yes. What makes Watson exciting is that it can understand text well enough to prepare it as input for expert systems; in medicine, for example, most expert systems needed a human expert as I/O, and so they were of limited usefulness.

Watson is also backed by a huge corporation, which makes it easier to surmount obstacles like "but doctors don’t like competition."

Comment

Watson is also backed by a huge corporation, which makes it easier to surmount obstacles like "but doctors don’t like competition."

On the other hand, being a huge corporation makes it harder to surmount "relying on marketing hype to inflate the value-added of the product."

At any rate, the company I work for relies heavily on Cognos, and the metrics there seem pretty arbitrary: hocus-pocus to conjure simple numbers so directors can pretend they're making informed decisions rather than operating on blind guesswork and vanity... and to rationalize firings, skimpy raises, additional bureaucracy, and other unhappy decisions.*

*Come to think of it, "intelligence" or not, Cognos does emulate Homo sapiens psychology to a high degree of approximation.

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=xeh4r7m2FB4Zn3Ww9

Is anyone else interested in the possibility of Watson or Watson-like systems acting as a "juror" on court cases? Does Watson have any known biases?

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=tiNo9uFZGbRFk9uMB

Who says Watson is only a narrow AI?

From what I can tell, most people in the world who have considered the question, with the exception of shminux, say that Watson is a narrow AI. For example, googling "Watson narrow AI" produces many results describing it as a narrow AI, while googling "Watson general AI" produces no results describing Watson as a GAI (and "Watson GAI" thinks I really mean "Watson Guy"). The Wikipedia article describing what general artificial intelligence means also seems to exclude Watson as a candidate.

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=XdvsbbpA9cpHBTKFz

Watson is pretty clearly narrow AI, in the sense that if you called it General AI, you'd be wrong. There are simple cognitive tasks (like making a plan to solve a novel problem, modelling a new system, or even just playing Parcheesi) that it just can't do, at least not without a human writing a bunch of new code to add a module that does the new thing. It's not powerful in the way that a true GAI would be.

That said, Watson is a good deal less narrow than, say, Deep Blue. Watson has a great deal of analytic depth in a reasonably broad domain (structured knowledge extraction from unformatted English), which is a major leap forward. You might say that Watson is a rough analog of a language center connected to a memory system, sitting in a box. It's not a GAI by itself, but it could be a substantial component of one down the line.

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=iFunJRiufL42b2r6Y

Watson sounds cool, but this is a far cry from Artificial General Intelligence. How general is Watson as of now?

Comment

https://www.lesswrong.com/posts/JzFXw7bH4tK6n4h7q/link-s-who-says-watson-is-only-a-narrow-ai?commentId=kDokXzhsbB8Ldsikd

Not very, but it's already better than humans in several unrelated areas, and the list is getting longer. If some day it can behave more humanly than any single human, would it still be narrow?

Comment

The links in the OP are just press releases about pilot projects and product announcements. It seems too early to say "it’s already better than humans in several unrelated areas"?

Comment

Probably. We’ll have to wait for the production versions. But it does not appear to be just hype.