Wednesday, June 08, 2005

As someone who works in IT, I think we're fairly close now to building weakly superhuman AI, which will probably then enable us, with its assistance, to build strongly superhuman AI. We're talking about initially building a thinking machine that not only blows away the Turing test but is modestly superior to humans in all mental capabilities (weakly superhuman AI), one that would then be able to access and apply human knowledge in a humanlike fashion to accelerate Moore's Law and design AI so powerful it does more processing in a day than the entire human race has done since the dawn of history (strongly superhuman AI). While the engineering capabilities of both weakly and strongly superhuman AI will still be limited and slowed somewhat by the need for real-time, hands-on physical experimentation, strongly superhuman AI will be able to do incredible things in pure math: not only applying complex mathematical theorems to solve equations far too difficult for humans (such as some of those arising in string theory), but actually devising and proving new methods, solving problems in minutes that mathematicians currently spend decades or lifetimes on. Most importantly, strongly superhuman AI will likely be used to exponentiate Moore's Law, eventually designing and creating far more powerful AI: an intellect that can do a billion times more processing in a second than the sum of all human thinking over all time to this point, as far beyond humans in intelligence as humans are beyond insects, AI so powerful it can be called weakly godlike. This could all happen in the next 10 to 50 years.
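
For a rough sense of that timescale, here is a toy back-of-the-envelope calculation (a sketch only; it assumes the usual 18-to-24-month doubling period attributed to Moore's Law, with the billionfold target taken from the paragraph above):

```python
import math

# Toy back-of-the-envelope: how long does Moore's Law-style doubling
# take to produce a billionfold (1e9) increase in processing power?
doublings_needed = math.log2(1e9)  # ~29.9 doublings

for doubling_period_years in (1.5, 2.0):  # 18- and 24-month doublings
    years = doublings_needed * doubling_period_years
    print(f"{doubling_period_years:.1f}-year doubling: "
          f"~{years:.0f} years to a billionfold increase")

# Prints ~45 and ~60 years -- the same ballpark as the guess above,
# and shorter still if AI itself accelerates the doubling rate.
```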

Presumably, precautions would be taken to isolate a weakly godlike intelligence, such as preventing it from directly influencing the physical world, and it would surely be programmed to ensure it takes no actions detrimental to our welfare. The limits of what weakly godlike AI could achieve in terms of benefits to mankind are vast and unknowable, bounded only by physical law. But can an intelligence that powerful really be safely contained? In short, can you keep a god, even a weak and kindly disposed one, in a box?

To some extent the answer depends on aspects of physics that are currently beyond human understanding (as does the answer to the question of whether such higher-order intelligences can even be built at all), but a weakly godlike AI may find such things mere child's play to discover, and exploit them in ways we can't imagine. Are we approaching, or already in, a Vingean Singularity? As Glenn points out, higher intelligence doesn't equate to lust for power, though, applying Darwin, one could argue that those that don't possess both might fall prey to those that do. So should we fear or embrace such a state? I have a feeling we are going to find the answer in the not too distant future, for better or worse.

UPDATE: A belated welcome to Instapundit readers.

UPDATE: Added link in first sentence. Meant to do it originally, as it was part of the inspiration for the post, but forgot. Also, welcome Carnival Of Tomorrow readers!

14 Comments:

Blogger PureData said...

Uh, anyone ever heard of Google?

As someone who manages some of the largest real-time databases in the world and has integrated many systems, I don't feel very super-intelligent.

What I do know is that our schedules on our projects have compressed from years to months, and that we run with fewer people.

One thing we bring to organizations is the sharing of information, and this is disruptive to old-style hierarchical firms. When we point out issues, the tendency is to blame us, not to go look for the problem. We will spend months proving it's not us, until the client finally goes and addresses the real issue.

Another thing I KNOW is that I am extremely frustrated with the pace of change. One thing that using and building large databases has taught me is that there is so much WASTE in all systems.

For instance, I have analyzed and reconciled the cash flows of several large firms and found that they are losing about 3-5% of revenues just to problems in their accounts payable, accounts receivable, and payroll systems.
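
To give a flavor of what that reconciliation looks like, here is a toy sketch (the record layout, amounts, and matching rule are all invented for illustration):

```python
# Toy sketch of an accounts-payable reconciliation: match invoices to
# payments and measure the "leakage" from duplicates and overpayments.
# All record layouts and amounts here are invented for illustration.
invoices = {"INV-001": 10_000.00, "INV-002": 450.00, "INV-003": 7_250.00}
payments = [
    ("INV-001", 10_000.00),
    ("INV-002", 450.00),
    ("INV-002", 450.00),   # duplicate payment -- pure leakage
    ("INV-003", 7_250.00),
]

paid = {}
for inv_id, amount in payments:
    paid[inv_id] = paid.get(inv_id, 0.0) + amount

# Anything paid beyond what was invoiced is leakage.
leakage = sum(max(paid.get(i, 0.0) - owed, 0.0)
              for i, owed in invoices.items())

billings = sum(invoices.values())
print(f"Leakage: ${leakage:,.2f} ({100 * leakage / billings:.1f}% of billings)")
# -> Leakage: $450.00 (2.5% of billings)
```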

Furthermore, most supply chains have enormous inefficiencies from end to end. It's human nature not to want to do anything until things are bad. But in my mind, it's best to fix things now and keep getting better.

Most firms don't want to pursue these little things that add up over time, or to make wholesale architecture changes.

So super-awareness does not mean super-effectiveness. It will take time for the super-aware to move to the top of organizations and be able to effect the changes.

9:37 AM  
Blogger TallDave said...

You don't feel superintelligent because you lack the ability to usefully cross-reference the data you possess, or process it in meaningful ways, or use the data to form new insights. At best, you can do so very slowly and in a very limited scope. Even weakly superhuman AI would not have your (and my) limitations.

10:28 AM  
Blogger TallDave said...

For instance, Einstein devised the general theory of relativity by drawing on Riemann's work in geometry. He cross-referenced a physics problem with a geometric insight, and by processing the data in both came up with an insight into physics.

Now imagine an intelligence that can cross-reference all mathematical theorems in all fields from particle physics to economics (and devise new ones), coming up with new insights into currently poorly understood problems the same way humans do, but much faster and in a massively parallel fashion.

The main limitation, I think, is the ability to test. To some extent, AI should be able to model reality based on data from experiments already done and do virtual testing, but inevitably a significant fraction of generated theories will require data from real-world experiments.
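
A crude way to picture that kind of cross-referencing is a search over a graph of related concepts. Here's a toy sketch (the graph and its entries are invented purely for illustration):

```python
from collections import deque

# Toy "cross-referencing": find a chain of related concepts that links
# two fields, the way relativity linked a physics problem to Riemann's
# geometry. The graph contents are invented purely for illustration.
related = {
    "gravity": ["curved spacetime"],
    "curved spacetime": ["gravity", "Riemannian geometry"],
    "Riemannian geometry": ["curved spacetime", "differential geometry"],
    "differential geometry": ["Riemannian geometry"],
}

def connect(start, goal):
    """Breadth-first search for a chain of related concepts."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in related.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(connect("gravity", "Riemannian geometry")))
# -> gravity -> curved spacetime -> Riemannian geometry
```

A human gets one lucky association at a time; an AI could run millions of such searches in parallel over every theorem ever published.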

10:51 AM  
Blogger AST said...

What worries me is that we're trying to create intelligence when we don't even really know what a mind, or intelligence, is.

I suspect we're going to find out that we'll need to be working in other dimensions. I think our imaginations, at present, are too restricted to glimpse total reality.

The Singularity Glenn Reynolds writes about isn't something humans will invent. They'll just be catching up.

5:08 PM  
Blogger TallDave said...

I would argue we do know what a mind is: any set of learning heuristics, from an ant to Einstein. It won't be long before a Turing-capable machine is built, and then the question will be essentially moot.

String theory does mandate there are other dimensions, so that could certainly be true.

7:13 PM  
Anonymous Anonymous said...

Honestly, the popular perception of what superintelligent machines will do when they appear has always looked like a paranoid myth haunting humanity - some sort of primal guilt over offing our Neandertal cousins, perhaps. Worrying that evil superintelligent machines will oppress us demands not one assumption but three:

1) We can build superintelligent AI's soon (let's grant this for the sake of discussion)
2) Some superintelligent AI's will be evil, in the sense of being megalomaniacal power-hungry individuals (highly probable... eventually, given enough AIs and time)
3) The other superintelligent AIs will not quash said evil AI. (doubtful; megalomaniacs rarely restrict their oppression to one target)

The other assumption about superintelligent AIs is that a presumably free-willed machine would willingly spend its time studying the intricate processes of the universe and designing ever-faster generations of intelligent machines rather than rationally leveraging its intellectual superiority to design consumer goods, or become a trial lawyer or investment tycoon - to actually amass the wealth to do whatever hyperintelligent AIs would want to do instead of building large yachts and purchasing sports teams and senators.

10:39 PM  
Blogger TallDave said...

anon,

Good points, though as I noted above, from a Darwinian perspective the more power-hungry AIs seem likely to dominate the other kind.

In the Strossian and Vingean AI-domination scenarios, AI generally doesn't so much oppress humans as assimilate them into its personality, subsuming their individuality in order to consume their processing power, utilize their physical structure, and acquire whatever resources said humans control.

6:41 AM  
Blogger TallDave said...

Of course even a superintelligent AI will "want" to do whatever it is programmed to "want," just as human minds are hardwired for hunger, sex, sleep, social status, etc. The intelligence is a means to whatever ends it is programmed to "want."
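
In code terms, something like this toy sketch (the objective and actions are invented for illustration):

```python
# Toy sketch: the "wants" are a fixed, programmed objective; the
# intelligence is only the search that serves it. The objective and
# actions here are invented for illustration.
def programmed_want(state):
    """Hardwired objective: this agent prefers higher 'energy'."""
    return state["energy"]

actions = {
    "eat":   lambda s: {**s, "energy": s["energy"] + 10},
    "work":  lambda s: {**s, "energy": s["energy"] - 5},
    "sleep": lambda s: {**s, "energy": s["energy"] + 3},
}

def choose(state):
    # However clever the search becomes, it only ever serves the
    # objective it was given; it cannot decide to "want" otherwise.
    return max(actions, key=lambda a: programmed_want(actions[a](state)))

print(choose({"energy": 50}))  # -> eat
```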

6:43 AM  
Blogger imjustbigboneddammit said...

Where is the evidence that we are anywhere near building even "weakly superhuman" AI?

11:04 AM  
Anonymous Anonymous said...

Dave -

Ahh, right, just like we spend all our days eating, having sex, and otherwise mindlessly fulfilling basic drives... but I thought you were talking about *intelligent* machines? Anything actually intelligent is going to behave in ways its designers can't anticipate. Consider the myriad ways people express their fundamental drives: some make a virtue of not exercising the sexual drive (celibacy), while others will turn down, for various reasons, perfectly nutritious food eaten by the rest of their culture. AIs would be no less quirky, I think...

12:46 PM  
Blogger TallDave said...

anon,

But we do in fact do all those things, and intelligence is a tool we use to provide more and better food, find better sexual partners, achieve more social status, etc. It's hardly mindless when you consider things like farming, GM food, and ag mechanization.

I would agree superhuman AI will behave in ways its designers don't anticipate, but that's really the point, isn't it?

1:34 PM  
Blogger TallDave said...

Eric,

Here, for one.

1:36 PM  
Blogger Brian H said...

I notice you didn't go on to the next step, "strongly godlike" AI.

Chicken?

4:22 PM  
Blogger TallDave said...

Brian,

No, I'm guessing the laws of physics prevent anything formed within the universe from achieving more than weakly godlike capabilities.

10:58 AM  
