As someone who works in IT, I think we're fairly close now to building weakly superhuman AI, which will probably then enable us, with its assistance, to build strongly superhuman AI. We're talking about initially building a thinking machine that not only blows away the Turing test but is modestly superior to humans in all mental capabilities (weakly superhuman AI). That machine would then be able to access and apply human knowledge in a humanlike fashion to accelerate Moore's Law and design AI so powerful it does more processing in a day than the entire human race has done since the dawn of history (strongly superhuman AI). While the engineering capabilities of both weakly and strongly superhuman AI will still be limited and slowed somewhat by the need to do real-time, hands-on physical experimentation, strongly superhuman AI will be able to do incredible things involving pure math: not only applying complex mathematical theorems to solve equations far too difficult for humans to solve (such as some of those involving string theory), but actually devising new methods and proving new theorems, solving in minutes problems that mathematicians currently spend decades or lifetimes on. But most importantly, strongly superhuman AI will likely be used to accelerate Moore's Law still further, eventually designing and creating far more powerful AI: an intellect that can do a billion times more processing in a second than the sum of all human thinking over all time to this point, as far beyond humans in intelligence as humans are beyond insects, AI so powerful it can be called weakly godlike. This could all happen in the next 10 to 50 years.
Presumably, precautions would be taken to isolate a weakly godlike intelligence, such as preventing it from directly influencing the physical world, and it would surely be programmed to ensure it takes no actions detrimental to our welfare. And the benefits weakly godlike AI could deliver to mankind are vast and unknowable, bounded only by physical law. But can an intelligence that powerful really be safely contained? In short, can you keep a god, even a weak and kindly disposed one, in a box?
To some extent the answer depends on aspects of physics that are currently beyond human understanding (as does the question of whether such higher-order intelligences can even be built at all), but weakly godlike AI may find such things mere child's play to discover and exploit, in ways we can't imagine. Are we approaching, or already in, a Vingean Singularity? As Glenn points out, higher intelligence doesn't equate to lust for power, though, applying Darwin, one could argue that those that don't possess both might fall prey to those that do. So should we fear or embrace such a state? I have a feeling we are going to find the answer in the not-too-distant future, for better or worse.
UPDATE: A belated welcome to Instapundit readers.
UPDATE: Added link in first sentence. Meant to do it originally, as it was part of the inspiration for the post, but forgot. Also, welcome Carnival Of Tomorrow readers!