Interesting (fairly long) article discussing the premise that the human brain seems to have some capacity for solving non-computable problems (i.e., problems not solvable by any Turing machine, no matter how large), and possible areas to consider for how this might be possible. Of course, understanding how non-computable problems can be solved itself requires stepping outside the algorithmic world. He also addresses a lot of objections to the argument.
If true, then despite the people who say AI is the salvation of humanity (Ray Kurzweil) or a danger to be approached cautiously (Elon Musk), real AI is not coming anytime soon.
Responding to your comment about "Real AI" not coming anytime soon, I think it is sneaking up on us fast and will have huge social and economic impacts long before we get to the Sci-Fi type AI, Singularity, etc.
It's not too soon for an AI Doomsday Clock.
Couldn't read this all. Too long and complicated. The parts I read didn't even address the way in which quantum randomness crosses scale in complex systems.
Tom, I think "failure to be the masters of our own creations" is already underway.
This is true when considering Turing machines; however, quantum computers can solve certain problems vastly faster than any known classical algorithm (strictly speaking, they compute the same class of functions as Turing machines, just far more efficiently for some problems). With quantum computing, I believe we will see a "quantum" shift in AI, pardon the pun. And that is when we will see self-aware and "conscious" computers. Then the world will become an interesting place indeed. Lawrence Krauss certainly believes so, and who am I to disagree?
I continue to be leery of any form of AI, myself. As reference, consider any dystopian fiction regarding ultra-intelligent computers, whether you wish to cite Colossus: The Forbin Project, the HAL 9000 or Skynet. Even the First Officer of the Enterprise had similar reservations:
Computers make excellent and efficient servants; but I have no wish to serve under them.
-- Spock, Star Trek, "The Ultimate Computer"
The most likely future scenario is that we as humans will spawn a technological evolutionary branch which will evolve at a far greater rate than ours. Our position as dominant species on this planet will be usurped by the new technology, and the only thing that can save us is if AI also develops some form of compassion. In any case, how much compassion can an advanced being have for its pet? Even if that pet was its creator?
This is why I advocate a knife-switch at the mains power feed. It's pretty difficult to hack a double pole, single throw switch!
For an AI whose intelligence is beyond ours the way ours is beyond an ant's, or even a chimpanzee's, social-engineering us to not cut off the power would be trivial. (It could even get us to encase the switch in a glob of epoxy, as an idea we wanted for our own reasons.)
It's worth revisiting the discussion "Terminator Movies Too Comforting"! (Lots there about how a superintelligent AI wouldn't be "human" or "inhuman"; our well-being, freedom, and even existence could be either irrelevant or a threat to the AI's primary goal, whatever that is.)
From Tim Urban's article quoted there:
[...] there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…
This may be the most important race in human history. There’s a real chance we’re finishing up our reign as the King of Earth—and whether we head next to a blissful retirement or straight to the gallows still hangs in the balance.
Technology has already turned human desire into a commodity. Biological beings might well be just a transitional phase in the continuing course of evolution. The Internet was at first considered a powerful and unstoppable tool for human liberation. It's quickly looking more like an unstoppable means of enslavement.
"As computer graphics get better, we believe all images less."
It's amazing what you can do with Photoshop these days, isn't it?
-- Rene Mathis, Casino Royale