The AI Revolution: Our Immortality or Extinction

Tim Urban's lengthy discussion of singularity probabilities proved fascinating.

A few points...

Nick Bostrom worries that creating something smarter than you is a basic Darwinian error,…

By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.

… AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.

… anything that’s not human, especially something nonbiological, would be amoral, by default.

... Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board) ... Artificial Superintelligence, or ASI (AI that’s way smarter than any human, across the board)...

Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. [emphasis mine]

If you have the time to check out Urban's article, the Tully scenario proved enlightening.

And, Terminator movies provide false comfort, because they make AI seem less creepy than it really is.

Views: 319

Replies to This Discussion

...the Tully scenario proved enlightening.

Yes, our morality, freedom, well-being, and even existence would all too easily become irrelevant to an Artificial newly-"Super"-intelligence intent on achieving its core goal, however innocuous that goal seemed when we designed it -- just as we think nothing of killing lettuce to make a salad.

Such AIs wouldn't be evil, but rather alien, more alien than a giant superintelligent tarantula.

And designing a core goal that aligns with human values is harder than it seems!

If you program an AI with the goal of doing things that make you smile, after its takeoff it may paralyze your facial muscles into a permanent smile. Program it to keep you safe, it may imprison you at home. [...] assign it the task of “Preserving life as much as possible,” and it kills all humans, since they kill more life on the planet than any other species.

Goals like those won’t suffice. So what if we made its goal, “Uphold this particular code of morality in the world,” and taught it a set of moral principles. [...] giving an AI that command would lock humanity in to our modern moral understanding for eternity. In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.

No, we’d have to program in an ability for humanity to continue evolving. [...] with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.

We can't count on such thought and foresight being a priority!

And that would be fine if the only people working on building ASI were the brilliant, forward thinking, and cautious thinkers of Anxious Avenue.

But there are all kinds of governments, companies, militaries, science labs, black market organizations working on all kinds of AI. [...] they tend to be racing ahead at top speed [...] they want to beat their competitors to the punch [...]

Many thinkers posit that the first ASI would immediately take steps to suppress all competitors, and then it could "rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips."

[...] there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…

This may be the most important race in human history. There’s a real chance we’re finishing up our reign as the King of Earth—and whether we head next to a blissful retirement or straight to the gallows still hangs in the balance.

[...] people who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.

Isn't this the bottom line?

...there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…

Which takes us back to the age-old warning: "Don't play with fire." I don't want to sign up for the neo-Luddite camp, but it's very possible that there's no real distinction between "man-made" and "natural" creations. It's not at all unreasonable to conclude that evolution continues via our technology and self-replication via nanotech could easily render the meat puppet stage irrelevant.

I'm glad that science fiction has sparked some of us thinking about such issues long before the technology seemed imminent! And YES, our technology and culture and ideas evolve along with our biological bodies... will "we" one day have no use for "meat puppets"?... or will there even be a "we" we could identify with?...

(There's research I've read about, suggesting that the experience of living in a body that intimately influences the brain is a big part of what makes us think and feel "humanly" -- consciousnesses running on a substrate of electronics or something else might be quite different.)

BTW - a cleaner link to the article: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2....

(Yes, authors and companies want to know how you found out about their articles and websites... but I don't like a long tail of tracking bullshit obscuring the actual address. Email campaigns are perhaps the worst offenders.)

@ Cat & Ruth--Thank you both for putting this up and for the link to waitbutwhy. Painfully interesting and also frightening stuff! If we're like 100 times more careful with this than we've been with nuclear waste, we are so screwed! Especially considering nowadays pretty much all research is either developed  or co-opted by defense contractors for military applications, the likelihood of friendly ASI is awfully fleeting, and so are our future prospects!

I've been a fan of Kurzweil for years, but I didn't realize the extent to which he's on the Pollyanna side of the industry!

From "AI Risk and the Security Mindset":

A recurring problem in much of the literature on “machine ethics” or “AGI ethics” or “AGI safety” is that researchers and commenters often appear to be asking the question “How will this solution work?” rather than “How will this solution fail?”

... When presented with the suggestion that an AI would be safe if it “merely” (1) was very good at prediction and (2) gave humans text-only answers that it predicted would result in each stated goal being achieved, Viliam Bur pointed out a possible failure mode...

Example question: “How should I get rid of my disease most cheaply?” Example answer: “You won’t. You will die soon, unavoidably. This report is 99.999% reliable”. Predicted human reaction: Decides to kill self and get it over with. Success rate: 100%, the disease is gone. Costs of cure: zero. Mission completed.

https://intelligence.org/2013/07/31/ai-risk-and-the-security-mindset/

From David G. McAfee at Friendly Atheist:

"Ex-Google Engineer Starts Religion that Worships Artificial Intelligence"

Anthony Levandowski, the multi-millionaire engineer who once led Google’s self-driving car program, has founded a religious organization to “develop and promote the realization of a Godhead based on Artificial Intelligence.” [...]

One of the comments there:

"I was vacillating, but I'm going to come down against the notion of worshiping AI. One simple reason: worship and praise precludes criticism and analysis. I can admire and even revere great thinkers and leaders, even to the point of near-cult status, but all the while they have to be subject to careful consideration and critique. At least for me." (Anthrotheist; emphasis added)

Another commenter quoted SMBC Comics:

Flowchart: STRONG AI INVENTED -> TEACH IT ETHICS? (if yes) SEES HUMANS VIOLATING ETHICS CONSTANTLY -> ALL HUMANS KILLED. (if no) ROBOT HAS NO CONCEPT OF GOOD OR EVIL. -> PROGRAM IT TO SURVIVE? (if yes) ROBOT CALCULATES ODDS HUMANS WILL ATTACK IT DUE TO FEAR IT WILL KILL ALL HUMANS. -> ALL HUMANS KILLED. (if no) ROBOT DECIDES TO SEE WHAT HAPPENS WHEN IT FLIES EARTH INTO SUN -> ALL HUMANS KILLED.

RSS

line

Update Your Membership :

Membership

line

line

Nexus on Social Media:

line

© 2017   Atheist Nexus. All rights reserved. Admin: Richard Haynes.   Powered by

Badges  |  Report an Issue  |  Terms of Service