Professor Henry Markram, a doctor-turned-computer engineer, announced that his team would create the world's first artificial conscious and intelligent mind by 2018. If Dr. Markram's project (called the 'Blue Brain' project) is successful, it will change the definition of what it means to be human. Among the questions that will be raised: does a functioning, reasoning brain require a physical body to be considered human? Would a functioning, conscious intellect in a computer be afforded human rights? If someone pulled the plug on the computer, would it be murder?
Markram is approaching the project in a radically different way from other AI researchers. He is attempting to map the neural network into a computer through extremely delicate dissection of brains (beginning with a rat's brain and ultimately a human brain), duplicating it precisely in the computer.
Even his critics concede that he has the intellect, money and equipment to make the attempt possible. The human brain is the most complicated object on the planet, and duplicating it in a computer will require an extremely powerful machine; Markram believes such a computer will be available by 2018. Since the capacity of computers currently doubles about every 18 months, if that trend continues, capacity will have grown by a factor of about 1,000 by 2018.
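As a quick arithmetic sketch (my own illustration, not from the original post): under a fixed 18-month doubling period, a growth factor of about 1,000 corresponds to roughly ten doublings, i.e. about 15 years of the trend continuing.

```python
# Capacity growth under a fixed doubling period (Moore's-law style).
def growth_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Multiplicative capacity growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# Ten doublings take 15 years at an 18-month period and give ~1,000x:
print(round(growth_factor(15)))  # -> 1024
```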
If Dr. Markram is successful, it will have not only philosophical, moral and ethical import but will also present some interesting challenges to religions, primarily: will it have a soul? And will it carry the "original sin" baggage? Of course the real tragedy would be if it turned out to be a Baptist.
Coincidentally, Markram's lab is just a few miles from where Mary Shelley wrote Frankenstein.


Replies to This Discussion

I think it highly unlikely any religious group would consider it to have a soul. We're also raising the bar higher than necessary for an ethical conflict. Why would it need to be human for destruction to give cause for alarm? Did we not send Michael Vick to prison for animal cruelty? If an artificial consciousness could be restored from a redundant copy, how would we define murder? Given the option of developing a rational consciousness or an irrational one, would the creation of the irrational one be unethical in and of itself? The problem with the gods we have imagined is that they seem to have no reluctance in creating imperfect creations.
Are you saying that the brain isn't just a bunch of chemical and electrical switches?
What is clearly flawed is his idea that, given sufficient complexity, consciousness will emerge.

Human intellect and consciousness are emergent qualities of complexity; if that complexity can be duplicated in a computer, it's not unreasonable that an intellect might emerge. It might be improbable, but not impossible.
Why is it silly? If it is aware of its situation, it has consciousness.
I doubt that, lacking a fully functional body, it would be aware of its situation the way we humans and animals are. Although a set of artificial sensory organs might give it a kind of consciousness. E.g., what would happen if we gave it better sight and hearing than we have, but no olfaction? Olfaction only? A fake organ that generates sensory hallucinations? Probably different types of 'consciousness'.
John D,
If the "brain in the box" demonstrates intelligence and awareness by convincingly interacting with human correspondence then the proof is in the pudding. To be clear about my view of Dr. Markram's project, I think it's an extreme long shot that a true intellect will emerge, however, having said that, a great deal of understanding about how the brain functions might be gleaned from the research.
Adding to the understanding of the phenomenological world in which we live is never a wasted effort. And if he is successful, it will truly be a groundbreaking event, one worthy of a Nobel Prize.
Agreed, not a complete waste of time, but a long long loooong shot!
Assuming he is not a charlatan, the project could produce worthwhile results. While I also doubt that a conscious, functioning intellect will be forthcoming, it is possible that a unique form of heuristic expert system could come out of it, and that would justify the investment.
"Someone really thinks that consciousness can exist in a mind without any external sensory and physical perceptions."

Absolutely not. He thinks that, by the time he can simulate a brain on a computer, simulating the neural input/output would be trivial.

"given sufficient complexity, consciousness will emerge."

This is a strawman argument. I don't know anybody who believes this.

"Does the good doctor expect to be able to ask it questions?"

Of course he does. You couldn't possibly simulate a brain without input/output.
By the way, I couldn't imagine a better name than 'Dr. Markram' for an evil psycho mad scientist.
It's worth a movie theme..
Yayy!!! Fantastic plan, but I doubt it would be a "successful" experience. Besides massive storage capacity, he also needs to "teach" that brain how to think...
Most likely consciousness/self-awareness arises after a great deal of sensations pile into each other (thus you need SOME sensory inputs, like someone else mentioned), triggering emotions later, AND memory coming after that, to THEN get the mind to develop the "self" model on top of ALL of that (and then dreams and tales/legends of a soul). The most he could get would be a mind without emotion, based on pure logic (hello Spock & HAL 9000). And no mind "wondering"... what am I?

But it's not a complete waste if you consider how much research on the brain can be done through this...




© 2018 Atheist Nexus. All rights reserved.