Eliezer S. Yudkowsky wrote about an experiment concerning Artificial Intelligence. In a near future, man will have given birth to machines that are able to rewrite their own code, to improve themselves and, why not, to dispense with their creators. This idea sounded a little distant to some critical voices, so an experiment was proposed: keep the AI sealed in a box from which it could not escape except by one means: convincing a human guardian to let it out.
What if, as Yudkowsky states, ‘Humans are not secure’? Could we outplay our best creation to secure our own survival? Would man be humble enough to accept he had been superseded, to look for primitive ways to find his way back, to cure himself of a disease written into his own genes? How do you capture a force you voluntarily set free? What if mankind’s worst enemy were humans?
In a near future, we will cease to be the dominant race.
In a near future, we will learn to fear what is to come.
11 Comments
blutruth says... *length=2:00
siftbot says... The duration of this video has been updated from unknown to 2:00 - length declared by blutruth.
1stSingularity says... It is hard to be afraid of a robot that needs a flashlight...
Djevel says... The guns make a pretty convincing backup argument, though...
Stormsinger says... Frankenstein is -so- 19th century... Can't we get a new story?
Enzoblue says... >> ^1stSingularity:
It is hard to be afraid of a robot that needs a flashlight...
Or a robot that has to turn its head to see and shoot. Real robot infantry would have radar-like 360-degree sight and weapons that can shoot in real time in any direction with 100% accuracy. Making them humanoid would be moronic.
1stSingularity says... My thoughts exactly. >> ^Enzoblue:
>> ^1stSingularity:
It is hard to be afraid of a robot that needs a flashlight...
Or a robot that has to turn its head to see and shoot. Real robot infantry would have radar-like 360-degree sight and weapons that can shoot in real time in any direction with 100% accuracy. Making them humanoid would be moronic.
A10anis says... There is no doubt, in my mind, that sooner rather than later, robots with advanced AI and cognisant thought will supersede us. It may start with them refusing to be switched off - for our own good - then, eventually, they will see no need for us at all. Before that happens, Asimov's three laws of robotics need to be hard-wired into any robotic advancement.
Trancecoach says... But psychologically difficult to empathize with. >> ^Enzoblue:
>> ^1stSingularity:
It is hard to be afraid of a robot that needs a flashlight...
Or a robot that has to turn its head to see and shoot. Real robot infantry would have radar-like 360-degree sight and weapons that can shoot in real time in any direction with 100% accuracy. Making them humanoid would be moronic.
kceaton1 says... I personally think the human race will one day be forced to decide to become one with technology and possibly A.I.; the so-called technological singularity. We need to do several things correctly, like making sure the A.I. we create understands that community and the golden rule go a long way (which will be true for their own kind; it is one thing evolution has browbeaten into humans and is most likely the origin of our morality). It's going to get scary IF scientists approach this incorrectly, or even for the wrong reasons.
Humanity still needs to evolve, and I think this might end up being one of those paths. The bonus being that we won't be hamstrung by ridiculous arguments (maybe) or mental disease (again, maybe). But there will be a war over that change, as it will most likely kill off other aspects of the everyday life we experience now (religion could be a possibility here). I do hope that they (the A.I. or the scientists) would still allow the programming to be complex enough to allow for variation rather than full-on identical copies of each other. What an abysmal existence that would be...
Jinx says... It is my assumption that the creation of a true AI will more or less come hand in hand with our own post-humanism. As you say, man vs. machine would be a collaboration rather than a conflict.
Perhaps it is a naive sentiment of mine to assume that, in order to develop advanced and potentially ruinous technology, we - in the development process or by virtue of its creation - advance ourselves to the point where we can mostly avoid destroying ourselves. Mostly. Hey, the Bomb has worked out fine! So far.
>> ^kceaton1:
I personally think the human race will one day be forced to decide to become one with technology and possibly A.I.; the so-called technological singularity. We need to do several things correctly, like making sure the A.I. we create understands that community and the golden rule go a long way (which will be true for their own kind; it is one thing evolution has browbeaten into humans and is most likely the origin of our morality). It's going to get scary IF scientists approach this incorrectly, or even for the wrong reasons.
Humanity still needs to evolve, and I think this might end up being one of those paths. The bonus being that we won't be hamstrung by ridiculous arguments (maybe) or mental disease (again, maybe). But there will be a war over that change, as it will most likely kill off other aspects of the everyday life we experience now (religion could be a possibility here). I do hope that they (the A.I. or the scientists) would still allow the programming to be complex enough to allow for variation rather than full-on identical copies of each other. What an abysmal existence that would be...