Eliezer S. Yudkowsky wrote about an experiment concerning Artificial Intelligence. In the near future, man will have given birth to machines able to rewrite their own code, to improve themselves, and, why not, to dispense with us. This idea sounded a little far-fetched to some critical voices, so an experiment was devised: keep the AI sealed in a box from which it could escape by only one means: convincing a human guardian to let it out.
What if, as Yudkowsky states, ‘Humans are not secure’? Could we outplay our best creation to ensure our own survival? Would man be humble enough to accept that he had been superseded, to look for primitive ways to find himself again, to cure himself of a disease written into his own genes? How do you capture a force you voluntarily set free? What if mankind’s worst enemy were humans?
In the near future, we will cease to be the dominant race.
In the near future, we will learn to fear what is to come.