2 Comments
ChaosEngine says... As I've said before, even if we somehow wind up in a best-case scenario of a benevolent superintelligent AI that shares our goals, we're still just pets at that point.
You might love your dog, but you don't let it make decisions for you.
But that's extremely unlikely. Max talks about "letting" the AI decide at one point in the video. The verb "let" implies a degree of control on our part. If we get a superintelligent AI, we won't be "letting" it do anything, any more than apes "let" humans do something. Maybe on a short timescale an individual ape might force a human to do something (e.g. a gorilla makes a human run away), but in real terms, apes don't get a say if we decide to do something.
Basically, as soon as we get superintelligent AI*, the world will be unrecognisable.
* I mean a true superintelligent AGI: something that is smarter than us and therefore, by definition, able to write a better version of itself.
vil says... Moral of the story: be nice to your dog, and write nice code.
Unfortunately, like most things, AI will first be used to fight other people's AIs.