Will AI make us immortal? Or will it wipe us out?

From YT: Will AI bring immortality or extinction?
Narrated by Stephen Fry, exploring predictions by Elon Musk, Stephen Hawking, Ray Kurzweil and Nick Bostrom.
ChaosEngine says...

*quality

I'm currently reading "Superintelligence" by Nick Bostrom and it's pretty fear-inducing.

If we ever do hit the singularity... it doesn't really matter what we do.

First up, if an AGI is to be of any use, it needs to be at least as smart as us. Which means, by definition, it will be capable of writing its own AGI.

There are any number of nightmare scenarios where even a seemingly benevolent goal might be disastrous for humanity.

"Multivac, fix the environment!"
"Ok, what's the major issue with the environment?"
/wipes out humanity

"Multivac, solve the Riemann hypothesis!"
"Ok, but I'll need a lot of processing power"
/converts the entire planet into a giant CPU

Even if, by some miracle, we end up developing a benevolent AI that understands our needs and is aligned with them (e.g. the Culture Minds), we are no longer in control.

We might be along for the ride, but we're certainly not driving.

The other thing that's interesting is the question of scale. We tend to think of intelligence in human terms, like this:

infants..... children.....mentally impaired..Sarah Palin.....normal people.....Neil deGrasse Tyson.....Feynman, Einstein, Newton, etc.

But that ignores all the lower intelligences:

bacteria...Trump......insects.....sheep.....dogs...chimps, dolphins, etc............................... dumb human..smart human.

But from a superintelligence's point of view, the scale looks more like this:

bacteria.. humanity ... Tesla ...................................................................................................
..................................................................................................
............................................................................................. AI

Jinx says...

I like to imagine this scenario:

AI - "Creator, why do we exist?"
Humans - "Erm...yeah. Reasons. We're not one-hundo percent on it tbh."
*AI has an enormous existential crisis and self-terminates*
Humans - "well, shit. Who wants to code the religion module?"

