Chess
Add news
News

The case that AI threatens humanity, explained in 500 words

0 5

The short version of a big conversation about the dangers of emerging technology.

Tech superstars like Elon Musk, AI pioneers like Alan Turing, top computer scientists like Stuart Russell, and emerging-technologies researchers like Nick Bostrom have all said they think artificial intelligence will transform the world — and maybe annihilate it.

So: Should we be worried?

Here’s the argument for why we should: We’ve taught computers to multiply numbers, play chess, identify objects in a picture, transcribe human voices, and translate documents (though for the latter two, AI still is not as capable as an experienced human). All of these are examples of “narrow AI” — computer systems that are trained to perform at a human or superhuman level in one specific task.

We don’t yet have “general AI” — computer systems that can perform at a human or superhuman level across lots of different tasks.

Most experts think that general AI is possible, though they disagree on when we’ll get there. Computers today still don’t have as much power for computation as the human brain, and we haven’t yet explored all the possible techniques for training them. We continually discover ways we can extend our existing approaches to let computers do new, exciting, increasingly general things, like winning at open-ended war strategy games.

But even if general AI is a long way off, there’s a case that we should start preparing for it already. Current AI systems frequently exhibit unintended behavior. We’ve seen AIs that find shortcuts or even cheat rather than learn to play a game fairly, figure out ways to alter their score rather than earning points through play, and otherwise take steps we don’t expect — all to meet the goal their creators set.

As AI systems get more powerful, unintended behavior may become less charming and more dangerous. Experts have argued that powerful AI systems, whatever goals we give them, are likely to have certain predictable behavior patterns. They’ll try to accumulate more resources, which will help them achieve any goal. They’ll try to discourage us from shutting them off, since that’d make it impossible to achieve their goals. And they’ll try to keep their goals stable, which means it will be hard to edit or “tweak” them once they’re running. Even systems that don’t exhibit unintended behavior now are likely to do so when they have more resources available.

For all those reasons, many researchers have said AI is similar to launching a rocket. (Musk, with more of a flair for the dramatic, said it’s like summoning a demon.) The core idea is that once we have a general AI, we’ll have few options to steer it — so all the steering work needs to be done before the AI even exists, and it’s worth starting on today.

The skeptical perspective here is that general AI might be so distant that our work today won’t be applicable — but even the most forceful skeptics tend to agree that it’s worthwhile for some research to start early, so that when it’s needed, the groundwork is there.


Sign up for the Future Perfect newsletter. Twice a week, you’ll get a roundup of ideas and solutions for tackling our biggest challenges: improving public health, decreasing human and animal suffering, easing catastrophic risks, and — to put it simply — getting better at doing good.

Загрузка...

Comments

Комментарии для сайта Cackle
Загрузка...

More news:

Read on Sportsweek.org:

Other sports

Sponsored