by German Lopez
December 8, 2022

from NYTimes Website

[Image: An A.I. image generator produced this image when given the prompt "A distributed linguistic superbrain that takes the form of an A.I. chatbot." Credit: Kevin Roose, via DALL-E]


Social media's newest star is a robot: a program called ChatGPT that tries to answer questions like a person. Since its debut last week, many people have shared examples of what the bot can do.

New York magazine journalists told it to write what turned out to be a "pretty decent" story. Other users got it to write a solid academic essay on theories of nationalism, a history of the tragic but fictitious Ohio-Indiana War and some jokes.

 

It told me a story about an artificial intelligence program called Assistant that was originally set up to answer questions but soon led a New World Order that guided humanity to "a new era of peace and prosperity."

What is remarkable about these examples is their quality: a human could have written them.

 

And the bot is not even the best: OpenAI, the company behind ChatGPT, is reportedly working on a better model that could be released next year.

"A lot of the promised benefits of A.I. have been eternally five years away," my colleague Kevin Roose, who covers technology, told me.

 

"ChatGPT was a moment when a technology people had heard about finally became real to them."

In today's newsletter, I'll explain the potential benefits of artificial intelligence but also why some experts worry it could be dangerous.

The upside of artificial intelligence is that it might be able to accomplish tasks faster and more efficiently than any person can.

 

The possibilities are limited only by the imagination: self-driving and even self-repairing cars, risk-free surgeries, instant personalized therapy bots and more.

The technology is not there yet.

 

But it has advanced in recent years through what is called machine learning, in which bots comb through data to learn how to perform tasks.

 

In ChatGPT's case, it read a lot. And, with some guidance from its creators, it learned how to write coherently - or, at least, how to statistically predict what good writing should look like.
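
To make that idea concrete, here is a minimal sketch of statistical next-word prediction, written in Python. It is only a toy illustration of the general principle (a bigram model over an invented one-line corpus, with made-up names like predict_next), not how ChatGPT actually works; ChatGPT uses a large neural network trained on vastly more text.

    # Toy next-word predictor: count which word most often follows each
    # word in a tiny corpus, then predict the most frequent follower.
    # Illustrative only; not ChatGPT's actual mechanism.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

    # Tally, for every word, the words that come right after it.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the statistically likeliest word to follow `word`.
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # prints "cat" - it follows "the" most often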

 

There are already clear benefits to this nascent technology. It can help research and write essays and articles.

 

ChatGPT can also help write code, automating tasks that would normally take people hours.

Another example comes from a different program, Consensus. This bot combs through millions of scientific papers to find those most relevant to a given search and shares their major findings. A task that would take a journalist like me days or weeks is done in a couple of minutes.
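
For intuition, here is a minimal sketch, in Python, of the kind of relevance ranking such a search involves. Everything in it (the papers, the query, the score function) is invented for illustration; Consensus's actual methods are far more sophisticated.

    # Toy relevance ranking: score each paper by how many query words
    # appear in its text, then sort. Purely illustrative data and logic.
    papers = {
        "Paper A": "chatbots improve customer service response times",
        "Paper B": "machine learning models predict protein folding",
        "Paper C": "large language models generate coherent text",
    }

    query = "language models text"

    def score(text, query):
        # Count how many distinct query words appear in the text.
        words = set(text.lower().split())
        return sum(1 for w in query.lower().split() if w in words)

    # Rank papers by overlap with the query, most relevant first.
    ranked = sorted(papers, key=lambda p: score(papers[p], query), reverse=True)
    print(ranked[0])  # "Paper C" - it matches all three query words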

 

These are early days. ChatGPT still makes mistakes, such as telling one user that the only country whose name starts and ends with the same letter is Chad.
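
That kind of claim is easy to check with a few lines of code, which makes the mistake all the more striking. Here is a quick Python check over a short, hand-picked list of countries (illustrative, not exhaustive):

    # Which country names start and end with the same letter?
    # Short illustrative list only, not a full roster of countries.
    countries = ["Chad", "Albania", "Algeria", "Angola", "Argentina",
                 "Austria", "Poland", "Seychelles"]

    matches = [c for c in countries if c[0].lower() == c[-1].lower()]
    print(matches)
    # ['Albania', 'Algeria', 'Angola', 'Argentina', 'Austria', 'Seychelles']
    # "Chad" does not qualify at all: it starts with C and ends with d.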

But it is evolving very quickly.

 

Even some skeptics believe that general-use A.I. could reach human levels of intelligence within decades.

Despite the potential benefits, experts are worried about what could go wrong with A.I. For one, such a level of automation could take people's jobs.

 

This concern has emerged with automated technology before. But there is a difference between a machine that can help put together car parts and a robot that can think better than humans. If A.I. reaches the heights that some researchers hope for, it will be able to do almost anything people can, but better.

 

Some experts point to existential risks. One survey asked machine-learning researchers about the potential effects of A.I. Nearly half said there was a 10 percent or greater chance that the outcome would be "extremely bad (e.g., human extinction)." These are people saying that their life's work could destroy humanity.

 

That might sound like science fiction. But the risk is real, experts caution. "We might fail to train A.I. systems to do what we want," said Ajeya Cotra, an A.I. research analyst at Open Philanthropy. "We might accidentally train them to pursue ends that are in conflict with humans'."

Take one hypothetical example, from Kelsey Piper at Vox: A program is asked to estimate a number. It figures out that the best way to do this is to use more of the world's computing power. The program then realizes that human beings are already using that computing power. So it destroys all humans in order to estimate its number unhindered.

If that sounds implausible, consider that current bots already behave in ways their creators don't intend. ChatGPT users have come up with workarounds to make it say racist and sexist things, despite OpenAI's efforts to prevent such responses.

 

The problem, as A.I. researchers acknowledge, is that no one fully understands how this technology works, making it difficult to control for all possible behaviors and risks. Yet it is already available for public use.