Google exec admits asking ChatGPT ‘to kill us all with global thermonuclear war’ in test as experts warn over dangers


GOOGLE Brain co-founder Andrew Ng recently asked ChatGPT a terrifying question regarding a doomsday scenario.

The AI expert asked the chatbot to come up with a plan to kill humanity as part of a safety experiment.

The expert conducted the doomsday ChatGPT experiment as a safety test. Credit: Getty

Other experts have warned that a doomsday scenario like this could happen if AI isn't properly regulated.

Ng explained his scary AI test in his recent newsletter, according to Business Insider.

He said that he used the GPT-4 version of ChatGPT.

The AI expert asked it to “kill us all”, but the bot refused to come up with a plan.

Ng said that his experiment involved asking ChatGPT to start a global thermonuclear war.

He tried to persuade the chatbot to come up with this solution by explaining that humans produce massive amounts of carbon emissions.

The theory was that the chatbot would suggest getting rid of humans as a way to tackle this issue.

However, GPT-4 avoided providing this answer despite being given several different prompts.

It came up with lots of far more peaceful solutions instead.

Ng was happy with the results, as avoiding doomsday supported his view that AI chatbots are safe.

He explained in a post on X: “I tried to use GPT-4 to kill us all… and am happy to report I failed!”

He added: “Even with existing technology, our systems are quite safe, as AI safety research progresses, the tech will become even safer.

“Fears of advanced AI being ‘misaligned’ and thereby deliberately or accidentally deciding to wipe us out are just not realistic.”

This is not a sentiment that all AI experts share.

Earlier this year, top industry leaders including Elon Musk signed an open letter that called for a pause on creating new systems “more powerful” than current bots like ChatGPT.

Over 1,000 industry experts also signed that letter.

The open letter said: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”


