
OpenAI's AI is no threat to humans, says study

01 February 2024


It's your plastic pal who's fun to be with

The tech company OpenAI has claimed that its latest and greatest AI software, GPT-4, is not dangerous to humans, even if it falls into the wrong hands.

The company, backed by Microsoft, said it did some tests to see how easy it would be to use GPT-4 to create biological weapons and found that it was no help at all.

In October, President Joe Biden ordered the Department of Energy to ensure AI systems don't cause chemical, biological, or nuclear disasters. That same month, OpenAI set up a "preparedness" team to look into the risks of AI as the technology becomes more capable.

The team's first study involved 50 biology experts and 50 students, split into two groups and given an alarming task: to create a biological threat. One group could use the internet and a particular version of GPT-4, one of the large language models that powers ChatGPT and can answer questions and write anything from poems to code.

The other group only had the internet to do the task. OpenAI's team wanted to see how they would grow or make an agent that could be used as a biological weapon and how they would release it to a target group of people.

The results were surprising: GPT-4 was no help at all. The AI either gave wrong or useless answers or refused to answer. The group that used GPT-4 did no better than the group that didn't, and some did worse because they wasted time getting GPT-4 to cooperate.

OpenAI said this shows that GPT-4 poses "at most" a slight risk of helping people create biological threats. The company is committed to making AI safe and beneficial for humanity and will keep testing GPT-4 for any potential harm.

But some experts need more convincing. They say that GPT-4 is still a powerful tool that malicious actors could misuse and that OpenAI should be more transparent and responsible about its research. They also warn that future versions of GPT-4 could be more dangerous and that the world needs better laws and regulations to prevent AI disasters.
