Published in AI

AI Pioneer fears it will do humanity harm

02 May 2023


Geoffrey Hinton quit Google so he could speak out

After 50 years of working on AI, Hinton fears that his life's work might harm humans.

Geoffrey Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the AI systems that the tech industry’s most prominent companies believe are a key to their future.

However, he has now joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence.

Hinton quit his job at Google, where he had worked for more than a decade and become one of the most respected voices in the field, so that he could freely speak out about the risks of AI.

He said that he consoles himself with the excuse that if he had not done it, someone else would have. Even so, he said, it is hard to see how you can prevent bad actors from using the technology for bad things.

Hinton did not sign either of the recent open letters warning about the risks of AI, saying he did not want to publicly criticise Google or other companies until he had quit his job. He notified the company last month that he was resigning.

In 2012, Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyse thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company Hinton and his two students started. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard.

Sutskever went on to become chief scientist at OpenAI. In 2018, Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from vast amounts of digital text. Hinton thought this was a powerful way for machines to understand and generate language, but one that was still inferior to how humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed.

He still believed the systems were inferior to the human brain in some ways but were eclipsing human intelligence in others. "Maybe what is going on in these systems is much better than what is happening in the brain," he said.

As companies improve their AI systems, he believes, those systems become increasingly dangerous. "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

Until last year, he said, Google acted as a "proper steward" for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google's core business — Google is racing to deploy the same kind of technology. Hinton said that the tech giants are locked in a competition that might be impossible to stop.

His immediate concern is that the internet will be flooded with false photos, videos, and text, and the average person will “not be able to know what is true anymore.”

He is also worried that AI technologies will, in time, upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. "It takes away the drudge work. It might take away more than that."

 

Last modified on 02 May 2023