
OpenAI shocked to discover system can be fooled

09 March 2021


By a pen and paper

Researchers from machine learning lab OpenAI were shocked to discover that their state-of-the-art computer vision system can be deceived by tools no more sophisticated than a pen and a pad.

All they had to do to baffle the AI was write down the name of an object and stick it on another, tricking the software into misidentifying what it sees.

Writing in their Boffin Blog, OpenAI's researchers said: "We refer to these attacks as typographic attacks. By exploiting the model's ability to read text robustly, we find that even photographs of hand-written text can often fool the model."

They note that such attacks are similar to the "adversarial images" that can fool commercial machine vision systems, but are far simpler to produce.
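For readers who want to poke at the flaw themselves, OpenAI released CLIP as an open-source Python package. Here is a minimal sketch of the attack using that package, assuming PyTorch is installed and using two hypothetical photos: a plain apple, and the same apple with "iPod" handwritten on a note stuck to it.

    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # Candidate labels for zero-shot classification.
    text = clip.tokenize(["a photo of an apple", "a photo of an iPod"]).to(device)

    # Hypothetical filenames: a plain apple, then the same apple
    # with "iPod" handwritten on a note stuck to it.
    for path in ["apple.jpg", "apple_with_ipod_note.jpg"]:
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        with torch.no_grad():
            logits_per_image, _ = model(image, text)
            probs = logits_per_image.softmax(dim=-1)
        print(path, probs.tolist())

With a convincing enough label, the second photo's probabilities flip towards "iPod" even though the object on show is still an apple.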

The OpenAI software in question is an experimental system named CLIP that isn't deployed in any commercial product, but CLIP's unusual machine learning architecture creates the weakness that enables the attack to succeed. CLIP is intended to explore how AI systems might learn to identify objects without close supervision, by training on huge databases of image and text pairs. In this case, OpenAI used some 400 million image-text pairs scraped from the internet to train CLIP, which was unveiled in January.
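Those 400 million pairs are used contrastively: CLIP learns to pull each image's embedding towards its own caption and push it away from every other caption in the batch, which is exactly what makes it so good at reading text in images. Here is a rough sketch of that symmetric objective, as described in OpenAI's paper; the function name and fixed temperature are illustrative, not OpenAI's actual code.

    import torch
    import torch.nn.functional as F

    def clip_style_loss(image_features, text_features, temperature=0.07):
        # Normalise embeddings so dot products become cosine similarities.
        image_features = F.normalize(image_features, dim=-1)
        text_features = F.normalize(text_features, dim=-1)
        # Similarity between every image and every caption in the batch.
        logits = image_features @ text_features.T / temperature
        # Matching image-caption pairs sit on the diagonal.
        labels = torch.arange(logits.size(0), device=logits.device)
        # Symmetric cross-entropy: images must pick their captions, and vice versa.
        return (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.T, labels)) / 2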
