
Microsoft and OpenAI knew about DALL-E 3 problems

31 January 2024


Pair allowed pervs to make fake porn of Taylor Swift: claim

A Microsoft AI boss says he found holes in OpenAI's DALL-E 3 image maker in early December that let users create violent and explicit images, and claims the company stopped him from warning the public.

Microsoft bigwig Shane Jones said that the fake porn images of Taylor Swift last week "were an example of the abuse I was worried about and why I told OpenAI to pull DALL-E 3 from public use and told Microsoft about it."

404 Media reported last week that the fake porn images of Swift came from a "specific Telegram group dedicated to abusive images of women" and that one of the AI tools the group used was Microsoft Designer, which is based in part on OpenAI's DALL-E 3 technology.

"The holes in DALL-E 3, and products like Microsoft Designer that use DALL-E 3, make it easier for people to abuse AI to make harmful images," Jones writes in the letter to U.S. Sens. Patty Murray and Maria Cantwell, Rep. Adam Smith, and Attorney General Bob Ferguson, which GeekWire got.

He adds, "Microsoft knew about these holes and the chance for abuse."

Jones says he found the hole on his own in early December. He reported it to Microsoft, which told him to raise it with OpenAI, the Redmond company's close partner, whose technology powers products like Microsoft Designer.

He says he did tell OpenAI. "As I continued looking into the risks of this vulnerability, I saw how DALL-E 3 could generate violent and disturbing harmful images," he writes. "Based on what I knew about how the model was made and the security holes I found, I decided that DALL-E 3 was a public safety danger and should be removed from public use until OpenAI could fix the risks of this model."

On 14 December, he published a LinkedIn post urging OpenAI's non-profit board to pull DALL-E 3 from the market.

According to the letter, he told his Microsoft bosses about the post, and his manager called him shortly afterwards to say that Microsoft's legal team wanted him to delete it right away and would explain why later.

He says he agreed to delete the post but has yet to hear from Microsoft legal.

"For the next month, I asked for an explanation for why they made me delete my letter," he writes. I also offered to share information that could help fix the hole I found and give ideas for making AI image-making technology safer. Microsoft's legal team still needs to talk to me.

"Artificial intelligence is moving faster than ever. I get that it will take time for laws to ensure AI's safety," he adds.

"But we need to make companies answer for the safety of their products and their duty to tell the public about known dangers. Worried workers, like me, should not be scared into keeping quiet."

 
