
Microsoft's racist Twitter bot gets Swift response

12 September 2019


Claimed it used her name in vain

Apple fangirl Taylor Swift threatened to sue Microsoft over a racist AI chatbot which used Twitter.

In 2016, Vole was incredibly embarrassed when its AI chatbot learnt to be a bigot. However, what was not known at the time was that Taylor Swift threatened legal action because the bot's name was Tay.

Tay was a social media chatbot geared toward teens, first launched in China before adopting the three-letter moniker when it moved to the US. The bot, however, was programmed to learn how to talk based on Twitter conversations. In less than a day, the automatic responses the chatbot tweeted had Tay siding with Hitler, promoting genocide, and just generally hating everybody. Microsoft immediately removed the account and apologised.

The popular-beat combo singer believed that since she was also known as Tay, her fans could have confused her with the racist bot. It is an easy mistake to make: when you see the name Tay, you automatically think of Taylor Swift, in the same way that when you see a rounded rectangle, you always think of Apple.

When the bot was reprogrammed, it was relaunched as Zo, a name less likely to hack off a pop diva if the bot started saying that the Holocaust never happened, or that the iPhone 11 was pricey.

The story came out in Tools and Weapons, a book by Microsoft president Brad Smith and Carol Ann Browne, the company's communications director. According to The Guardian, the singer's lawyer threatened legal action over the chatbot's name before the bot broke bad. The singer claimed the name violated both federal and state laws. Rather than get into a legal battle with the singer, Smith writes, the company instead started considering new names.
