
FTC asks US companies to use AI nicely

08 May 2023


Because there will be no problems if everyone is nice

America's consumer-protection agency, the FTC, is asking US companies to be nice when it comes to using customer data in AI systems.

The FTC has a division overseeing advertising practices. Its website includes a "business guidance" section with "advice on complying with FTC law." This week one of the agency's attorneys warned that the FTC "focuses on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers."

In a blog post entitled "The Luring Test: AI and the engineering of consumer trust," the FTC said that firms are starting to use AI tools in ways that can influence people's beliefs, emotions, and behaviour.

Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language, even when those answers are fictional.

The tendency to trust the output of these tools also comes in part from "automation bias," whereby people may be unduly trusting of answers from machines that seem neutral or impartial, the FTC said.

“It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed to use personal pronouns and emojis. People could easily be led to think they're conversing with something that understands them and is on their side,” the FTC said.

Concern about the malicious use of these tools goes well beyond the FTC's jurisdiction. But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment.

Companies thinking about novel uses of generative AI, such as customising ads to specific people or groups, should know that design elements which trick people into making harmful choices are a common feature in FTC cases, including recent actions relating to financial offers, in-game purchases, and attempts to cancel services.

“Manipulation can be deceptive or unfair when it causes people to act contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don't comprise a class of people protected by anti-discrimination laws,” the FTC said.

The FTC attorney also warns against paid placement within the output of a generative AI chatbot. ("Any generative AI output should distinguish clearly between what is organic and what is paid.")

"People should know if an AI product's response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know if they're communicating with a real person or a machine."

"Given these many concerns about using new AI tools, it's not the best time for firms to build or deploy them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. If the FTC calls and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look. "

So for now, the suggestion for companies is to play nice with their use of AI, rather than doing evil and hoping they won't get caught.

 
