In September last year, Google's cloud unit explored using artificial intelligence to help a financial firm decide whom to lend money to, then turned the project down.
The project appeared well-suited for Google Cloud, whose expertise in developing AI tools that help in areas such as detecting abnormal transactions has attracted clients like Deutsche Bank, HSBC and BNY Mellon.
Google's unit anticipated AI-based credit scoring could become a market worth billions of dollars a year and wanted a foothold.
However, its ethics committee, a group of about 20 managers, social scientists and engineers who review potential deals, unanimously voted against the project at an October meeting, said Tracy Pizzo Frey, who sat on the committee as Google Cloud's managing director for Responsible AI.
The committee deemed the project too ethically dicey because the AI technology could perpetuate biases, such as those around race and gender.
Google has blocked new AI features analysing emotions, fearing cultural insensitivity, and it is not the only one. Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system.
These previously unreported decisions show Big Tech trying to balance the pursuit of lucrative AI systems with greater consideration of social responsibility.
Microsoft, for example, wrestled with whether its voice mimicry technology, which could restore impaired people's speech, might also enable political deepfakes, said Natasha Crampton, the company's chief responsible AI officer.
Rights activists, however, say decisions with potentially broad consequences for society should not be made by companies internally alone. They argue that ethics committees cannot be truly independent and that their public transparency is limited by competitive pressures.
Jascha Galaski, advocacy officer at Civil Liberties Union for Europe, views external oversight as the way forward, and U.S. and European authorities are indeed drawing up rules for the fledgling area.
If companies' AI ethics committees "really become transparent and independent – and this is all very utopian – then this could be even better than any other solution, but I don't think it's realistic," Galaski said.
The companies said they would welcome clear regulation on the use of AI, and that this was essential both for customer and public confidence, akin to car safety rules. They said it was also in their financial interests to act responsibly.
Among complex considerations to come, IBM told Reuters its AI Ethics Board has begun discussing how to police an emerging frontier: implants and wearables that wire computers to brains.
IBM Chief Privacy Officer Christina Montgomery said such neurotechnologies could help impaired people control movement but also raise concerns, such as the prospect of hackers manipulating thoughts.
Tech companies acknowledge that five years ago they were launching AI services such as chatbots and photo-tagging with few ethical safeguards, and tackling misuse or biased results with subsequent updates.
But as political and public scrutiny of AI failings grew, Microsoft in 2017 and Google and IBM in 2018 established ethics committees to review new services from the start.