
AI can't predict some things

02 April 2020


Still a role for humans

It appears that AI is about as accurate as a tarot reader when it comes to predicting certain life outcomes.

A paper coauthored by 112 researchers across 160 data and social science teams found that AI and statistical models, when used to predict six life outcomes for children, parents, and households, weren't very accurate even when trained on 13,000 data points from over 4,000 families.

It was rubbish at predicting all six of the outcomes studied, namely GPA, grit, eviction, material hardship, and caregiver layoff and job training, so if governments try using AI to predict these things, humanity could be in trouble. The researchers said the work is a cautionary tale about the use of predictive modelling, especially in the criminal justice system and in social support programmes.

Co-lead author Matt Salganik, a professor of sociology at Princeton and interim director of the Center for Information Technology Policy at the Woodrow Wilson School of Public and International Affairs, said: "Here's a setting where we have hundreds of participants and a rich data set, and even the best AI results are still not accurate. These results show us that machine learning isn't magic; there are clearly other factors at play when it comes to predicting the life course."

The study, which was published this week in the journal Proceedings of the National Academy of Sciences, is the fruit of the Fragile Families Challenge, a multi-year collaboration that recruited researchers to predict the same six outcomes from the same data. 457 groups applied, of which 160 were selected to participate, and their predictions were scored with an error metric that assessed their ability to predict held-out data (i.e., data held by the organiser and not available to the participants).
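For readers unfamiliar with that kind of set-up, here is a minimal sketch of a held-out evaluation. It is illustrative only: the synthetic data, ridge model, and mean squared error metric are stand-ins, not the challenge's actual data or scoring rule.

```python
# Sketch of a held-out evaluation: models are fitted on the training split,
# then scored on rows whose outcomes only the organiser can see.
# All data and names here are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 50))            # stand-in for survey features on ~4,000 families
y = 0.3 * X[:, 0] + rng.normal(size=4000)  # stand-in for one outcome, e.g. GPA

# The organiser withholds the outcomes for the held-out rows from participants.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = Ridge().fit(X_train, y_train)
error = mean_squared_error(y_holdout, model.predict(X_holdout))
print(f"held-out error: {error:.3f}")
```

The point of scoring on withheld data is that a team cannot simply memorise the answers; a low error only comes from genuinely generalising to families the model has never seen.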
