Published in News

US AI Safety Institute hires a doomer to run things

18 April 2024


Must start the day negatively

The US AI Safety Institute, a branch of the National Institute of Standards and Technology (NIST), has unveiled its top brass. Paul Christiano, an ex-OpenAI boffin who has gone on record as saying that there is a 50 per cent chance AI will wipe out humanity, is taking responsibility for US AI safety.

While he does not deal solely in doom-and-gloom quotes, Christiano is famed for his work on a nifty AI safety technique called reinforcement learning from human feedback (RLHF).

Some are fretting that by giving the keys to an "AI doomer," NIST might be fanning the flames of non-scientific natter, which critics reckon is pure guesswork.

NIST's staff aren't happy with the choice. A spicy VentureBeat piece last month dished the dirt from two hush-hush sources, claiming that Christiano's "AI doomer" rep has got NIST's staff in a tizzy, with some threatening to chuck their jobs. They're worried that Christiano's ties to effective altruism and "long-termism" could muddle the institute's straight-shooting rep.

NIST's bread and butter is to push science forward, jazzing up US innovation and competitiveness by honing measurement science, standards, and tech to boost economic security and our way of life. Effective altruists are all about using evidence and smarts to do the most good, while long-termists reckon we ought to be doing loads more for the generations of tomorrow, both views being a bit more heart than hard facts.

On the Bankless podcast, Christiano let slip last year that he reckons there's a "10-20 per cent chance of AI takeover" that could see us pushing up daisies, and "overall, maybe you're looking at a 50-50 chance of doom once you've got AI systems on par with humans."

He said, "The most likely way we kick the bucket isn't some AI surprise attack—it's if we've let AI loose everywhere... [And] if heaven forbid, all these AI systems were out to get us, they'd do us in."

As the new AI safety boss, Christiano will watch for AI shenanigans. According to the Department of Commerce's press release, he's set to "design and run tests on cutting-edge AI models, focusing on model evaluations for capabilities of national security concern," guide the evaluation process, and whip up "risk mitigations to beef up frontier model safety and security."

Last modified on 18 April 2024