In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimised microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as CPU and GPU cores, memory, and peripheral controllers actually sit on the physical silicon die.
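For a sense of what "optimised" means here: floorplan quality is commonly scored with proxy metrics such as half-perimeter wirelength (HPWL), the summed bounding-box size of each net of wires connecting blocks. The following toy Python sketch shows the idea; the block names, coordinates, and netlist are invented for illustration and this is not Google's code.

```python
# Toy illustration of a standard floorplanning metric: half-perimeter
# wirelength (HPWL). Lower HPWL roughly means shorter wires on the die.
# All names and positions below are made up for this example.

def hpwl(placement, nets):
    """Sum, over all nets, the half-perimeter of each net's bounding box."""
    total = 0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Hypothetical placement: block name -> (x, y) grid position on the die.
placement = {"cpu": (0, 0), "gpu": (4, 0), "mem": (0, 3), "io": (4, 3)}
# Hypothetical nets: groups of blocks that must be wired together.
nets = [["cpu", "mem"], ["cpu", "gpu", "io"]]

print(hpwl(placement, nets))  # prints 10: net bounding boxes of 3 and 7
```

A floorplanner, whether human-driven or learning-based, searches over placements to drive metrics like this down while respecting constraints such as block overlap and timing.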
The search engine outfit said that it used AI software to design its homegrown TPU chips that accelerate AI workloads. Basically, this meant it was using machine learning to make its other machine-learning systems run faster.
Floorplans are a pain to create, involving a mix of manual effort and automation using chip design applications. Google thought its reinforcement-learning approach would produce designs better than those made by human engineers using industry tools alone.
Imagine the gasps when Google announced it could churn out chip floorplans that were superior or comparable to those produced by humans in all key metrics, in just six hours.
However, Google's claim that its model outperforms humans has been challenged by a team at the University of California, San Diego (UCSD).
Top boffin Andrew Kahng spent months reverse engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model's inner workings, citing commercial sensitivity, so the UCSD team had to work out how to build a complete version of the pipeline to verify the Googlers' findings.
Eventually they recreated Google's approach, referred to as circuit training (CT) in their study, but found it performed worse than humans using traditional industry methods and tools.
Notably, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip's logic gates, which the web giant's reinforcement-learning system then optimised.

Having that initial placement information can significantly enhance CT outcomes.
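UCSD's point, in miniature: an optimiser that only accepts improving moves can never end up worse than the placement it starts from, so a strong seed arrangement from commercial tools directly raises the floor on result quality. Below is a hedged Python sketch of that idea using greedy swap-based refinement of HPWL; the netlist, coordinates, and greedy strategy are invented for illustration and are not Google's or UCSD's method.

```python
# Illustration of why a good initial placement matters: a hill-climbing
# refiner only ever improves on its seed. Names and layout are invented.
import itertools

def hpwl(placement, nets):
    """Half-perimeter wirelength: summed net bounding-box half-perimeters."""
    total = 0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def improve(placement, nets):
    """Greedily swap pairs of block positions while HPWL strictly improves."""
    best = dict(placement)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(best, 2):
            trial = dict(best)
            trial[a], trial[b] = trial[b], trial[a]
            if hpwl(trial, nets) < hpwl(best, nets):
                best, improved = trial, True
    return best

# Hypothetical netlist: cpu wires to mem, gpu wires to io.
nets = [["cpu", "mem"], ["gpu", "io"]]
# A deliberately bad seed: connected blocks placed at opposite corners.
bad_seed = {"cpu": (0, 0), "mem": (9, 9), "gpu": (0, 9), "io": (9, 0)}

refined = improve(bad_seed, nets)
print(hpwl(bad_seed, nets), "->", hpwl(refined, nets))  # prints: 36 -> 18
```

Seeding the same refiner with an already-good arrangement would start it at or near that lower cost, which is why the UCSD team flagged the undisclosed Synopsys starting placement as a potential head start.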
The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalise it for fabrication. Google argued this was necessary whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and its model deserved credit for the optimised end product.
But UCSD pointed out that there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model. Synopsys' tools may have given the model such a head start that the AI system's true capabilities were questionable.
Some researchers are now asking Nature to review the original paper in light of the new research.
The lead authors of Google's paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team's work may not be an accurate implementation of their method. They pointed out that Prof Kahng's group obtained worse results because it didn't pre-train the model on any data at all.
The UCSD group, however, said it didn't pre-train its model because doing so would require access to Google's proprietary data. The group claimed, though, that its software had been verified by two engineers at Google, who were listed as co-authors of the Nature paper. Prof Kahng is presenting his team's study at this year's International Symposium on Physical Design conference.