Published in PC Hardware

Google comes up with a new way to do chip floorplanning

10 June 2021


Feng Shui for chips

Google boffins have come up with a new way of designing the physical layout of a computer chip.

According to a paper published in the journal Nature, Google researchers applied a deep reinforcement learning approach to chip floorplanning. The result was a new technique that “automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area”.

In other words, the researchers have built a reinforcement learning method capable of generalising across chips: it can learn from experience to become both better and faster at placing new chips.

Training AI-driven design systems that generalise across chips is challenging because it requires learning to optimise the placement of all possible chip netlists (graphs of circuit components such as memory blocks and standard cells, including logic gates) onto all possible canvases.

The researchers’ system aims to place a “netlist” graph of logic gates, memory, and more onto a chip canvas, optimising power, performance, and area (PPA) while adhering to constraints on placement density and routing congestion. The graphs range in size from millions to billions of nodes grouped in thousands of clusters, and evaluating the target metrics typically takes from hours to over a day.
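For a rough sense of the kind of proxy metric such a placer optimises, here is a minimal sketch of half-perimeter wirelength (HPWL), a standard stand-in for routed wirelength in placement tools. The data layout and names are illustrative, not taken from Google's code.

```python
# Illustrative sketch: half-perimeter wirelength (HPWL), a common proxy
# for routed wirelength in placement tools. Names and data layout are
# hypothetical, not taken from Google's implementation.

def hpwl(nets, positions):
    """Sum, over all nets, of the half-perimeter of the bounding box
    enclosing every pin in the net.

    nets:      list of nets, each a list of component names
    positions: dict mapping component name -> (x, y) location on the canvas
    """
    total = 0.0
    for net in nets:
        xs = [positions[c][0] for c in net]
        ys = [positions[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total


# Toy usage: three components connected by two nets.
positions = {"mem0": (0.0, 0.0), "alu0": (3.0, 4.0), "io0": (1.0, 1.0)}
nets = [["mem0", "alu0"], ["alu0", "io0"]]
print(hpwl(nets, positions))  # 7.0 + 5.0 = 12.0
```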

Starting with an empty chip, the Google team’s system places components sequentially until it completes the netlist. To guide the system in selecting which parts to place first, elements are sorted by descending size; placing larger components first reduces the chance that no feasible placement remains for them later.
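As a toy illustration of that "largest first" ordering (and only of the ordering: the real system uses a learned policy to choose each location), a sequential placer might look like the hypothetical sketch below.

```python
# Toy sketch of sequential placement with components sorted by descending
# size, as described above. The real system uses a learned policy to pick
# each location; here a greedy "first free cell" rule stands in for it.

def place_sequentially(components, grid_w, grid_h):
    """components: dict name -> size (e.g. area); returns name -> (x, y)."""
    order = sorted(components, key=lambda c: components[c], reverse=True)
    free = [(x, y) for y in range(grid_h) for x in range(grid_w)]
    placement = {}
    for name in order:
        if not free:
            raise RuntimeError(f"no feasible location left for {name}")
        placement[name] = free.pop(0)  # a learned policy would choose here
    return placement


print(place_sequentially({"sram": 64, "mac_array": 256, "ctrl": 4}, 4, 4))
# mac_array (largest) is placed first, then sram, then ctrl.
```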

Training the system required creating a dataset of 10,000 chip placements. The input is the state associated with a given placement, and the label is the reward for that placement (derived from its wirelength and congestion).
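To make the labelling step concrete, one hypothetical way to collapse those proxy metrics into a single reward label is a negative weighted sum, as in the sketch below; the weighting and scale are illustrative, not Google's published values.

```python
# Hypothetical sketch of turning proxy metrics into a scalar reward label,
# assuming a weighted combination of wirelength and congestion (the weight
# and normalisation are illustrative, not Google's published values).

def reward(wirelength, congestion, congestion_weight=0.5):
    """Higher is better, so penalise both proxies with a negative sign."""
    return -(wirelength + congestion_weight * congestion)


# Example: a placement with lower wirelength and congestion earns a
# higher (less negative) reward label.
print(reward(wirelength=120.0, congestion=8.0))   # -124.0
print(reward(wirelength=100.0, congestion=4.0))   # -102.0
```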

The researchers built the dataset by first picking five different chip netlists, then applying an AI algorithm to create 2,000 diverse placements for each one.

Pre-training the system took 48 hours on an Nvidia Volta graphics card and 10 CPUs, each with 2GB of RAM. Fine-tuning initially took up to six hours, but in later benchmarks, applying the pre-trained system to a new netlist without any fine-tuning generated a placement in less than a second on a single GPU.

In one test, the Google researchers compared their system’s recommendations with a manual baseline: the production design of a previous-generation TPU chip created by Google’s TPU physical design team. Both the system and the human experts consistently generated viable placements that met timing and congestion requirements, but the AI system also outperformed or matched manual placements in area, power, and wirelength while taking far less time to meet design criteria.
