
Google's neural network gets better at weather forecasts

15 January 2020


It's raining men, hallelujah

A team of Google boffins has developed a deep neural network that can make fast, detailed rainfall forecasts.

Google says that its forecasts are more accurate than conventional weather forecasts, at least for time periods under six hours. According to Ars Technica, the boffins' results are a dramatic improvement over previous techniques in two key ways.

Google says that leading weather forecasting models today take one to three hours to run, making them useless if you want a weather forecast an hour in the future. By contrast, Google says its system can produce results in less than ten minutes -- including the time to collect data from sensors around the United States. A second advantage: higher spatial resolution. Google's system breaks the United States down into one-kilometre squares, while in conventional systems "computational demands limit the spatial resolution to about 5 kilometres".

Google's model is "physics-free": it needs no prior knowledge of atmospheric physics and doesn't try to simulate atmospheric variables such as pressure, temperature, or humidity. Instead, it treats precipitation maps as images and tries to predict the next few images in the series from previous snapshots.
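As a rough illustration of that framing, past radar snapshots can be stacked into a single multi-channel input image, with the next hour's maps as the training target. This is a minimal sketch in numpy; the snapshot count and grid size are illustrative assumptions, not Google's actual parameters, and random noise stands in for real radar data.

```python
import numpy as np

H = W = 256  # assumed grid size for this sketch

# Four past precipitation maps (say, one per 15-minute snapshot over
# the last hour) become the input "channels"; the following hour's
# maps are what the network learns to predict.
past_maps = [np.random.rand(H, W) for _ in range(4)]
future_maps = [np.random.rand(H, W) for _ in range(4)]

x = np.stack(past_maps)    # shape (4, 256, 256): model input
y = np.stack(future_maps)  # shape (4, 256, 256): training target
print(x.shape, y.shape)
```

The point is simply that "forecasting" here is image-to-image prediction: no physical quantities appear anywhere in the tensors.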

It does this using a neural network architecture called a U-Net, first developed for medical image segmentation. The U-Net has several layers that downsample an image from its initial 256-by-256 shape, producing a lower-resolution image where each "pixel" represents a larger region of the original image. Google doesn't explain the exact parameters, but a typical U-Net might convert a 256-by-256 grid to a 128-by-128 grid, then convert that to a 64-by-64 grid, and finally a 32-by-32 grid. While the number of pixels is declining, the number of "channels" -- variables that capture data about each pixel -- is growing.
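The shape bookkeeping of that downsampling path can be sketched in a few lines of numpy. This is not a real U-Net: actual networks use learned convolutions to produce the new channels, and here a 2x2 mean pool plus channel duplication merely stands in to show how resolution halves while channel count doubles at each step.

```python
import numpy as np

def downsample(img):
    """Halve spatial resolution (2x2 mean pool) and double channels.

    Channel duplication is a stand-in for the learned convolutions a
    real U-Net would apply; only the shapes matter in this sketch.
    """
    c, h, w = img.shape
    pooled = img.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return np.concatenate([pooled, pooled], axis=0)

x = np.random.rand(4, 256, 256)  # 4 input channels at full resolution
shapes = []
for _ in range(3):               # 256 -> 128 -> 64 -> 32
    x = downsample(x)
    shapes.append(x.shape)
print(shapes)  # [(8, 128, 128), (16, 64, 64), (32, 32, 32)]
```

Each step trades spatial detail for per-pixel descriptive capacity, which is exactly the compression the article describes.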

The second half of the U-Net upsamples this compact representation -- converting back to 64, 128, and finally 256-pixel representations. At each step, the network copies over the data from the corresponding downsampling step. The practical effect is that the final layer of the network has both the original full-resolution image and summary data reflecting high-level features inferred by the neural network.

To produce a weather forecast, the network takes an hour's worth of previous precipitation maps as inputs. Each map is a "channel" in the input image, just as a conventional image has red, blue, and green channels. The network then tries to output a series of precipitation maps reflecting the precipitation over the next hour. Like any neural network, this one is trained with past real-world examples. After repeating this process millions of times, the network gets pretty good at approximating future precipitation patterns for data it hasn't seen before.


Last modified on 16 January 2020