The company has added support for 16-bit floating point arithmetic; previously its software supported only 32-bit floating point operations.
The change is part of Nvidia's broader improvements to its AI software: the smaller floating point format lets developers fit more data into the system for modelling. The company has also updated cuDNN, its CUDA Deep Neural Network library of common routines, to support 16-bit floating point operations.
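As an illustration of why the smaller format matters (this is not Nvidia's code, just a NumPy sketch): halving the width of each float halves the memory needed per value, so roughly twice as much data fits in the same GPU memory.

```python
# Compare the memory footprint of the same data at 32-bit and 16-bit precision.
# NumPy is used here purely for illustration; Nvidia's FP16 support lives in cuDNN.
import numpy as np

n = 1_000_000
values32 = np.ones(n, dtype=np.float32)   # 4 bytes per value
values16 = values32.astype(np.float16)    # 2 bytes per value

print(values32.nbytes)  # 4000000
print(values16.nbytes)  # 2000000
```

The trade-off is reduced numeric precision, which deep learning workloads typically tolerate well.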
Nvidia has upgraded its Digits software for designing neural networks. Digits version 2, released yesterday, comes with a graphical user interface, potentially making it accessible to programmers beyond the typical user base of academics and developers who specialize in AI.
Ian Buck, Nvidia's vice president of accelerated computing, said the previous version could be controlled only through the command line, which required knowledge of specific text commands and forced users to switch to another window to view results.
Digits can now use up to four processors when building a learning model. Because training is spread across multiple processors, Digits can build models up to four times as fast as the first version.
Nvidia is a big proponent of AI because deep learning requires the kind of heavy computational power its GPUs provide.
Nvidia first released Digits as a way to cut out a lot of the menial work it takes to set up a deep learning system.
One early user of Digits' multi-processor capabilities has been Yahoo, which found this new approach cut the time required to build a neural network for automatically tagging photos on its Flickr service from 16 days to 5 days.