The AI market will be worth an estimated $59.8 billion by 2025 -- up from just $1.4 billion last year. Nearly all major players in the IT industry have joined the race to develop AI chips and applications with the aim of establishing an early presence.
The mainstream products in the AI chip market are currently those applied to machine learning and deep neural networks, including ASICs, GPUs, FPGAs and CPU chipsets. Players in the segment include Nvidia, Intel, Qualcomm, Google, Apple, Microsoft, Amazon, Facebook, IBM, Samsung, Huawei, Baidu and Tencent.
Nvidia is doing rather well with its GPU series and is extending its architecture across a growing range of platforms: Tesla accelerators for cloud and supercomputing, Jetson for robots and drones, and Drive PX for automobiles. All of them share the same underlying architecture to accelerate deep-learning algorithms.
Nvidia CEO Jensen Huang said that his company has kept improving the design, system architecture, compiler and algorithms of its GPU solutions, and that deep neural network performance has as a result been boosted 50-fold within three years, a pace well beyond that set by Moore's Law.
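The speedups GPUs deliver for deep learning come largely from parallelizing the dense matrix multiplications at the heart of neural networks. A minimal NumPy sketch of a two-layer forward pass, purely illustrative and not part of Nvidia's stack, shows the kind of operation being accelerated:

```python
import numpy as np

def relu(x):
    # Rectified linear activation, a common deep-learning nonlinearity
    return np.maximum(x, 0.0)

def forward(x, w1, b1, w2, b2):
    # Two dense layers: the matrix products here are exactly the
    # workloads that GPU architectures parallelize for deep learning.
    h = relu(x @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 128))       # batch of 32 input vectors
w1 = rng.standard_normal((128, 64)); b1 = np.zeros(64)
w2 = rng.standard_normal((64, 10));  b2 = np.zeros(10)

out = forward(x, w1, b1, w2, b2)
print(out.shape)  # (32, 10): one 10-way score vector per input
```

The layer sizes and weights here are arbitrary; in practice frameworks dispatch these same products to GPU kernels rather than NumPy.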
Intel is stepping up its AI roadmap, ranging from network edges to datacenters, with AI chip platforms including Xeon and Xeon Phi processors and FPGA accelerators optimized for specific workloads. The company has completed tests of its Lake Crest AI chipset, designed to speed up neural network operations and improve deep-learning efficiency.
Intel also has the Myriad X vision processing unit, from its Movidius acquisition, which can be fitted in drones, smart cameras or AR (augmented reality) devices to sense and understand fast-changing external environments and facilitate interaction and learning.
Another contender is Qualcomm, with its newly released Neural Processing Engine (NPE): a software development kit (SDK) that helps developers optimize AI performance on the firm's Snapdragon 600 and 800 series processors and supports AI frameworks such as TensorFlow, Caffe and Caffe2.
NPE can manage image recognition, scene detection, camera filters, and photo retouching. Facebook now uses it to accelerate the AR features applied to photos and live video in its smartphone apps.
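Tasks such as scene detection amount to running a trained classifier over features extracted on-device. A hedged sketch of the final classification step, with hypothetical scene labels and random stand-in weights rather than Qualcomm's actual SDK API:

```python
import numpy as np

def softmax(z):
    # Convert raw scores into a probability distribution
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(features, weights, labels):
    # Score a feature vector against per-class weights and return the
    # most likely label, as an on-device scene detector might after a
    # hardware-accelerated feature-extraction stage.
    probs = softmax(features @ weights)
    return labels[int(np.argmax(probs))]

labels = ["indoor", "outdoor", "portrait"]   # hypothetical scene classes
rng = np.random.default_rng(1)
weights = rng.standard_normal((16, 3))       # stand-in for trained weights
features = rng.standard_normal(16)           # stand-in for extracted features
print(classify(features, weights, labels))
```

In a real deployment the feature extraction and matrix products would run on the DSP or GPU via the NPE runtime; only the shape of the computation is shown here.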
Google has its Tensor Processing Unit (TPU) chipset for deep-learning workloads, targeting AI developers and open to commercial and academic users through its cloud services. Huawei has recently unveiled its Kirin 970 SoC, claimed to be the world's first AI mobile chip fitted with a neural processing unit (NPU).
The firm's new-generation flagship smartphone models, the Mate 10 and Mate 10 Pro, set to hit the market in October, will carry the Kirin 970 chipset, marking Huawei's official entry into the AI arena. Baidu and Tencent are also actively developing customized AI chips.
AMD, meanwhile, has its Vega GPUs to run cutting-edge artificial intelligence and machine learning tasks, the kinds that fuel Siri and Alexa and that corporate giants such as GE use to analyze "big data" streams.
While investors are certainly optimistic about AMD, there is little proof at the moment that the company can overtake Nvidia in AI. Nvidia is already a leader in self-driving car technology, is the go-to GPU maker for many AI servers, and has a clear understanding of its market potential in the AI space.