
Intel Is Going All-In to Dominate AI Chipset World

This market research report was originally published at Tractica’s website. It is reprinted here with the permission of Tractica.

The AI revolution started in 2012, when the AlexNet neural network surpassed the accuracy of all previous classic computer vision techniques, and the industry has not looked back since. AI algorithms are compute-intensive by nature, and the need to accelerate them in hardware has long been recognized, with more than 100 companies jumping in with their own chipsets.

Intel vs. NVIDIA

Intel is currently locked in a battle with NVIDIA to become the world’s dominant AI chipset company. While NVIDIA’s focus is on discrete graphics processing units (GPUs) for AI acceleration, Intel is focusing on central processing units (CPUs) and heterogeneous computing. NVIDIA unifies its software development environment via CUDA, while Intel has embarked on software unification across its different architectures via OpenVINO and oneAPI. All of the new chipsets are expected to be supported by the OpenVINO Toolkit at launch, making it easier for developers to switch environments.
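
As a rough illustration of that portability promise, the sketch below uses the OpenVINO Inference Engine Python API (circa the 2020-era releases; this is not an official Intel sample, and the model file names and device availability are assumptions) to retarget the same network by changing only a device string.

```python
# Minimal sketch: compiling one OpenVINO IR model for several Intel targets.
# Paths and device availability are placeholders; API details vary by release.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical IR files
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
dummy_input = np.random.rand(*input_shape).astype(np.float32)

# The same network targets a CPU, an integrated GPU, or a Movidius VPU
# simply by changing the device name passed to load_network().
for device in ("CPU", "GPU", "MYRIAD"):
    try:
        exec_net = ie.load_network(network=net, device_name=device)
        result = exec_net.infer(inputs={input_name: dummy_input})
        print(device, "->", {k: v.shape for k, v in result.items()})
    except Exception as err:  # the device plugin may be absent on a given machine
        print(device, "not available:", err)
```

This try-several-devices pattern is essentially what Intel’s DevCloud for the Edge, discussed below, exposes as a hosted service.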

Battle Cry

Intel has had a flurry of announcements recently, suggesting that it is taking the battle to dominate the AI chipset world to a new level. At the recent Intel AI Summit, the company announced many products that have been “several years in the making” and followed up with additional announcements soon afterwards at Supercomputing 2019. Then there are also rumors regarding Intel’s potential acquisition of AI startup Habana Labs. The announcements are as follows:

  • Focusing on the inference market, Intel announced that the next-generation Xeon (codenamed Cooper Lake) will support bfloat16. The bfloat16 format is popular in the AI world because it retains float32’s 8-bit exponent, giving it a much wider dynamic range than the half-precision (FP16) floating-point format used previously (see the sketch following this list). Intel also announced Deep Learning Boost, a set of instruction-level enhancements in the new microarchitecture that improve convolution performance by combining three instructions into one.
  • Nervana chipsets are finally hitting the market, and Intel announced early MLPerf benchmark results for its NNP-I inference chipset. The benchmarks were generated using early hardware and an alpha software stack, and Intel expressed optimism that the numbers will improve as the product matures.
  • Intel also announced the availability of the Nervana NNP-T, its training chipset, along with an early customer engagement with Baidu. It demonstrated a 32-card cross-chassis system in a 1U form factor. Intel claimed that the architecture can scale via PCI Express connectivity up to a theoretical limit of 1,024 cards, and the company states it has achieved up to 95% scaling with ResNet-50 and BERT, as measured on 32 cards.
  • Intel’s next-generation low-power AI chipset from Movidius, codenamed Keem Bay, is expected to be released in the first half of 2020. Although no concrete performance or power numbers were provided, Intel announced that it will offer more than 4x the raw inference throughput of NVIDIA’s TX2, putting it at 4 TOPS, while consuming one-third less power for the same performance. It is a low-power part with a size of 72 square millimeters.
  • Intel has also announced the DevCloud for the Edge. The cloud comes pre-loaded with software and with hardware spanning Intel CPUs, GPUs, field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs; Movidius). Using the cloud, developers can run their algorithms on different devices and decide which is the best fit before committing to hardware. The cloud also offers tutorials and code samples to help developers get started.
  • At Supercomputing 2019, Intel announced its long-awaited Ponte Vecchio GPU. These new chipsets will allow Intel to create its own discrete GPU accelerator cards for the enterprise and consumer markets and will put it in head-to-head competition with NVIDIA.
  • Finally, there are rumors that Intel is in discussions with Habana to acquire its product portfolio. Habana is the only startup in the AI chipset world to have demonstrated products for both inference and training, and its products have been qualified by Facebook.
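
On the bfloat16 point above, the following minimal NumPy sketch (my own illustration, not Intel code) shows why the format keeps float32’s dynamic range while FP16 does not: bfloat16 is simply the upper 16 bits of an IEEE 754 float32 (1 sign bit, 8 exponent bits, 7 mantissa bits).

```python
# Minimal sketch: emulating bfloat16 by truncating the low 16 bits of float32.
# NumPy has no native bfloat16 type, so this approximation is for illustration only.
import numpy as np

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values toward zero to a bfloat16-representable value."""
    bits = x.astype(np.float32).view(np.uint32)
    truncated = bits & np.uint32(0xFFFF0000)  # keep sign, exponent, top 7 mantissa bits
    return truncated.view(np.float32)

values = np.array([3.14159265, 1e30, 1e-30, 131008.0], dtype=np.float32)
print("float32 :", values)
print("bfloat16:", to_bfloat16(values))        # coarser, but same range as float32
print("float16 :", values.astype(np.float16))  # overflows to inf / underflows to 0
```

Because the exponent width matches float32, conversion between the two formats is a cheap truncation or extension, which is part of why hardware vendors favor bfloat16 for deep learning workloads.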


Intel’s AI Portfolio (Source: Intel)

Battle for Supremacy: Time Will Tell

Intel has already acquired several companies, including Mobileye, Movidius, and Nervana. It also has a neuromorphic chipset called Loihi, which demonstrates that the company is keeping the long-term play in mind. With the rumored acquisition of Habana, Intel has signaled its willingness to go all-in to battle for supremacy in the AI chipset market.

At the AI Summit, Intel announced that it generated $3.5 billion in AI chipset revenue in 2019, with the bulk of that revenue coming from AI inference workloads. That still leaves a large portion of the market for the company’s newer chipsets to address. Intel is ensuring that all the bases are covered via oneAPI, a single programming model that spans its different hardware architectures. Only time will tell how this plays out.

Anand Joshi
Principal Analyst, Tractica
