Artificial intelligence chip designs have grown in importance at the annual Hot Chips conference in recent years. This year, however, AI and data center chips dominated, with many vendors unveiling next-generation neural network accelerators designed specifically for large-scale cloud data center deployments.
NVIDIA tipped its research into multi-chip packaging by describing a prototype data center inferencing accelerator. Intel revealed its much-anticipated Nervana neural network architecture for data center training and inferencing.
What does this mean?/Why should we care?
The public cloud IaaS market today is powered mostly by very large monolithic chip designs – Intel’s Xeon processors and NVIDIA’s V100 GPUs.
Insights/comments on the subject from Liftr:
Our Liftr Cloud Components Tracker reports that Intel Xeon processors account for 89% of unaccelerated instance types deployed worldwide by the top four public clouds (AWS, Google Cloud, Microsoft Azure, and Alibaba Cloud).
However, NVIDIA now powers over 97% of deployed instance types with dedicated accelerators at those same four clouds worldwide.
The race toward multi-chip accelerator packaging hasn’t started in earnest yet. You’ll have a front-row seat with the Liftr Cloud Components Tracker.
Check out the Liftr Cloud Components Tracker: https://liftrinsights.com/cloud-compo…
For more cloud insights check out: https://liftrcloud.com/