One-fifth of all instances deployed on GCP in Q4 2024 ran on Google's own Axion CPU
Like Microsoft, Google took a wait-and-see attitude toward building its own Arm CPUs. And just as Microsoft's Cobalt did, Axion launched big, capturing a 21.2% share of all instances deployed on GCP in Q4 of 2024. In many ways, Axion has characteristics similar to Microsoft's Cobalt and Amazon's Graviton CPU offerings, which makes sense, as all three are derived from Arm Neoverse cores.
CPU Deployments by CPU Vendor by Quarter at GCP

Google is no stranger to custom chips
TPUs are a lower cost, lower energy alternative to GPUs for AI training and inference.
While Google was late in developing an Arm-based processor, it has had success with other custom chips. Google's first foray was the deployment of Tensor Processing Units in 2015. TPUs are a lower cost, lower energy alternative to GPUs for AI training and inference. While not as common as NVIDIA GPUs, TPUs represent 9.0% of the accelerated workloads on GCP, though that share is down from roughly double that level in 2020.
TPU Portion of Accelerated Instances at GCP

Another of Google's custom chip efforts, a microcontroller architecture called Titanium, is also a key contributor to Axion's price-performance advantage, enhancing scale-out networking performance.
Price performance is the primary customer value
Of the top 10 workloads at GCP, 7 out of the 10 are represented by Arm and 5 of those 7 are represented by Axion.
Unlike its earlier custom chip efforts, which were specialized, Axion is the first chip Google has built for general purpose use. GCP is targeting Axion-based instances at Web Servers, Line of Business Applications, and Small to Medium Databases. Liftr tracks 36 categories of workloads across the top clouds, and these three tend to sit in the top 10 in terms of customer usage. Of the top 10 workloads at GCP, 7 are represented by Arm, and 5 of those 7 are represented by Axion.
Top 10 GCP Workloads And Representation by Ampere or Axion

Given the high customer demand for these workloads, Arm has already become a major competitive lever for other cloud providers such as AWS. Like them, GCP offers its Arm instances at a lower cost than comparable Intel-based instances.

Specifically, Axion-based instances are 9.1% cheaper than the latest generation of Intel general purpose instances and 5.6% cheaper than the previous generation of Intel instances. This is in line with the pricing differences Liftr Insights has observed historically across all major cloud providers.
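For readers who want to reproduce that kind of comparison, the sketch below shows the arithmetic on hypothetical, normalized hourly prices. The rates used here are illustrative placeholders chosen to land near the percentages quoted above, not actual GCP list prices; substitute real on-demand rates for the instance shapes you care about.

```python
# Sketch of the price comparison arithmetic. The hourly rates are hypothetical,
# normalized placeholders (latest-gen Intel = 1.000), not actual GCP list prices.

def percent_cheaper(arm_price: float, x86_price: float) -> float:
    """How much cheaper the Arm instance is, as a percentage of the x86 price."""
    return (x86_price - arm_price) / x86_price * 100

axion_hourly = 0.909         # hypothetical Axion-based instance
intel_latest_hourly = 1.000  # hypothetical latest-gen Intel general purpose instance
intel_prev_hourly = 0.963    # hypothetical previous-gen Intel instance

print(f"vs latest Intel:   {percent_cheaper(axion_hourly, intel_latest_hourly):.1f}% cheaper")
print(f"vs previous Intel: {percent_cheaper(axion_hourly, intel_prev_hourly):.1f}% cheaper")
```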
Sustainability is a close second
Arm-based instances can be up to 60% more energy efficient than other CPUs deployed in its cloud.
Like the other cloud providers, Google highlights that Arm-based instances can be up to 60% more energy efficient than instances based on other CPUs deployed in its cloud. Google also claims that its data centers are 1.5 times more efficient than the average data center, which suggests that the effective energy advantage over workloads running outside GCP may be even larger.
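A back-of-the-envelope way to see how those two claims stack is sketched below. Treating them as independent multipliers is an assumption on our part, and the inputs are the quoted marketing figures rather than measured values.

```python
# Rough compounding of the two efficiency claims above, under the simplifying
# assumption that they multiply. Inputs are quoted marketing figures, not measurements.

instance_gain = 1.60   # "up to 60% more energy efficient" -> 1.6x work per unit of energy
facility_gain = 1.50   # "1.5 times more efficient than the average data center"

combined_gain = instance_gain * facility_gain  # ~2.4x work per unit of energy
energy_per_unit_work = 1 / combined_gain       # ~42% of the baseline energy

print(f"Combined efficiency gain: {combined_gain:.1f}x")
print(f"Energy per unit of work:  {energy_per_unit_work:.0%} of the baseline")
```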
Better late than never
While Google and Microsoft developed their Arm chips later than AWS, both gained Arm experience by deploying Ampere CPUs, and both had already built custom silicon for other purposes. These experiences, combined with customer demand for lower-priced general purpose cloud instances, facilitated the development of custom CPUs tailored to their own cloud data centers and infrastructure.
It is interesting to note that while Microsoft Azure has not deployed any new Ampere instances since its Cobalt launch, Liftr Insights has observed additional Ampere instances, albeit older generations, deployed at GCP after the Axion launch. It will be worth watching whether that continues at GCP.
In our final article of this series about Arm, we will look into how NVIDIA leverages Arm for the CPUs in its own servers and how they are deployed in the cloud.