Data center networks are the glue that connects all of the deployed servers in a cloud.
Network Speeds and Capabilities
However, unlike most other instance type specifications, network speed and capabilities are, for the most part, not exposed to cloud customers as configuration options.
Network speeds increase at a glacial pace compared with processor core counts, accelerator capabilities, and other configurable specs. As a result, each cloud tends to standardize on one speed of network interface card, or NIC, for most of the instance types it deploys in a given year.
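For readers who want to check a specific instance type, the advertised network performance can be read programmatically. The sketch below uses AWS's EC2 DescribeInstanceTypes API via the boto3 Python library; the instance type names and region are only examples, and other clouds expose similar metadata through their own APIs.

import boto3

# Look up the advertised network performance for a few example instance types.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.describe_instance_types(
    InstanceTypes=["m5.large", "c5n.18xlarge"]  # example instance types only
)
for itype in response["InstanceTypes"]:
    name = itype["InstanceType"]
    speed = itype["NetworkInfo"]["NetworkPerformance"]  # e.g. "Up to 10 Gigabit"
    print(f"{name}: {speed}")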
SmartNICs
The slow pace of physical network innovation and the nature of network standards have led some clouds to design SmartNICs. SmartNICs push some of the intelligence found in smart routers and switches down into the NIC attached to each server.
Alibaba Cloud’s X-Dragon, AWS’s Nitro and Azure’s Catapult are examples of clouds designing their own in-house SmartNICs.
InfiniBand for High-Performance and Supercomputing Clusters
Decades ago, most data centers standardized on Ethernet networking, though high-performance and supercomputing clusters may use a faster networking system called InfiniBand. AWS currently offers nine InfiniBand-enabled metal instance types and sizes through its Elastic Fabric Adapter (EFA) feature.
But because either an Ethernet or an InfiniBand NIC is physically attached to each server, InfiniBand-enabled instances must be ordered as separate instance types; the NIC is not a configuration option.
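As a quick way to see which AWS instance types fall into this separately orderable, EFA-enabled group, the same DescribeInstanceTypes API accepts an efa-supported filter. This is only a sketch; it assumes the boto3 client and region shown earlier and default AWS credentials.

import boto3

# List the EC2 instance types that advertise Elastic Fabric Adapter (EFA) support.
ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "network-info.efa-supported", "Values": ["true"]}]
)
for page in pages:
    for itype in page["InstanceTypes"]:
        print(itype["InstanceType"], "-", itype["NetworkInfo"]["NetworkPerformance"])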
Network speeds are specified in billions of bits transmitted per second, or gigabits per second, abbreviated Gbps. Today, most state-of-the-art cloud data centers use 10 Gbps Ethernet cabling inside a rack. (InfiniBand runs at 100 Gbps per cable and higher.)
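To put those numbers in perspective, here is a back-of-the-envelope calculation of how long a fixed payload takes to move at different link speeds. The 1 TB payload is an arbitrary example, and the figures ignore protocol overhead.

# Rough transfer-time comparison: time = payload_bits / link_rate.
payload_terabytes = 1.0                      # example payload size (assumption)
payload_bits = payload_terabytes * 1e12 * 8  # terabytes -> bits
for gbps in (10, 25, 100):                   # common Ethernet and InfiniBand rates
    seconds = payload_bits / (gbps * 1e9)
    print(f"{gbps:>3} Gbps: about {seconds / 60:.1f} minutes")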
Liftr Insights gathers network configuration specs for many instance types at the clouds we track in our Liftr Cloud Components Tracker.
Liftr Cloud Components Tracker: http://bit.ly/2QceXlT
Liftr Cloud Regions Map: http://bit.ly/2LGB5PV