The 100Gb Gorilla: New Connection Speeds Hit Data Centers


March 1, 2023

The data center industry is constantly evolving, keeping pace with and sometimes even outstripping Moore’s Law, the famous prediction that computing capability will double roughly every two years. One of the biggest challenges in big data is speed: the faster the connection, the better the service. Increased demand and new technology are driving data centers to adopt 40 Gbps and 100 Gbps Ethernet connections for their internal infrastructure. Green House Data aims to include 100 Gbps cabling in a new Cheyenne expansion, opening in the next 6 to 12 months. How will this new speed standard impact the data business?

The general trend in data centers is towards virtualization, in which multiple virtual servers run on a single piece of hardware. This helps maximize efficiency as users demand streaming video, mobile computing, and other data-intensive services. As more of the world comes online and more businesses turn to the cloud for their infrastructure, data demand will continue to skyrocket.

Virtual servers save computing resources, but not network resources. As the number of virtual servers grows, faster networks are needed to keep up with demand. While some of this increased load can be mitigated by cleaning up inefficient routing paths or adjusting drivers, the amount of data will only continue to increase, necessitating new hardware and faster connections.
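A quick back-of-envelope sketch makes the point. The consolidation ratio and per-VM traffic figures below are hypothetical, chosen only for illustration:

```python
# Back-of-envelope sketch with hypothetical numbers: consolidating many
# virtual servers onto one physical host concentrates their traffic on that
# host's network uplink.

VMS_PER_HOST = 40         # assumed consolidation ratio
PEAK_MBPS_PER_VM = 300    # assumed peak traffic per virtual server

peak_gbps = VMS_PER_HOST * PEAK_MBPS_PER_VM / 1000
print(f"Peak traffic through one host: {peak_gbps:.1f} Gbps")  # 12.0 Gbps
```

Under these assumptions, a single host's peak traffic already exceeds a 10 Gbps uplink, even though the CPUs and memory are comfortably shared.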

Currently, 10 Gbps is the fastest Ethernet connection in wide use. For perspective, most homes and businesses connect to Ethernet over Category 5e twisted pair cable, which can carry up to 1 Gbps.
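To put those numbers in perspective, here is a minimal Python sketch (ignoring protocol overhead and congestion) comparing how long a 1 TB transfer would take at each of these link speeds:

```python
# Back-of-envelope comparison (illustrative only): time to move 1 TB at common
# link speeds, ignoring protocol overhead and congestion.

PAYLOAD_BITS = 1 * 10**12 * 8  # 1 terabyte expressed in bits

link_speeds_gbps = {
    "1 Gbps (Cat 5e)": 1,
    "10 GbE": 10,
    "40 GbE": 40,
    "100 GbE": 100,
}

for name, gbps in link_speeds_gbps.items():
    seconds = PAYLOAD_BITS / (gbps * 10**9)
    print(f"{name:>16}: {seconds / 60:6.1f} minutes")
```

At 1 Gbps the transfer takes over two hours; at 100 Gbps it finishes in under a minute and a half.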

Data centers are beginning to adopt IEEE 802.3ba, the new standard for 40 Gbps and 100 Gbps connections, which are 40 and 100 times faster, respectively, than the twisted pair cable mentioned above. These new connections will dramatically raise data center capacity.


Fiber optic lines transfer data by translating bits and bytes into pulses of light, which bounce their way down the cable. In a data center, external connections terminate in racks connected to internal routers, which direct information to the servers. These internal connections carry vast amounts of information over fiber optic lines. The 802.3ba standard allows multiple 10 Gbps channels to be run either in parallel or combined through wavelength division multiplexing (WDM), depending on whether the cable is single-mode or multimode fiber (MMF). In effect, the 10 Gbps capacity is stacked to become 4x or 10x faster. In most cases, MMF cables provide the additional fiber strands needed to reach 40 to 100 Gbps connections. This approach is called multilane distribution (MLD), consisting of parallel links, or lanes.
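A minimal sketch of the multilane arithmetic described above (the lane counts follow the article's 4x and 10x description; actual 802.3ba variants differ in lane rates and counts):

```python
# Minimal sketch of the multilane idea: the aggregate rate is the sum of
# identical parallel lanes. Lane counts follow the article's description
# (4 lanes for 40 GbE, 10 lanes for 100 GbE); real 802.3ba variants vary.

LANE_RATE_GBPS = 10

def aggregate_rate_gbps(lanes: int) -> int:
    """Total link rate from `lanes` parallel 10 Gbps lanes."""
    return lanes * LANE_RATE_GBPS

print(aggregate_rate_gbps(4))   # 40  -> 40 Gbps Ethernet
print(aggregate_rate_gbps(10))  # 100 -> 100 Gbps Ethernet
```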

An MMF cable has a larger core diameter, which allows light to travel down multiple paths at once. It can be used with cheaper electronics and transmission methods, and it is often easier to implement. Single-mode optical fiber (SMF) is designed to carry a single ray of light and is much narrower. Single-mode can be more efficient, because there are fewer opportunities for the signal to degrade from dispersion or other factors. Wavelength division multiplexing, used with single-mode fiber, combines multiple wavelengths onto one fiber. This allows more data to be transferred on a single cable by assigning different colors, or wavelengths, of light to different pieces of information. Specialized equipment (a multiplexer and demultiplexer placed at either end of the cable) joins or splits this mixed-light signal. Older networks can often be upgraded to faster speeds through WDM.
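The following toy sketch illustrates the WDM idea in plain Python, not optics: each data stream gets its own wavelength, all streams share one "fiber," and the demultiplexer recovers them at the far end. The wavelength values and stream contents are made up for illustration.

```python
# Toy illustration of WDM, not an optical model: each data stream is assigned
# its own wavelength, all streams share a single fiber, and a demultiplexer
# separates them again at the far end. Wavelengths and payloads are made up.

def multiplex(streams: dict) -> list:
    """Combine per-wavelength streams into one composite signal."""
    return sorted(streams.items())

def demultiplex(composite: list) -> dict:
    """Split the composite signal back into per-wavelength streams."""
    return dict(composite)

streams = {
    1550.12: b"traffic destined for rack A",
    1550.92: b"traffic destined for rack B",
    1551.72: b"traffic destined for rack C",
}

composite = multiplex(streams)      # one fiber carries all three streams
recovered = demultiplex(composite)  # the far end recovers each stream
assert recovered == streams
```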

Green House Data was recently approved for a managed data center services grant from the Wyoming Business Council, a portion of which is earmarked for 100 Gbps circuits. Luckily, existing infrastructure can be modified for 100 Gbps operation with added cabling and equipment. This added equipment is no small investment (each 40 Gbps port on a switch could cost as much as $2,500), but at least existing SMF or high-speed MMF cables (if rated at OM3 or OM4) can be reused. Additional ribbons and either new 24-fiber or additional stacked 12-fiber connectors may be necessary as well.
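As a rough illustration of why reusing existing fiber matters, here is a short cost sketch. The only figure taken from the article is the roughly $2,500 per-port estimate; the port and switch counts are hypothetical.

```python
# Rough cost sketch: the only figure from the article is the ~$2,500 per-port
# estimate for 40 Gbps switch ports; port and switch counts are hypothetical.

PORT_COST_USD = 2_500
PORTS_PER_SWITCH = 32   # assumed, for illustration only
SWITCHES = 4            # assumed, for illustration only

total = PORT_COST_USD * PORTS_PER_SWITCH * SWITCHES
print(f"Estimated spend on switch ports alone: ${total:,}")  # $320,000
```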

Deployment of 40 and 100 Gbps Ethernet links within data centers has mostly started in small segments where traffic is heaviest, such as rack-to-rack links within the facility. Only a handful of data center providers have 100 Gbps installed today, but with demand increasing ever more rapidly, the migration is inevitable.

Read more about 100 Gbps networking:
http://gigaom.com/2011/02/22/we-will-soon-live-in-a-100-gbps-world/
http://arstechnica.com/information-technology/2013/02/100gbps-and-beyond-what-lies-ahead-in-the-world-of-networking/

Posted By: Joe Kozlowicz