Ten Gig Is Here.
Today Sonic.net installed a 10 gigabit symmetric, unmetered XGS-PON fiber optic link to my home in Redwood City, California. The connection will cost me $40/month. My only gripe is that I can’t get static IPs too! I spent the earlier part of the week wiring my home with 10G-capable switches and getting a 10G copper Thunderbolt 3 interface for my notebook — which means that for the first time in my life I’ve got a direct hardline 10 gigabit connection to the Internet backbone.
What does that mean for Internet experiences? What comes next?
When I was still in college way back in 1999, gigabit copper Ethernet was standardized as 1000BASE-T (802.3ab); it was 10x faster than its predecessor 100BASE-T, used the same wiring configuration (four twisted pairs, e.g. CAT-5 / CAT-5e cabling) and connector type (RJ45), and very quickly became a commodity interface for servers, desktops, and laptops.
Over the following two decades, datacenter fabrics continued to innovate on network connections, moving from 10gbps SFP+ modules through to today’s 800gbps modules (QSFP-DD). This means that modern datacenter network links are literally almost a thousand times faster than the network links experienced at home.
Meanwhile, several efforts to improve consumer LAN links have slowly moved forward, resulting in the 2.5G and 5G 802.3bz standard that emerged from the merger of the MGBASE-T and NGBASE-T alliances, launching in 2016 but with only tepid adoption to date. Much of the motivation was running faster speeds over old CAT-5e cable, but CAT-6/6A is now widely deployed. Another motivation was the ability to run Power over Ethernet (which at the time hadn’t been incorporated into the 10GBASE-T standard), but the passing of 802.3bt in 2018 resolved that issue. So today it’s unclear whether it will make sense to deploy 2.5/5G gear or whether it will be about the same complexity and price point to go directly to 10G. Personally, I’m not sure I understand the draw of 2.5/5G at this point.
Plugging gear into multi-gig links is the next step; Thunderbolt-to-10G adapters are ~$120, and 10G is now baked into high-end devices; for instance, it’s a $100 add-on option on the M1 Mac Mini. We can expect pro devices to start incorporating 10G ports next year.
Stepping back — why has consumer LAN connectivity basically languished for two decades while datacenter connectivity has grown by three orders of magnitude? Two factors have gated consumer adoption of multi-gig links: WAN and WiFi speeds.
The WAN Bottleneck
In the days of residential dialup, local connections were much faster than Internet connections (56kbps dialup vs 100mbps Ethernet in the late 90’s; nearly two thousand times faster). This led to experiences like gaming that only made sense over a local network, and to “LAN parties,” where people would physically bring their desktops together to plug into the same network to play games, exchange music and video, etc. Over the following ~15 years, Internet connections rapidly increased in speed. In the US, gigabit fiber (GPON) deployments started in the mid-2000s and accelerated with the introduction of Google Fiber; gigabit cable (DOCSIS 3.1) followed in the mid-2010s. As Internet connections got faster, people came to rely more on services outside their local network for backup, entertainment, and sharing; the purpose of the local network became mostly to connect devices to the Internet, more than to connect local devices together. Consequently, improving the speed of the local network beyond the speed of the Internet connection wouldn’t yield meaningful benefit. Why get a ten gigabit home network when all you’re ever going to do on it is connect to Internet services over a one gigabit link? Nothing will be any faster for you.
So in order for consumers to be interested in moving their LAN links to >1gbps, they really will need to have access to >1gbps WAN links. It’s only in 2021 that we’re finally starting to see meaningful deployments of >1gbps speeds in the US. While Comcast announced a 2gbps offering in 2015, it only had a handful of deployments until the last year or two — and it was only in the last 12 months or so that they increased the speed to 3gbps. Earlier this year Google Fiber lit up its first 2gbps homes in Kansas City. GCI has begun its 2gbps deployments in Alaska. Altice has announced it will start rolling out 10gbps deployments in 2022 — and Sonic has begun its 10 gig XGS-PON deployments, including at my house.
So multi-gig ISPs are finally here. Hooray!
I decided to plot out how my home connectivity speeds had changed over time since I moved into a shared nerd house in San Jose in 2001 and we splurged to get a $300/mo 3mbps SDSL line. We ended up installing a 19” server rack in a closet and hosting a bunch of interesting services. 20 years later I have a connection 3,300x faster that is 7x cheaper. That’s pretty astonishing.
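For the curious, a quick sketch of the arithmetic behind those figures, using the numbers above (3mbps at $300/mo in 2001, 10gbps at $40/mo in 2021):

```python
# Rough check of the 20-year change in home connectivity,
# using the figures from the post.
speed_2001_mbps = 3        # SDSL in 2001
speed_2021_mbps = 10_000   # XGS-PON in 2021
cost_2001 = 300            # $/month
cost_2021 = 40             # $/month

speedup = speed_2021_mbps / speed_2001_mbps   # ~3,333x
cost_ratio = cost_2001 / cost_2021            # 7.5x cheaper
# Compound annual growth rate of bandwidth over 20 years:
cagr = (speed_2021_mbps / speed_2001_mbps) ** (1 / 20) - 1

print(f"{speedup:,.0f}x faster, {cost_ratio:.1f}x cheaper")
print(f"~{cagr:.0%}/year bandwidth growth")
```

That works out to bandwidth compounding at roughly 50% per year, sustained over two decades.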
The WiFi Bottleneck
Most devices in a home connect to the network over WiFi. In fact, most residential deployments are WiFi-only, with very few if any devices plugged into Ethernet (despite the fact that hardlining your connectivity is a very good idea: lower latency, less jitter, faster speeds, and more airtime for your other devices). WiFi as implemented makes reaching speeds of over 1gbps of “goodput” (actual bits transported) improbable with typical consumer devices. Wi-Fi 6 helps, but IMO the real game-changer is Wi-Fi 6E, which opens up a vast amount of new spectrum in the 6GHz band (confusingly enough, this has nothing to do with the “6” in “Wi-Fi 6”).
Regular Wi-Fi works in two separate spectrum bands: 2.4GHz (a relatively narrow slice of roughly 80MHz of spectrum that’s incredibly crowded, shared with everything from microwaves to Bluetooth) and 5GHz, where there’s a lot more spectrum (~735MHz worth in the US!) but with a few drawbacks. First, most of the 5GHz spectrum in the US is subject to Dynamic Frequency Selection (DFS): the US military has priority over most of the 5GHz WiFi spectrum, meaning that both access points and clients need to be ready to vacate the band very quickly after detecting a military radar. That creates extra burden for both: they must monitor the current channel as well as a “Plan B” channel that would be safe to switch to were radar detected. This is complex and drains batteries, so a lot of implementations avoid or disadvantage DFS-band APs in various ways (e.g. roaming may end up partially broken, discovery of APs in DFS bands can take substantially longer, etc.). So there’s less “easily usable” 5GHz spectrum than one might think; if you’re excited about “wave 2” 802.11ac clients that can use 80MHz channels, there are only three such channels outside the DFS bands in 5GHz. And only one of those can be used at high power.
Now, those of you who have looked at access points know that the marketing lingo on them insists that they are much faster than 1gbps (NetGear is advertising a “10.8Gbps” Nighthawk router), but the marketing speeds are complete nonsense — not just from NetGear but from all the vendors. They add up the theoretical maximum PHY rates of each band to get a nonsensical sum capacity. To make a bad analogy, this is a little like answering “how fast can your car go?” by adding up how fast it can drive in first gear, how fast it can drive in second gear, and so forth: “300 miles an hour!” The assumption is that you as a consumer are too stupid to understand that the number in question is a lie and that you’ll just lean hard on bigger numbers being better. The reality is that with a 2x2 80MHz client you’re very unlikely to break 1gbps in the 5GHz band.
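To make the trick concrete, here’s a sketch of how a “10.8Gbps” box number is typically constructed. The per-band breakdown below is an assumed typical tri-band split, not taken from any particular vendor’s spec sheet:

```python
# The marketing number is just the sum of each radio's theoretical
# maximum PHY rate. The per-band figures are an assumed tri-band
# breakdown (not from any specific spec sheet).
phy_rates_gbps = {
    "2.4GHz (4x4, 40MHz)":  1.2,
    "5GHz (4x4, 160MHz)":   4.8,
    "6GHz (4x4, 160MHz)":   4.8,
}
marketing_number = sum(phy_rates_gbps.values())
print(f"Box says: {marketing_number:.1f}Gbps")

# But a single 2x2 80MHz 802.11ax client tops out at a ~1.2Gbps
# PHY rate, and real goodput is roughly 60-70% of PHY:
client_phy_gbps = 1.201
print(f"Realistic single-client goodput: ~{client_phy_gbps * 0.65:.1f}Gbps")
```

No single client ever sees anything close to the sum; each device talks to exactly one radio at a time.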
The 6GHz band enabled by Wi-Fi 6E brings a whopping additional 1,200MHz of spectrum to the party; this should enable practical use of 160MHz channels (if you really want to geek out you can check out the full MCS table); and it’s only at these extremely wide channels that we can expect a 2x2 client to actually achieve >1gbps of goodput over WiFi. While there were rumors the iPhone 13 was going to include 6E support, it looks like that didn’t happen, so we’ll need to wait for at least the iPhone 14 on that front; we’re about a year or two out from a critical mass of 6E-capable clients.
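The MCS table boils down to a simple formula, and it shows why 160MHz channels matter so much for a 2x2 client. A sketch, using the top 802.11ax rate (MCS 11: 1024-QAM, rate-5/6 coding, 13.6µs symbols with a 0.8µs guard interval) and an assumed ~65% PHY-to-goodput efficiency:

```python
# 802.11ax PHY rate = streams * data_subcarriers * bits_per_symbol
#                     * coding_rate / symbol_duration
# MCS 11 = 1024-QAM (10 bits/symbol) at rate-5/6 coding;
# 13.6us symbol assumes a 0.8us guard interval.
def ax_phy_rate_mbps(streams, data_subcarriers, mcs_bits=10,
                     coding=5 / 6, symbol_us=13.6):
    return streams * data_subcarriers * mcs_bits * coding / symbol_us

# Data subcarriers per channel width (802.11ax numerology):
data_subcarriers = {80: 980, 160: 1960}

for width, sc in data_subcarriers.items():
    phy = ax_phy_rate_mbps(2, sc)
    # goodput is roughly 60-70% of PHY rate in practice (assumption)
    print(f"2x2 {width}MHz: PHY {phy:,.0f}Mbps, goodput ~{phy * 0.65:,.0f}Mbps")
```

A 2x2 80MHz client peaks at a ~1,200Mbps PHY rate, which lands well under a gigabit of goodput; only the 160MHz case (~2,400Mbps PHY) clears a gigabit with room to spare.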
Even with Wi-Fi 6E we’ll only just be nudging above a gigabit for most clients; even “premium” clients like the 2021 MacBook Pro use just two spatial streams (somewhat surprisingly, a regression from the 2017 MacBook Pro, which had three), so even in a year or two it will be pretty unlikely for a typical client connecting to even a high-end WiFi access point in typical conditions to get much over a gigabit.
This means that most of the benefit of the multi-gig deployments is going to come from either a very large number of very busy clients, or from clients that bother to plug in physically to the network, which will require a bit of a mentality shift for folks.
In order to cross-connect the XGS-PON device to my UniFi Dream Machine Pro at 10gbps I needed to plug its 10GBase-T handoff into my SFP+ port, which required buying an additional SFP+-to-10GBase-T adapter module. Sadly, the UDMP doesn’t support load-balanced multi-WAN, so my hilariously overwrought plan to use both AT&T gigabit and Sonic multi-gig fiber together isn’t doable with this hardware setup. So the moment of truth involved unceremoniously unplugging AT&T to cause the UDMP to fail over to my multi-gig WAN2.
Benchmarking with SpeedTest and Fast.com is showing closer to 3–6gbps down and 3–6gbps up depending on the phase of the moon and the weather — it’s not a full 10gbps yet, but it’s quite a chunk faster than 1gbps. At these speeds, a lot of factors end up coming into play that start potentially bottlenecking performance: client hardware and operating system, choice of NICs/switches/cables used to connect, and ISP health.
One surprise was that my WiFi end-to-end benchmarks ended up meaningfully faster; peak WiFi6 speeds from my iPhone 13 Pro Max had previously capped out around 450mbps down & up; now I’m seeing as much as 700mbps both ways! I don’t have a great explanation for why improved backhaul would have made this level of difference in realized WiFi speeds.
We’re on the cusp of seeing multi-gig Ethernet finally deployed in US homes after two decades of plateauing at 1gbps. 10G Ethernet technology is mature and ready for deployment on typical residential CAT-6 cabling. While WiFi will get much faster with 6E, consumers will need to get used to plugging in the devices they want top speeds from, which is a change in behavior from today, where WiFi delivers about the same experienced speed as plugging in. People doing construction on homes should consider pulling CAT-6A cable to support 10G copper runs; the nicer cable only costs a small amount more than lower-quality runs. To fully future-proof, consider installing conduit to each room in the house to make pulling new runs easy as new connection standards are released; it’s not hard to imagine homes in just a few years switching to 25G or even 50G optical networking, which will mean pulling new cables through the house.
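The cable-category advice comes down to reach: 10GBASE-T is specified for only 55 meters over CAT-6 (less in noisy bundles), but the full 100-meter channel over CAT-6A. A tiny sketch of the rule of thumb (the 30m run length is a hypothetical example):

```python
# Max 10GBASE-T run length by cable category, in meters
# (per IEEE 802.3an: CAT-6 is specified to 55m, CAT-6A to 100m).
max_10g_run_m = {"CAT-5e": 0, "CAT-6": 55, "CAT-6A": 100}

run_length_m = 30  # hypothetical in-wall run
ok = [cat for cat, reach in max_10g_run_m.items() if reach >= run_length_m]
print(f"A {run_length_m}m run can carry 10G over: {', '.join(ok)}")
```

Most in-wall residential runs are well under 55m, which is why existing CAT-6 often works; CAT-6A simply removes the length question entirely.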
But “so what?” What changes in terms of people’s experiences when they have 10gbps connectivity from their devices to the Internet? The short answer is: I don’t know yet!
I do know this: at 10gbps connectivity, the Internet can be considered to be on a “data bus” with your local computer. Nearly any device that can plug in physically via USB-C can be virtually plugged into your computer. Storage can be completely virtualized; limitless in size and almost as quickly retrieved as if it was on a hard drive attached to your computer. (Yes, NFS has been around nearly 40 years, but now we’re finally in a position to really treat remote volumes as local without much of a haircut!) As a developer the boundaries between interacting with my processes locally versus those running in the cloud become very blurry; I can burst container instantiations into the cloud and cloud processes can burst data to be processed locally. My device becomes a “peer node” on the Internet instead of having to just participate by having deeply digested information spoon-fed to it.
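The “data bus” framing holds up against the nominal numbers: a 10gbps WAN link sits in the same ballpark as the local buses we already treat as “attached.” A quick comparison sketch (nominal link rates, ignoring protocol overhead):

```python
# A 10gbps WAN link compared to familiar local buses.
# Nominal rates in Gbps, rounded, for ballpark comparison only.
links_gbps = {
    "USB 3.0":          5,
    "SATA III SSD":     6,
    "10GbE / this WAN": 10,
    "USB 3.2 Gen 2x2":  20,
    "Thunderbolt 3":    40,
}
for name, gbps in links_gbps.items():
    # time to move a 100GB volume at the nominal rate
    seconds = 100 * 8 / gbps
    print(f"{name:18s} {gbps:3d}Gbps -> 100GB in {seconds:5.0f}s")
```

Pulling a 100GB volume over the WAN in under a minute and a half is squarely in “plugged-in external drive” territory, which is exactly the blurring of local and remote described above.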