Breaking the Gigabit

Rahul Powar
Mar 14, 2021

What are the options for editing 4K video on a new M1-powered Mac mini in a domestic environment? Do WiFi 6 and a remote NAS provide enough bandwidth, and what are the other options? How fast is 10Gb, and do we really need it? TL;DR at the end.

Home rack
I know it’s messy. In my defence I do software.

I recently upgraded a number of networking components in my home network and realised I was probably close to peak network, at least in a domestic environment.

The key components are a pair of new UniFi 6 LR access points with 4x4 MU-MIMO for my home WiFi, a UniFi 6 XG Switch that supports 10Gb over copper, and a custom Linux PC-based network attached storage (NAS) connected into this stack with an Intel X520-based 10Gb PCIe card, running Cat 7 via an SFP+ adaptor. There is of course a bunch of other UniFi kit (yes, big fan) but for the purposes of this blog post, it is out of the signal chain.

Probably as good as WiFi gets in 2021.

I also acquired a new M1 Mac mini, which is one of the first desktop options from Apple to support WiFi 6, and I happened to want to edit some 4K video I took on my last pre-COVID-19 vacation. I was posed with a fairly simple question: I have too many gigabytes of video to load onto my tiny Mac mini’s drive, so how should I go about accessing these files for editing? Can I load them on my 10Gb-connected NAS? Will it run at ‘full speed’ if I wire in the Mac mini? Or should I just plug in a Thunderbolt 3 hard drive as most people do?

First, some expectations. For the scope of 4K video editing, the OS file system cache can’t help you much; there is too much data if you are scrubbing past many gigabytes of junk video. However, a lot of access is sequential or in large blocks, so we care about the real-world sequential read performance of the underlying storage if we want to set some targets for the network access overlay. Spinning disks tap out at around 200MB/s to 300MB/s, a good SSD at 1000MB/s, a good M.2 NVMe at 3000MB/s, and state-of-the-art Optanes can push a bit more.

All of these are bytes per second, so if we translate that into networking speak, we need roughly 8 times as many bits per second to avoid bottlenecking data rates, before we layer on the other overheads and provisos. It’s apparent that we can’t get full speed on the fastest drives without going past 10Gbps, but that’s not practical anyway, as very few people are stuffing their home NAS with Optanes. And if you are, you don’t need to read this post.
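To put numbers on that conversion, here is a quick sketch using the ballpark drive figures above:

```shell
# Network bandwidth needed to keep up with a drive's sequential
# reads: MB/s * 8 bits / 1000 = Gb/s, before protocol overheads.
for rate in 300 1000 3000; do
  awk -v mbps="$rate" 'BEGIN { printf "%d MB/s needs %.1f Gb/s\n", mbps, mbps * 8 / 1000 }'
done
```

So a good SATA SSD already needs ~8Gb/s, right at the edge of a 10Gb link once overheads are included, and an M.2 NVMe drive is far beyond it.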

First let’s benchmark the disks involved. While we could do something elaborate, we really just want a sense of how fast these disks can read locally.

Let’s just time a copy of a 4K file to /dev/null. We’ll use a sample file, GX010889.MP4, which is 4000MB. On Linux, we can test the sequential read transfer rate of the disk by clearing the system cache and copying the file to null, so let’s run this on the NAS itself.

sudo sh -c "/usr/bin/echo 3 > /proc/sys/vm/drop_caches"

time cp "./2019 Video PP — Monsuar 1 2/GX010889.MP4" /dev/null

On the server, sitting on an 8TB spinning disk, this takes 16.83s ± 0.3s, implying a read rate of 239MB/s. Mostly what we expected from a high-end spinning disk.

On the same server’s boot SSD, this takes 7.72s ± 0.003s, a solid read rate of 519MB/s. This is slow for an SSD, but this NAS has a few old parts, including this boot drive, which is now about 10 years old.

Note: If we let Linux cache the files, these numbers are closer to 8GB/second but let’s assume system cache is not realistically helpful for large video reads. This is why we purge them with /proc/sys/vm/drop_caches before the test.
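If you would rather not drop the system-wide cache, an alternative sketch on Linux is to read with O_DIRECT, which bypasses the page cache for just that one read (filename from the example above):

```shell
# Read the sample file with O_DIRECT (no page cache) in 1MB blocks;
# GNU dd prints the effective transfer rate when it finishes.
dd if=./GX010889.MP4 of=/dev/null bs=1M iflag=direct
```

Note that O_DIRECT needs a filesystem that supports it (most do; tmpfs notably does not).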

Let’s look at the M1 Mac mini itself. Zap the cache on macOS with…

sudo purge

… and time the same file. It’s 1.27s ± 0.07s. No surprise, the internal storage is fast at 3152MB/s, well ahead of the 10-year-old hardware in the NAS.

For reference, this beats the SSD on a MacBook Pro 2018 at 2927MB/s by a bit.

But let’s go back to the goal. No doubt the modern PCIe-attached SSDs are fast, but they are also tiny even when upgraded. In the real world, users will be plugging in a storage device via one of the ports, so what does that look like?

What these files were originally sitting on

This LaCie would be common and is what I carry around.

Under the cool industrial design, it holds 5TB on a 2.5-inch 5400 RPM Seagate spinning disk. Over its Thunderbolt 3 interface, this LaCie-attached drive takes 28.65s ± 0.01s for the file. This is basically a slow spinning disk, so 140MB/s is expected and very close to the advertised sequential data rate.

Now, let’s bring on the network. The M1 Macs are the first in Apple’s range to support 802.11ax, i.e. WiFi 6. A fast bandwidth test can be done with iperf3 by running iperf3 -s on the NAS and iperf3 -c [nas] on the Mac.

WiFi 6 connection at -56dBm signal

First sign of a bottleneck. This M1 is connected to one of the few WiFi 6 access points out there, and it is tapping out at 437Mb/s. Note this is bits (small b), so taking this at face value, the maximum transfer rate from a hard drive funneled through this network would be 55MB/s, probably closer to 50MB/s as jumbo frames aren’t an option for WiFi, dropping the data transfer efficiency. This would be about 1/3 of the Thunderbolt drive’s measured performance. A previous-generation 802.11ac client running the same test at similar signal strength hits 369Mb/s, i.e. an anticipated 42MB/s peak effective data transfer rate.

Let’s take a look at wired. On a physical link, we can tweak the frame size in software. Switching to jumbo frames should lift the useful data rate from a ~94% theoretical maximum to a ~99% theoretical maximum, so let’s ensure all devices between the M1 and the NAS are enabled for 9000 MTU frames. After tweaking adaptor settings, we can verify they work by testing a large ping packet.

First make sure you can send large packets by tweaking a setting on macOS.

sudo sysctl -w net.inet.raw.maxdgram=16384

Then, we can issue a ping with a large payload.

ping -D -s 8184 [nas]

If that goes out without a “ping: sendto: Message too long”, large packets are supported between your host and the server. Testing the built-in wired adaptor on the Mac gives us 1Gb/s performance, as we would expect.
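For completeness, the “tweaking adaptor settings” step itself is one command per host. The interface names here (en0, eth0) are illustrative; use whatever your wired adaptors are actually called, and remember switch ports in between need jumbo frames enabled too:

```shell
# macOS side: set the wired interface to a 9000-byte MTU.
sudo ifconfig en0 mtu 9000

# Linux NAS side: the same, with iproute2.
sudo ip link set dev eth0 mtu 9000
```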

iperf3 via the built-in 1Gb port on an M1 Mac mini

But running the math, this does not seem enough for my disks. Even at a 99% effective data rate with jumbo frames, this is basically a limit of 123MB/s, slightly slower than the LaCie external disk.
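Checking that arithmetic (and the earlier WiFi figure) in one line per link:

```shell
# Effective ceiling in MB/s = link rate (Mb/s) * framing efficiency / 8.
# WiFi can't use jumbo frames, so ~94% at best; wired with 9000 MTU is ~99%.
awk 'BEGIN { printf "WiFi 6 @ 437 Mb/s: %d MB/s\n", int(437 * 0.94 / 8) }'
awk 'BEGIN { printf "1GbE jumbo:        %d MB/s\n", int(1000 * 0.99 / 8) }'
```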

Runs hotter than the Mac Mini. No joke.

If we want to go faster, we need a bit more hardware. The Mac mini does not support 10Gb networking, but Thunderbolt 3 has more than enough bandwidth for it, so let’s introduce a Sabrent 10Gb adaptor. Plugged in and configured for jumbo frames, this is likely as fast as we can get to the data on the NAS.

iperf3 via the Thunderbolt 10Gb adaptor on a M1 Mac mini

Looking at the numbers, we are at the limit of home networking. For reference, this is now 22x the raw speed of WiFi 6 in my setup. So finally, does this translate into full-speed access of the drive? We can simply mount the NAS drives on the Mac via the various connection options we now have, purge the caches on both hosts, and run the copy tests.
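As a sketch of that test loop on the Mac side (the share, user, and hostname are illustrative):

```shell
# Mount the NAS share over SMB.
mkdir -p ~/nas
mount_smbfs //rahul@nas.local/video ~/nas

# Purge caches on the Mac (and drop_caches on the NAS),
# then time the same copy over the network share.
sudo purge
time cp ~/nas/GX010889.MP4 /dev/null
```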

Now with everything in table form:

We can consider the 140MB/s from the external LaCie hard drive a minimum speed requirement

Mostly yes. The Seagate disk is the key benchmark as that is where the data is really going to sit for me and likely anyone using this type of configuration. Full speed, or near full speed access is what we want and only the 10Gb adaptor gets us there on a Mac mini.

The SSD benchmark is a reference and shows us that both WiFi 6 and the built-in gigabit network on the Mac mini are the limiting factor. On the SSD, the 10Gb connection tells a different story: it is noticeably slower across the network share than on the NAS itself and does not reach full potential, even though there should be enough bandwidth. This seems to be entirely down to Samba and the single-core performance of the CPU; smbd was sitting at 95% serving the request. Multi-channel Samba could likely help, but 10Gb switch ports are precious and I don’t really need to share faster than a spinning disk.
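For reference, SMB3 multi-channel is a one-line experiment in smb.conf on the Linux side; it has been flagged experimental in many Samba releases, so treat this as a sketch rather than a recommendation:

```
[global]
    # Let one client open multiple TCP connections so smbd can
    # spread the load across CPU cores (and NICs, if present).
    server multi channel support = yes
```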

What does it all mean?

  1. WiFi, even with the latest access points and clients in 2021, is nowhere near wired gigabit performance in a real deployment. If you are relying on remote work with a NAS storing large files on any kind of drive less than 10 years old, this will be a bottleneck. As an aside, the WiFi 6 data rate limits in my test setup are in the order of consumer internet data rates if you have fibre to the home or some other high-bandwidth internet service. If you want to fully utilize your 1Gb internet, it looks like you will have to run wires.
  2. Wired gigabit with jumbo frames is within shooting distance of an external spinning disk plugged into the Thunderbolt 3 port on an M1 Mac mini. In numbers, this means it is ~20% slower than the slowest locally attached option.
  3. 10Gb with an external Thunderbolt 3 adaptor for the Mac mini gives you practically full performance, remotely, from even the fastest spinning disks. This costs, but is better than plugging slower portable HDDs into the Mac for editing in every way. If your NAS is Linux and using samba, you need a fast CPU to keep up if you plan on using SSDs in your NAS and want close to full performance.
  4. You likely need 40Gb if you want to use the latest drive tech remotely at anywhere near locally attached speed. However, these drives are rarely available beyond 1TB, and most motherboards don’t have support for more than a couple of M.2s.
  5. Comparing the benchmarks for these components, which span a 10-year window, we might be reaching local network bandwidth and data transfer limits with current approaches. Wireless will have to get about 20x faster than it is today to compete with copper wires. So apart from wired 10Gb becoming more commonplace over the next 5 to 10 years, it is likely that these data rates remain fairly flat across wireless, wired, spinning disk, SSD and M.2 technologies. If you want speed, get your house wired up with Cat 7 cables and rest easy. I will update this prediction in 2031.

It seems you, my only reader, made it to the end. If you are curious about what I was editing on my NAS, take a look at one of the most beautiful underwater habitats in the world, Raja Ampat.

Let’s break up some boring tech talk with a bucket list destination

While Raja Ampat is a rare protected reserve, across the world our seas are dying. We need to raise awareness of what we have and how easily we can lose it all. If you want to support these unique habitats, make a donation to https://oceanconservancy.org
