In my last update post I said I’d go over how to build your own 10 gigabit network at home. I built a point-to-point connection between my desktop and file server (on the cheap), but you can use the same information to build your own network!
First of all, when I built this connection, it had to be affordable! I priced everything out at a bit over $100, including shipping.
The parts list for this project was short and sweet. I knew I needed a network card for each end, and settled on a pair of Mellanox ConnectX-2 PCIe x4 cards. They’re a generation (or two) old, but they’ll definitely get the job done.
Next was a pair of SFP+ transceivers. Here you have a couple of options. I went with a set of multimode SFP+ modules, for two reasons: I have experience with fiber, and I wasn’t exactly sure how long the cable run for my link would need to be. Fiber is immune to electrical interference, so it can share a path with the electrical lines in the house, and its maximum distance of 300 meters is more than enough. You can also use DAC (Direct Attach Copper) for simplicity if your link doesn’t have to go far; DAC cables include both SFP+ ends and the media in between. There are even RJ-45 SFP+ modules out there, but they seem to be elusive on the gray market.
In my case, I also needed a keystone jack and a couple of fiber jumpers: one between my desktop and the keystone jack, and another between the keystone jack and the file server. Since I’m going for 10Gb/s on multimode fiber, I selected OM3 jumpers, as OM3 multimode fiber is rated for 10Gb/s lasers.
With all of this in hand… it was time to install everything.
With all of the hardware installed and the fiber run, it was time to see if it actually worked. I fired up the server, and Windows Server 2008R2 found the drivers automatically for me. “Great!” I went in and configured the static IP address I wanted to use for this network. It’s imperative that you use a subnet that isn’t already in use by your devices. In my case, that was 192.168.0.1 and 192.168.0.2 with a 255.255.255.252 (/30) subnet mask, as I’m currently using 192.168.1.0/24 and 192.168.10.0/24 for other networks. No gateway or DNS servers are needed: the machines aren’t going to use this network to get out to the internet, and these connections don’t need to reach any other network.
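If you want to double-check your own subnet choice before typing it into Windows, Python’s standard `ipaddress` module can do it. A quick sketch using the addresses from my setup (a /30 has exactly two usable hosts, which is all a point-to-point link needs):

```python
import ipaddress

# The point-to-point link from above: a /30 (255.255.255.252) carved out
# of 192.168.0.0, which leaves exactly two usable host addresses.
link = ipaddress.ip_network("192.168.0.0/30")
print(str(link.netmask))                     # 255.255.255.252
print([str(h) for h in link.hosts()])        # ['192.168.0.1', '192.168.0.2']

# Sanity-check that the new subnet doesn't overlap the networks
# already in use on the LAN.
for existing in ("192.168.1.0/24", "192.168.10.0/24"):
    assert not link.overlaps(ipaddress.ip_network(existing))
print("no overlap with existing subnets")
```

Swap in whatever /30 fits your network; the overlap check is the part that saves you a head-scratching afternoon.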
Next I ran upstairs and kicked on my desktop. Windows 7 followed suit with my server and automatically downloaded the driver for the card (after a later fresh-install upgrade to Windows 10, it also found the correct driver). “Even better!” I went ahead and did the static IP configuration for the desktop and saved it.
Now to see if it would actually link up… I hooked the short fiber jumper from my desktop to my keystone jack and held my breath. Several things can go wrong at this point. The first is incompatible SFPs for the cards; it’s fairly rare, but it does happen. The next potential issue was that I thought I might have damaged my longer jumper pulling it through the wall, as it felt like it got caught on something on the way through. There’s also the chance that you have transmit/receive backwards, in which case you just roll one side of one jumper to “frog” the connection.
There are a few changes that need to be made in the advanced settings of the network connection to make it run as well as possible. First is to enable REALLY BIG jumbo packets; the maximum value you can set here is 9614, and that’s exactly what I set mine to. The other values to change are Receive Buffers and Send Buffers (4096 is the maximum for both, and that’s what mine are set to).
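The reason jumbo packets matter at 10Gb/s is packet rate: bigger frames mean far fewer packets per second for the NIC and CPU to chew through. A back-of-the-envelope sketch (Ethernet preamble and inter-frame gap are ignored for simplicity, so these are rough numbers):

```python
# Why jumbo packets help at 10 Gb/s: fewer, larger frames means a much
# lower packet rate for the NIC and CPU. Framing overhead (preamble,
# inter-frame gap) is ignored here for simplicity.
LINE_RATE_BPS = 10 * 10**9  # 10 Gb/s

def frames_per_second(frame_bytes: int) -> int:
    """Frames per second needed to saturate the link at a given frame size."""
    return LINE_RATE_BPS // (frame_bytes * 8)

standard = frames_per_second(1514)  # standard 1500-byte MTU plus Ethernet headers
jumbo = frames_per_second(9614)     # the driver's maximum "jumbo packet" value

print(f"standard frames/s: {standard:,}")  # ~825,000
print(f"jumbo frames/s:    {jumbo:,}")     # ~130,000
print(f"reduction: {standard / jumbo:.1f}x")
```

Roughly a 6x drop in packets per second, which is why the setting is worth flipping on older cards like these.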
Now let’s make sure everything’s the way it’s supposed to be…
Now… for the real test… let’s push and pull some data!
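If you’d rather measure the raw link first, without disks muddying the water, you can do an iperf-style throughput test with nothing but Python’s standard library. This is a minimal sketch, demonstrated over localhost; in practice you’d run the receiver half on one endpoint (say, 192.168.0.1) and point the sender at it. The port number and payload size are arbitrary choices:

```python
import socket
import threading
import time

# Minimal raw-TCP throughput check between two endpoints, in the spirit
# of iperf. Shown over localhost; replace the address with the far end
# of your link for a real test. Port 5201 is an arbitrary choice.
HOST = "127.0.0.1"
PORT = 5201
TOTAL_BYTES = 64 * 1024 * 1024  # 64 MiB test payload
CHUNK = 64 * 1024

def receiver(ready: threading.Event) -> None:
    """Accept one connection and drain everything the sender pushes."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def sender() -> float:
    """Send TOTAL_BYTES and return the measured rate in MB/s."""
    buf = b"\x00" * CHUNK
    start = time.perf_counter()
    sent = 0
    with socket.create_connection((HOST, PORT)) as s:
        while sent < TOTAL_BYTES:
            s.sendall(buf)
            sent += len(buf)
    elapsed = time.perf_counter() - start
    return sent / elapsed / 1e6

ready = threading.Event()
t = threading.Thread(target=receiver, args=(ready,))
t.start()
ready.wait()
rate = sender()
t.join()
print(f"throughput: {rate:.0f} MB/s")
```

Over localhost this mostly measures your memory bus; over the fiber it tells you whether the link itself, or the storage on either end, is the limiter.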
Reads from the server were disappointing and made me nervous. Sure, it’s more than 1 gig… twice as much, actually… but not the numbers I wanted to see by any means. Then again, the server is running an LSI 9460-8i with Western Digital Red drives in RAID5, so I guess I can’t expect a lot out of them. Oh well… let’s go ahead and push the file back to the server.
Wait?! What?! That’s what I’m talking about! Over 4Gb/s writing to the server from a single SSD!
Guess I better get a hold of LSI and figure out what my read bottleneck is!
So there it is: an easy-to-build 10Gb/s network. Sure, I don’t have the hardware to push a full 10Gb/s across it… but that doesn’t mean I won’t in the near future, with the progression of M.2 and PCIe SSD storage.
EDIT 3/14/16:
I have figured out the bottleneck of the connection. When reading from the server, my SSD’s write speed is around 250-300MB/s, so the bottleneck is not the RAID array but the desktop drive itself. When writing to a new RAID-0 that I’ve put in place in my desktop for storage, reads from the server push upwards of 300-350MB/s.
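The numbers line up once you convert units: disk speeds are quoted in megabytes per second, link speeds in gigabits per second. A quick sanity check of the figures above:

```python
# Disk speeds are in MB/s (bytes); link speeds are in Gb/s (bits).
# Multiply by 8 bits/byte and divide by 1000 MB/GB to convert.
def mb_s_to_gb_s(mb_per_s: float) -> float:
    return mb_per_s * 8 / 1000

# A single SSD writing at 250-300 MB/s caps reads-from-server at ~2-2.4 Gb/s...
print(mb_s_to_gb_s(250), mb_s_to_gb_s(300))  # 2.0 2.4
# ...while the desktop RAID-0 at 300-350 MB/s pulls ~2.4-2.8 Gb/s.
print(mb_s_to_gb_s(350))                     # 2.8
# And the >4 Gb/s write test implies the server array absorbed over 500 MB/s.
print(4 * 1000 / 8)                          # 500.0
```

In other words, every result in this post was storage-limited, not link-limited, which is exactly what you want from the network side.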