Adventures in Homelabbing

Or how I managed to find yet another way to spend my toy money.

So I’ve spent more than a year now building up a computer lab’s worth of hardware, and I’m only just getting around to writing a blog post about it.

Since starting, I’ve acquired:

  • Cisco 3750G-24PS
  • Cisco 3750G-24WS-S50
  • Cisco 3750E-24TD
  • Cisco 3750E-48TD
  • Two Cisco 1142 Wireless access points
  • HP DL380 G7 with dual X5660 and 144 GB of RAM
  • Dell R710 with dual L5640 and 80 GB of RAM
  • Dell R510 with dual X5670 and 64 GB of RAM
  • Dell R510 with dual L5640 and 64 GB of RAM
  • Two Raspberry Pi model 3s
  • Odroid C2
  • Odroid HC2
  • 7x WD Blue 4TB drives (because one failed)
  • 4 Mellanox ConnectX-2 10gig network cards
  • Enough SR optic transceivers and fiber to connect the 4 servers above to the 3750E switches via 10gig links
  • Intel NUC6i7KYK Skull Canyon i7 NUC with a 512 GB SSD and 16 GB of RAM
  • What feels like a couple of miles worth of bulk ethernet cabling as well as enough keystone jacks to be annoying.
  • Several pieces of power supply equipment: a 12V/60A power supply, a 5V/60A power supply, a 12-way automotive/marine fuse block, 2 C14 jacks with fuses, another mile or two of wire, and several misc connectors.
  • A WD 8TB MyBook that I shucked for the white label drive inside to add to my storage pool.

All of that and I’m still not done. I’ve got some fuses on the way so I can properly use the 12V fuse block with the Odroid HC2 I have, as well as the additional HC2s I plan to order.

These projects have already gotten me well over 30 TB of raw disk space, and that number is going to grow quickly once more parts and money start arriving.

One thing I’ve settled on along the way is a relatively easy way to give all the servers simultaneous access to the storage, in a way that scales well: distributed filesystems like Ceph and GlusterFS. With those, I’ll be able to add storage in small blocks and still have unified access across all nodes.
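To give a feel for the GlusterFS side of that plan, here’s a rough sketch of pooling one disk from each of a few nodes into a single volume. The hostnames (hc2-01 through hc2-03), the volume name, and the brick paths are all made up for illustration; this is the general shape of the commands, not my actual setup.

```shell
# Hypothetical cluster: three HC2 nodes, each with its drive mounted at
# /data/brick1. Run these from one node that can reach the others.

# Introduce the peers to each other.
gluster peer probe hc2-02
gluster peer probe hc2-03

# Create a distributed volume with one brick per node, then start it.
gluster volume create labpool \
    hc2-01:/data/brick1/labpool \
    hc2-02:/data/brick1/labpool \
    hc2-03:/data/brick1/labpool
gluster volume start labpool

# Any machine with the gluster client can now mount the unified pool.
mount -t glusterfs hc2-01:/labpool /mnt/labpool
```

Growing the pool later is just another `gluster volume add-brick` plus a rebalance, which is exactly the add-storage-in-small-blocks property I’m after.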

With where I am right now, I’ve got the port count to add roughly 50 more gigabit nodes before I have to add more switching to the network. Assuming I only go with 8 TB drives in Odroid HC2s, that works out to over 400 TB raw worth of storage. I may get more storage per port if I use larger servers instead.
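The back-of-the-envelope math behind that number, using the figures from above (about 50 spare gigabit ports, one 8 TB drive per HC2, and the 30-odd TB already in place):

```shell
free_ports=50    # spare gigabit ports before more switching is needed
tb_per_node=8    # one 8 TB drive per Odroid HC2
existing_tb=30   # roughly what's already in the pool

new_tb=$((free_ports * tb_per_node))
echo "New HC2 capacity: ${new_tb} TB raw"
echo "Total: over $((new_tb + existing_tb)) TB raw"
```

That prints 400 TB of new capacity, or over 430 TB raw once the existing pool is counted, all without touching the servers.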

Front view of the 4 servers
The port end of the switches: 24 ports of PoE-capable gigabit, 2 SFP gigabit, 4 X2 10 gigabit, and 72 ports of non-PoE gigabit, all Layer 3 routing capable.
The Skull Canyon nuc and the first of the Odroid HC2s. Some of my VMs are hosted here instead of on the servers below.
The 12V power supply that is going to be powering the first 36 Odroid HC2s I’m going to be dealing with.
The two Pis and the Odroid C2 that are also a part of this little mess.