Monthly Archives: September 2011

My VMware vSphere Home Lab

The making of a glorious VMware Home Lab comes with so many variables and decisions that at times it felt like I was researching a million-dollar purchase.  Copious amounts of research, emailing, tweeting, and chatting went into finalizing my lab, just in time for VMware vSphere 5.0!

Do I want to build this home lab out of data-center-style equipment?  Certainly I don’t want decommissioned servers, because the hardware and technology would be outdated and not really comparable to our current environment.  Even if I could afford new data-center-class gear, its drawbacks are size and noise.  Scratch that. (Note to Cisco: if you want to send me a UCS Blade Chassis w/ 4 blades in it, I will gladly install the power circuits, as well as suffer through the ensuing divorce…)

Giving up on building a data center in my house, we’re going traditional home lab… desktop-size computers with low power and low noise requirements.   Low power, by its nature, means low heat, which is essential in Texas, where we suffered through countless 100+ degree days in a row this summer alone.  It’s hard enough to keep some parts of the house below 80; let’s not add a data center’s worth of heat load to that.  Keeping the machines ‘small’ and ‘attractive’ was also a concern, because right now they sit on a buffet counter in the formal dining room.

I hadn’t bothered to build my own computer in years, so my first choice was to go for a pre-built machine.  At the office our manufacturer of choice is HP, so I took a deep dive on their office-class machines to see what I could find that supported 32GB of memory.  Unfortunately, most of the machines that supported 32GB were expensive, large, noisy, and power hungry; given all of that, I might as well go the data center route.  Scratch HP.

Resigned to the fact that I was going to have to roll my own system, I began bugging vTexan about home labs and what he and his peers had implemented.  He pointed me to his blog as the end-all, be-all source for the perfect home lab, of course…

He also sent me to a few more blogs:
– Jase McCarty – Home Lab Hosts – Just in time for vSphere 5
– Kendrick Coleman – VMware vSphere Home Lab – “The Green Machines”
– Jason Nash – vSphere Home Lab: Part 3 – Compute
– Jeramiah Dooley – PCs and Home Labs and Data Centers, Oh My… (Part 2)
– Chad Sakac – Building a Home VMware Infrastructure Lab

After reading these blogs, it became clear how I could build an elegant home lab that met modern technical requirements while working within all of the limitations I had at home.  I ended up doing a blend of the designs I saw, along with a few ideas that Tommy (vTexan) and I came up with to do it just a tiny bit differently.  Jase McCarty had a great motherboard; it was a must-have.  Jason Nash had a fantastic case; I only tweaked it a bit by buying a newer revision/model.

And, the glorious parts list:
(See the Newegg.Com Wish List “ESX System”)

Part          Detail                             Qty   Cost (each)
Case          LIAN LI PC-V352B Black              2    $129.99
Motherboard   TYAN S5510GM3NR                     2    $189.99
Processor     Intel Xeon E3-1230 3.2GHz Quad      2    $239.99
Memory        Kingston 4GB KVR1333D3E9S/4G        8    $42.99
Boot USB      Lexar Echo ZX 16GB Micro            2    $37.77
Cache SSD     OCZ Agility 3 AGT3-25SAT3-60G       2    $104.99
Power Supply  OCZ ModXStream Pro 500W             2    $64.99
Network       Intel E1G44HTBLK Server I340-T4     2    $265.53
Thermal       Arctic Silver 5 Thermal Compound    1    $11.98

At the time I purchased this lab, the cost was $1082.23 per host.  Obviously there are some places where costs could be saved: the case, processor, USB drive, SSD, and the additional 4-port network card.  My OCD and need to have things be slightly extravagant allowed me to kick it up a notch in a few places.  Regarding the motherboard and processor decisions, which are the most important, please see Jase McCarty’s blog; he did all of the legwork and describes it so well that I see no need to repeat it.

My personal touch to the home lab, after leveraging all of the great ideas from the other expert bloggers, really comes down to the USB boot drive and the additional 4-port gigabit network card from a technical standpoint.  The case was simply aesthetic, but frankly so was the USB boot drive.   When designing the system, I knew that I wanted to boot off of either USB or SSD.  Since the case was all black and sleek-like, I wanted a low-profile USB drive.  In comes the Lexar Echo ZX – a 16GB micro USB drive that I could plug in on the side of the Lian Li case, on the back of the motherboard, OR, as I found out, into the USB port directly on the motherboard!  Yes, I know that I have to open the case to change this out, but I’ll cross that bridge when I come to it – this choice was about form, not function.


The Tyan motherboard already has three 1Gb NICs on it, one of which can also be shared for IPMI – which is great for a home lab.  Actually, now that I have it, I can almost guarantee I will never live without IPMI or some sort of remote management again.  I looked at adding a 2-port NIC, but then my need to really go all out kicked in again.   Thinking of replicating the office environment, I wanted enough NICs to really copy my Cisco UCS Blade setup, which has 6.  With a 4-port Intel Gigabit NIC I could have 6 (or 7, if I wanted to share the IPMI port) – so off I went to get the NIC.  I went back and forth between the Intel E1G44HTBLK and the Intel E1G44ET2BLK when deciding on the 4-port NIC (the latter was $150 more per card).
The relevant differences with regard to VMware and virtualization were that the ET2 supported VMDc and VT-d.  As far as I was concerned, neither of these were features I’d really leverage or have a chance to test – certainly not $300 worth across the two cards – although over time maybe we’ll upgrade.  I’m keeping an eye on this one, but the E1G44HTBLK was the card that suited me best today.

After all of the decisions were made, it was time to order.   I ended up purchasing from a blend of Amazon and Newegg, which of course meant no state sales tax and no shipping (Amazon Prime).  Two days later, the bounty of boxes inside of boxes arrived, and it was time to build.

A few pictures of me transporting my new lab home in the backseat of my rocketship…
Safe and sound on the Granite…

I will be following up shortly with posts regarding the switching, storage, and other things we’ve since done to the home lab.   Storage from Synology and Iomega, Switching from Cisco, and of course installing vSphere 5.0.


VMware VirtualCenter Command Line Options

We have been having some issues with the upstream switches in our environment, and one of the resulting joys we encountered was the enactment of our HA Isolation policy, which was set to Shut Down.   While trying to troubleshoot the switching issue we upgraded the switch stack which took about 12 minutes – and also removed connectivity to the default gateway of our 11 VM hosts.  When trying to test after the upgrade was complete, we tried to ping VirtualCenter as well as a few other hosts inside, and pounded our heads trying to figure out why all routing and switching was broken after a simple IOS upgrade.

After a bit of cussing and more digging, we found that we’d actually been up the whole time, but that every single VM was down (our VirtualCenter is a virt, as is the dependent DB).  No need to panic; we just needed to dig around, figure out where they lived, and turn them on – the rest would be easy.   When you’re in a hurry to fix a problem, right-clicking the vSphere Client icon in the Windows 7 taskbar to launch a new window 11 times isn’t fun or efficient.  Neither is typing in the name of each server (or its IP), and neither is typing in the ridiculously cryptic root password.
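For anyone in the same spot: the per-host hunt can also be done from each host’s console or SSH session with vim-cmd, before the clients are even open.  A minimal sketch – the VM ID 16 below is purely illustrative; run getallvms first to find your own IDs:

```shell
# On each ESX(i) host, list the registered VMs and their numeric IDs,
# then check and restore power state for the ones that are down.
vim-cmd vmsvc/getallvms            # columns include Vmid and Name
vim-cmd vmsvc/power.getstate 16    # e.g. "Powered off"
vim-cmd vmsvc/power.on 16          # power the VM back on
```

Powering on the vCenter VM (and its database VM) first gets the rest of the recovery back into the normal tooling.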

That all led me to thinking about how to make it faster, and searching for VirtualCenter command line options, parameters, switches, or whatever you might want to call them.  It wasn’t exactly easy to find, considering there is also a command line interface for almost every part of, well, everything VMware.

I first found John O’Riordan’s Blog on vSphere Client Command Line Options which solved the problem of creating a nice and neat shortcut for each of the individual host servers when vCenter is down.  Although, I must admit that putting the root password into a .lnk file makes me want to reconsider my stance as a security professional.  I’ll figure out how to solve that later – for now they’re on an encrypted disk…

Now, how do I get into vCenter itself quickly?  This is where my friends over at TechTarget came to the rescue with their article on Configuring Single Sign-On (SSO) to log into VirtualCenter.

For the specific Hosts, the properties of the shortcut are as follows:

“C:\Program Files (x86)\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\VpxClient.exe” -i yes -s <hostname> -u <username> -p <password>

For the VirtualCenter, the properties of the shortcut are as follows:

“C:\Program Files (x86)\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\VpxClient.exe” -passthroughAuth -i yes -s <hostname>
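With eleven hosts, generating the shortcut targets beats typing them.  A quick sketch that emits a launch-all.cmd with one line per host plus an SSO line for vCenter – the host names, lab.local domain, and CHANGEME password are placeholders (only three hosts shown for brevity), and the VpxClient.exe path assumes a default 64-bit Windows install:

```shell
#!/bin/sh
# Emit a batch file of vSphere Client launch lines, one per ESX host,
# plus a passthrough-auth line for vCenter itself.
VPX='"C:\Program Files (x86)\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\VpxClient.exe"'

{
  # Per-host lines embed credentials (-u/-p), the security trade-off
  # mentioned above; CRLF line endings keep Windows cmd.exe happy.
  for n in 01 02 03; do
    printf '%s -i yes -s esx%s.lab.local -u root -p CHANGEME\r\n' "$VPX" "$n"
  done
  # vCenter line uses Windows single sign-on instead of stored credentials.
  printf '%s -passthroughAuth -i yes -s vcenter.lab.local\r\n' "$VPX"
} > launch-all.cmd
```

Double-clicking the resulting launch-all.cmd on the management workstation opens every client at once, which is exactly what you want when vCenter is down and the clock is ticking.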