Desktop Computer Upgrade

I last posted about my computer specs six years ago, when I first built my VMware ESXi whitebox server.  Here’s an update on what has happened to it since:

From the software point of view, all was well for the first two to three years.  I had FreeNAS, Windows 7, and Windows 8 virtual machines running on it, plus some lesser-used Ubuntu virtual machines for playing around.  With the IOMMU capabilities of the motherboard, I was even able to pass the GPU through to the Windows virtual machines, which let me use the box as a desktop and even play some games on it.

But there were some problems with the setup.  Although PCI passthrough via IOMMU allowed my Windows virtual machines to access the hardware, the reliability wasn’t perfect.  The main annoyance was that restarting a virtual machine would put the GPU in an unusable state, requiring a full restart of the physical machine.  Beyond that, certain combinations of hardware virtualized together sometimes caused random issues.

The breaking point came in mid-2015, when one of the drives corrupted and I wasn’t able to boot my virtual machines.  Due to a couple of factors, including the lack of disk redundancy, the proprietary nature of VMFS (the filesystem VMware uses on the disk), and the VMDK format of the virtual disks, recovering the data was difficult, if not impossible (I technically still haven’t completed the recovery process).  Luckily I had some backups, so I didn’t lose everything.  Later that year I bought a Synology NAS, which has taken care of my data storage since then, and I started taking backups more seriously, following the 3-2-1 backup strategy.  The usefulness of the Synology over the past few years could be a whole other article!

Anyway. VMware ESXi was a fun experiment when I had the time to fiddle with and troubleshoot it.  During the rebuild I also tried the KVM and Xen hypervisors to see if they handled hardware passthrough to Windows guests any better, but I couldn’t get anything working or stable.  Since the Synology NAS took care of my storage needs, I decided the way to go was simply to rebuild the computer as a Windows desktop.

Over the years I’ve upgraded parts of the hardware, but up until last week the core of the system (CPU, motherboard, memory) had stayed exactly the same.  Here’s the original spec list, with each replacement part listed beneath the hardware it upgraded and the year of purchase in parentheses.

  • AMD Phenom II X6 1055T Thuban 6-Core 2.8GHz Processor @ $122.17
    • AMD FX-8350 8-Core 4.0GHz Processor @ $153.90 (2018)
  • ASRock 990FX EXTREME3 Motherboard (ATX, AM3+, DDR3, SATA3) @ $156.60 (2012)
  • Mushkin Enhanced Blackline Frostbyte PC3-12800 8GB 2x4GB Memory Kit @ $44.99 (2012)
  • Gigabyte Radeon HD 5450 Low Profile Video Card @ $14.99 (2012)
    • ASUS Radeon HD 7790 (2013)
  • Coolermaster Elite 350 Black ATX Case with 500W PSU @ $49.69 (2012)
    • Seasonic Gold 550W PSU @ $112 (2016)
  • Western Digital Caviar Green 2TB WD20EARS
    • Samsung 840 EVO 250GB SSD (2014)
  • Trendnet Gigabit Network Adapter TEG-PCITXR

I was and still am very happy with this build, considering the core of it has lasted me this long.  I think AMD provides a great performance-to-price ratio.  I hope the motherboard lasts just as long with the new processor!

Thanks for reading!


VMware ESXi Scratch Space

If you installed VMware ESXi on a USB stick like I did, the “scratch space” (used for storing logs and debug information) lives on a RAM disk.  This takes up 512MB of memory that could otherwise be provisioned to virtual machines.  In addition, it does not persist across reboots, which explains why I was never able to find any logs after a crash.  I was also seeing random “No space left on device” errors when trying to run the munin monitoring script for ESXi.
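You can see where the scratch space currently lives by looking at the /scratch symlink from the ESXi shell.  A quick check (assuming SSH or the local console is enabled):

    # On a USB-stick install, /scratch typically points at a directory on the RAM disk
    ls -ld /scratch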

The solution is simply to create a folder on one of your datastores and configure ESXi to use it, either through the vSphere Client (steps below) or from the shell (see the sketch after the steps).

  1. Log in to the console or SSH into the host.
  2. Go into one of your datastores under /vmfs/volumes/.
  3. Create a directory for the scratch space.
  4. Log in to the vSphere Client.
  5. In the host view, go to the Configuration tab, then find the Software section in the left menu and click Advanced Settings.
  6. In the Advanced Settings window, find ScratchConfig in the tree on the left.
  7. In the “ScratchConfig.ConfiguredScratchLocation” field, enter the full path to the folder you created in step 3.
    ESXi Advanced Settings Window
  8. Reboot the host.
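If you’d rather skip the vSphere Client, the same setting can be changed from the ESXi shell.  Here is a minimal sketch, assuming a datastore named datastore1 and a host named esxi1 (substitute your own names and path):

    # Create a directory on a persistent datastore for the scratch space
    mkdir /vmfs/volumes/datastore1/.locker-esxi1
    # Point ScratchConfig.ConfiguredScratchLocation at it (same setting as in the client)
    vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi1
    # Reboot the host for the change to take effect
    reboot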

It’s as simple as that!


Updating VMware ESXi from 5.1 to 5.5

In the previous post in my “VMware Adventures” series, I was having problems with the hardware passthrough feature in ESXi 5.1 (read that post if you want a recap of what ESXi and passthrough are).  With the recent release of ESXi 5.5, and favourable comments in the community, I decided to give the upgrade a shot.


USB Controller Passthrough with VMware ESXi 5.1

Early last year I built myself a VMware ESXi whitebox computer.  VMware ESXi is a lightweight operating system that allows multiple virtual computers (referred to as virtual machines, or VMs) to run inside one physical computer (called the host) at the same time.  For example, I usually have three VMs running on my box: a FreeNAS file server, Ubuntu, and Windows 8.

One of the features of ESXi (and other hypervisors) is that you can pass physical devices, such as a video card or USB devices, through to the VMs.  That way, you can interact with a VM running inside the host using a monitor and physical input devices, much like a real computer.  This is great because you can use the host like a regular desktop while running other operating systems on it at the same time.

Simplified virtualized computer diagrams
A simplified diagram showing how hardware can be passed through to virtual machines through the hypervisor.

This setup worked well with ESXi 5.0, as long as you chose hardware that supports device passthrough (see my original post for the hardware I am using).  As far as I know, all updates and patches in the ESXi 5.0 series have working device passthrough.

With ESXi 5.1, however, USB controller passthrough is broken.  On the majority of builds, when you select a USB controller for passthrough, the selection vanishes once you reboot.  Or, even worse, it randomly crashes the hypervisor.

The only ESXi 5.1 patch that seems to allow some sort of USB controller passthrough is build 1021289, corresponding to ESXi510-201303001. You can find this patch on the VMware patch portal, and I have some notes about updating ESXi in my previous blog post.  The catch is that the USB controllers get scheduled to have passthrough disabled on the next reboot.

When you enable passthrough devices in the configuration, they will work after you reboot. However, they will be configured to disable themselves after the subsequent reboot.

This wasn’t really a problem for me because I rarely reboot the host machine.  Just remember, before you shut down or reboot the host, to re-enable the passthrough devices in the DirectPath I/O Configuration screen; they will then work the next time ESXi boots up.
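If you want to double-check which USB controllers the host actually sees (and their PCI addresses) before toggling them in the DirectPath I/O Configuration screen, you can list them from the ESXi shell.  A quick sketch; the exact output format varies between builds:

    # Short listing of USB controllers and their PCI addresses
    lspci | grep -i usb
    # Full details for every PCI device on the host
    esxcli hardware pci list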

So, in short, for reliable device passthrough in VMware ESXi, stick with the 5.0 series.  If you have to upgrade to ESXi 5.1, install the ESXi510-201303001 patch, and remember to re-enable the passthrough devices before rebooting the host.  Keep an eye on this thread on the VMware Community forums, which has been documenting device passthrough issues on ESXi 5.1 for a long time.

Updating VMware ESXi

Back in January I built a VMware ESXi 5 whitebox as my home server.  I updated the hypervisor today and I thought I’d record the process so that I can refer back to it later.  The blog post I found most useful was from VMware Front Experience.  If you’re looking for the detailed procedures, I’d suggest you look at that post.

Upgrading from 5.0 to 5.1

  1. The upgrade file can be found on the VMware download site. For an upgrade from 5.0 to 5.1, the file to download is: VMware-ESXi-5.1.0-799733-depot.zip.
  2. After downloading the file, scp it to the ESXi host, onto one of the datastores (a full example session is sketched after these steps).
  3. SSH into the ESXi host, and run the command:
    esxcli software profile install -d /vmfs/volumes/datastore1/VMware-ESXi-5.1.0-799733-depot.zip -p ESXi-5.1.0-799733-standard
    You can also run esxcli software profile update .... The difference is described in the blog post I referenced above.
  4. When the update completes, reboot the server. The first time you bring the VMs back up, the vSphere Client might ask whether you moved or copied the VMs, since the UUID changed. Select “I moved the VMs”.
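Putting it all together, an upgrade session looks roughly like this.  This is just a sketch: datastore1 and the depot filename are from my setup, esxi-host is a placeholder for your host’s address, and the profile commands usually want the host in maintenance mode with the VMs powered off:

    # From the workstation: copy the offline depot onto a datastore on the host
    scp VMware-ESXi-5.1.0-799733-depot.zip root@esxi-host:/vmfs/volumes/datastore1/

    # On the ESXi host over SSH: enter maintenance mode (power off the VMs first)
    esxcli system maintenanceMode set --enable true
    # Install the new image profile from the depot
    esxcli software profile install -d /vmfs/volumes/datastore1/VMware-ESXi-5.1.0-799733-depot.zip -p ESXi-5.1.0-799733-standard
    # Reboot to boot into the new version
    reboot
    # After the host comes back up, leave maintenance mode:
    #   esxcli system maintenanceMode set --enable false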

Rolling back to ESXi 5.0

ESXi 5.1 wasn’t working too well for me. I was having problems passing through my USB controllers to the VMs. I decided to roll back to 5.0, and luckily VMware makes rolling back easy.

  1. When you reboot the host, press SHIFT+R when the hypervisor first boots up (you’ll see the cue at the bottom right of the screen).
  2. Type ‘y’ to confirm rolling back the hypervisor.
  3. The hypervisor will boot up with the old version.

Patching 5.0

So what I ended up doing was just patching the 5.0 hypervisor to the latest build.

  1. The latest patches can be found on the VMware patch download portal.
  2. After downloading the file, scp it to the ESXi host, onto one of the data stores.
  3. SSH into the ESXi host, and run the command:
    esxcli software vib install -d "/vmfs/volumes/datastore1/patches/ESXi500-201209001.zip"
  4. When the update completes, reboot the host if required (a quick way to verify the new build afterwards is sketched below).
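To confirm the patch actually applied, the version and build number can be checked from the shell after the reboot.  A quick sketch (the build number you see will depend on the patch you installed):

    # Print the ESXi version, build number, and update level
    vmware -vl
    # Roughly the same information via esxcli
    esxcli system version get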

I apologize if this blog post is a little terse; it is mainly a reference for myself. If you want further information, please check out the pages below: