VMware ESXi 5 Whitebox

I have two old computers (a Pentium III and a Celeron from the early 2000s) that I currently use as servers for file storage, backups, and testing.  I thought it was about time to consolidate these servers, boost performance, and set up a flexible test environment for my coding endeavours.

VMware’s free ESXi hypervisor piqued my interest early last year.  It’s comparable to XenServer but apparently has better support for Windows virtual machines.  Being a bare-metal hypervisor, it should give better performance than a typical virtual machine sitting on top of a full-blown operating system.  So I set my eyes on building an inexpensive but powerful ESXi whitebox that would take over the roles of my old computers.

I did a lot of research on ESXi and compatible components across various sites, blogs, and forums.  I learned that ESXi is quite picky about the hardware it will run on, so I definitely wanted to buy components that would work with ESXi 5, aiming to get everything under $500.

This is what I came up with (prices after price matching/rebates):

  • AMD Phenom II X6 1055T Thuban 6-Core 2.8GHz Processor @ $122.17
  • ASRock 990FX EXTREME3 Motherboard (ATX, AM3+, DDR3, SATA3) @ $156.60
  • Mushkin Enhanced Blackline Frostbyte PC3-12800 8GB 2x4GB Memory Kit @ $44.99
  • Gigabyte Radeon HD 5450 Low Profile Video Card @ $14.99
  • Coolermaster Elite 350 Black ATX Case with 500W PSU @ $49.69
  • Western Digital Caviar Green 2TB WD20EARS
  • Trendnet Gigabit Network Adapter TEG-PCITXR

This selection got me well within my $500 budget even after taxes.  The hard disk and network adapter were components I already had.


Here are a few reasons why I chose these specific components.

  • AMD processors seemed to give the most bang for the buck.
  • The ASRock motherboard has IOMMU (for direct passthrough of hardware to the virtual machine), AM3+ support (for upgrading the CPU down the line), and SATA3 (for faster disk performance).
  • The video card was the cheapest I could find. On an ESXi host, there isn’t much to be shown on the screen anyway.

It was my first time doing a computer build (yes! first!) so I wanted to get it right.  In addition to the instructions that came with the motherboard and CPU, I enlisted some help from a few specific YouTube videos just to be sure.

The build was easier than expected.

Then came the installation of ESXi itself.  It was quite uneventful so I won’t bore you with the details, but here are the important bits I found:

  • Grabbed the download from VMWare’s site
  • Loaded the ISO onto a USB stick using UNetbootin (the new computer has no CD drive)
  • Installed ESXi, choosing the same USB stick as the install destination.  (This leaves the entire hard disk free to use as your VM datastore.)
  • As expected, the integrated network adapter wasn’t recognized, so I found one lying around (the Trendnet TEG-PCITXR), stuck it into the machine, and luckily it worked.  I believe ESXi 5 is a lot more lenient with network adapters than its predecessors, which saved me from purchasing an expensive Intel card. Yay!
    Update [Dec 1, 2013]: After my upgrade to ESXi 5.5, I found that the on-board network adapter (BCM57781) was recognized by ESXi.
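As a quick aside to the first step: before loading the ISO onto the stick, it’s worth checking the download against the SHA-1 checksum VMware lists on the download page. A minimal sketch — the filename is a placeholder, and the file below is a stand-in rather than the real ISO:

```shell
# Sketch only: a stand-in file instead of the real installer ISO.
# In practice, compare sha1sum's output for the downloaded ISO
# against the checksum shown on VMware's download page.
printf 'test\n' > VMware-VMvisor-Installer.iso   # placeholder content
sha1sum VMware-VMvisor-Installer.iso
```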

Luckily the hardware worked out.  Now on to installing the individual virtual machines.

Update [Apr 10, 2012]: Here is the vSphere Client summary page for the host.

I also ended up purchasing an Intel network card (Intel Gigabit CT Desktop Adapter @ $27.00 after price match) so that the entire PCI bus would be free to pass through to a virtual machine if I wanted to.  (Legacy PCI devices must be passed through as an entire bus, whereas PCIe devices can be passed through individually.)

DirectPath I/O is working as expected.  I have passed through the display card and the USB3 controller to one of my Windows VMs so that the box can act as a workstation.
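For anyone curious where ESXi keeps its passthrough quirks: per-device reset behaviour can be tuned in /etc/vmware/passthru.map on the host. The entry below is purely illustrative — the device ID and reset method are assumptions, not values I needed for my setup:

```
# /etc/vmware/passthru.map (illustrative entry, not from my host)
# vendor-id  device-id  resetMethod  fptShareable
1002         68f9       d3d0         false
```

Reset methods include flr, d3d0, link, bridge, and default; if a passed-through device misbehaves across VM reboots, changing its reset method here is a common first thing to try.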

Passing through the video card works, although there is a problem with Adobe Flash hanging on videos with a green screen. Searching online, I noticed that others with the same card had the same problem.  The workaround is to stop Flash from using hardware acceleration, but this is not ideal since it increases CPU usage.  I’m still looking for a real solution, and I’m not sure whether the problem is specific to this video card.

Looking forward, I will probably also add more memory to the box, since Windows VMs take up quite a bit.


20 thoughts on “VMware ESXi 5 Whitebox”

  1. I found your setup really interesting.
    I’d like to know if you were able to make everything work: FT, VT-d.
    I want to buy the same setup as yours (2 like this).
    Can you post some pictures of the vSphere client (summary screen of the host)?
    And by the way, if everything is working, yours will be the only low-cost setup I’ve found that can do FT and VT-d.

    Thanks 😉

    1. Hi Benoit,

      I only have 1 box in my setup, so I am not able to try out HA or FT (if I understand them correctly; I’m still an ESXi newbie). However, “VT-d” (or in this case, AMD’s IOMMU) is working. I updated the original post with a follow-up including the screenshot you requested. Passthrough I/O is working great for me.

      Having the build low-cost was one of the main factors I was aiming for. The tradeoff is that some components aren’t “cutting edge”. For example, I think the processor is at the end of its production run as the Bulldozer processors come out (correct me if I’m wrong; I’m no hardware expert either). Bulldozer chips should work with the motherboard I chose though, since they use AM3+, which the ASRock supports.

  2. Just a note regarding the video card:
    To reduce energy consumption and free up a PCIe slot, I purchased a 10-year-old PCI (not PCIe) video card for 5 bucks. Both Win 7 and ESXi work with it. The resolution is suboptimal if you want to do something in Win7, but, as I said, it works.

  3. How’s that ASRock holding up? I see many complaints about them, but I think people are pushing the hardware more than they should.

  4. Nice setup.

    Does it run 64-bit guests easily? If you don’t mind me asking, what guest VMs are you running on this box?

    Would you say this setup is good for personal 24/7 use?

    1. Yep, 64-bit guests are supported.

      The ones I have active 24/7 are:
      FreeNAS (FreeBSD) 64-bit
      Ubuntu 12.04 32-bit
      Windows 7 32-bit

      Other ones I have but don’t actively use are:
      Windows 8 RC 64-bit
      Windows Vista 64-bit
      Windows 2008 Server R2 64-bit
      Windows XP 32-bit
      Windows 2000

      Yep, I’d consider this setup ideal for light personal use as a home server and as a development/test platform. It probably doesn’t boast top performance compared to other builds, but as a home server I think it works just fine.

  5. Based on your post I purchased similar hardware. You mentioned you pass through the 5450 video card. Is that the only video card in the system?

    I have an older Nvidia card in the x16 slot closest to the CPU, then the 5450 in the next slot over. With that arrangement, the older card displays the ESXi local screen. When I had the cards swapped the other way, the ESXi local screen was on the 5450. I was hoping to use the older card for ESXi and pass through the 5450; I was scared of something not working someday and then having the only video card passed through to a VM, leaving me no way to fix things. I haven’t been able to get the 5450 to work passed through in Ubuntu. It recognizes the card and installs the driver OK, but I can’t run the ATI Catalyst software.

    1. Yes, that’s the only video card I have in the system. Right now, the ESXi boot screen shows up on it, then midway through boot (I guess when the passthrough gets initialized) the screen freezes until a VM starts up and takes over the device.

      I haven’t tried passing through the video card to Linux, I’ve only used it on my Windows 7 VM.

  6. This is just the kind of site I was looking for. I’m wondering about adding more NICs, such as a 4-port card, so that I can work with switches and create network segments. Do you recommend any multi-port NIC cards?
    Thank you!

  7. Hey Dennis,

    I’ve worked with ESXi for many years; I prefer ESXi 4.1. Version 5 is not really more stable, but it has better-managed passthrough.
    A hint for you 😉 if you can get the hardware ID of your on-board NIC, you can add it to ESXi. You just need to add the device ID for your vendor and bake a custom OEM.tgz. Then you can install ESXi with the modified vendor mapping, and many storage controllers and NICs that don’t seem to work… work perfectly ;).
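    To sketch the repacking step this hint describes — the PCI IDs, the simple.map line format, and the driver name below are illustrative assumptions for the 4.x-era approach, so check the vm-help.com write-ups for the exact format your version expects:

```shell
# Hedged sketch of baking a custom oem.tgz: map an unsupported NIC's
# PCI vendor:device ID to an existing driver, then repack the archive.
# The IDs and driver name here are hypothetical placeholders.
mkdir -p oem/etc/vmware
echo "14e4:16b1 0000:0000 network tg3" > oem/etc/vmware/simple.map
tar -czf oem.tgz -C oem etc      # copy this oem.tgz onto the install media
tar -tzf oem.tgz                 # sanity check: lists etc/vmware/simple.map
```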

    For more information, have a look at this site:
    http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php

    This is a whitebox list for 4.x, but it sometimes helps for 5 as well.

    For additional NICs, I can really recommend the low-profile “Pro/1000” desktop cards (MT/GT). These cards are based on the E1000 chipset, which VMware recommends, and ESXi really likes them. 😉

    In Germany you can get a low-profile “Pro/1000” for around $10.

    I hope you’ll forgive my bad English. ^^

    1. I forgot my recommended build:

      Motherboard: MSI 1366 X58 Pro-E ~ $75
      Processor: Intel i7 920 @ 2.66 GHz ~ $200
      or (for a large wallet): Intel i7 980 (not X) BX80613I7980 @ 3.33 GHz ~ $500
      RAM: 4GB G.Skill NT Series DDR3-1333 DIMM CL9, 6 sticks (24 GB) ~ $70

      NIC: Intel EXPI9301CTBLK 1-port 10/100/1000 Mbit/s PCIe x1 bulk ~ $20
      Controller (low budget): Promise SATA300 TX4 ~ $50
      Controller (for a large wallet): LSI MegaRAID SAS8344ELP (256MB) ~ $300
      + Battery (for the LSI): ~ $150

  8. hi there,

    I know I’m a little late with my comment, but I have almost the same setup as you 🙂

    Motherboard: Gigabyte GA990XA-UD3 rev 3.0
    Processor: Phenom II X6 1055T
    RAM: 32GB G.Skill Ares PC1600 CL8 (non-ECC 🙁 )
    NICs: 3x Intel Pro 1000 PT Dual Port
    Controller: at the moment, the onboard controller
    Multimedia card: Terratec Cinergy 2400i DT Dual DVB-T (already in passthrough)

    I’m trying to pass through an Nvidia card. My Ubuntu video disk recording VM works without problems.

  9. Hi, I’m interested in building my own lab based on the hardware in the above post.

    Have you tried nesting virtual ESXi or Hyper-V?

    Thanks

  10. hi man, I also have the GA970DS3, and the IOMMU can be activated in the BIOS. At first I wondered why no passthrough was possible in ESXi 5.1, but it’s just a BIOS switch in the advanced options. To solve some mysteries around Citrix XA6.5, HDX, Desktop Experience, and the CUDA usage of HDX, I’m waiting for a GTX 650 or GT 640 (both have 384 CUDA cores, but the GTX has GDDR5 and the GT 640 DDR3).

    The CPU is the X6 1045, which was the cheapest CPU with fully separate cores…. The Vishera 8350 may outperform it, but can’t ever beat the price per real core. The Vishera architecture has 4×2 cores, and each pair shares some resources, while the X6 has 6 real, independent cores. That counts the most for VMs, because you don’t have to care about even or odd vCPU counts. RAM is 32 GB G.Skill Ripjaws Z DDR3-1866 (for an unbelievable 126 euros in Jan 2013), but downclocked to 1600 to have even multipliers everywhere, and the CPU runs at either 2.4 or 2.8 GHz. The onboard Ethernet, a Realtek 8168, is correctly recognized, and I also have two Intel 82574 PCIe x1 cards. All of the SATA ports of the GA970 show up….

    The GA970 is, in my opinion, one of the best and most affordable bases for a whitebox intended for ESXi 5.1. In the older versions the hardware support was pretty restricted, so I had to use a Promise Ultra TX133 for legacy IDE disks (yep, the controller shows up as SCSI in ESX 3.5), another Promise SATA card (also SCSI in ESX 3.5), and a good old Intel PCI network card (nothing else worked….). Some people think 990-based boards are better, but they only have more PCIe x16 slots, maybe up to 5 of them.
