If you installed VMware ESXi on a USB stick like I did, the “scratch space” (used for storing logs and debug information) is stored on a RAM disk. This takes up 512MB of memory that could otherwise be provisioned to virtual machines. In addition, it does not persist across reboots, which explains why I was never able to find any logs after a crash. I was also seeing random “No space left on device” errors when I tried to run the munin monitoring script for ESXi.
The solution is simply to create a folder on a persistent datastore and configure ESXi to use it.
1. Log in to the console or SSH to the host.
2. Go into one of your datastores in /vmfs/volumes/.
3. Create a directory for the scratch space.
4. Log in to the vSphere Client.
5. Select the host, go to the Configuration tab, then find the Software category in the left menu and click Advanced Settings.
6. In the Configuration Parameters window, find ScratchConfig on the left.
7. In the “ScratchConfig.ConfiguredScratchLocation” box, enter the path to the folder you created in step 3.
8. Reboot the host for the new scratch location to take effect.
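If you prefer to stay in the SSH session, the whole change can also be made from the command line with vim-cmd. This is a sketch assuming a datastore named datastore1; the directory name is arbitrary:

```shell
# Create a persistent scratch directory on a datastore
# (datastore1 is an assumed name — substitute your own).
mkdir /vmfs/volumes/datastore1/.locker-scratch

# Point the advanced option at it, same as editing
# ScratchConfig.ConfiguredScratchLocation in the vSphere Client:
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-scratch

# Reboot the host for the new scratch location to take effect.
```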
Earlier last year I built myself a VMware ESXi whitebox computer. VMware ESXi is a lightweight operating system which allows multiple virtual computers (referred to as virtual machines, or VMs) to be run on one physical computer (called the host) at the same time. For example, I usually have three VMs running on my box: a FreeNAS file server, Ubuntu, and Windows 8.
One of the features of ESXi (and other hypervisors) is that you can pass through physical devices such as a video card and USB devices into the VMs. That way, you could interact with one of the VMs running inside the host, with a monitor and physical devices much like a real computer. This is great because you can use the host like a real computer, while running other operating systems on it at the same time.
This setup worked well with ESXi 5.0, as long as you had chosen hardware that supports device passthrough — see my original post for the hardware I am using. As far as I know, all updates and patches in the ESXi 5.0 series have working device passthrough.
However, with ESXi 5.1, USB controller passthrough is broken. On the majority of builds, when you select a USB controller for passthrough, the selection vanishes once you reboot; even worse, it can randomly crash the hypervisor.
The only patch of ESXi 5.1 that seems to allow some sort of USB controller passthrough is build 1021289, corresponding to ESXi510-201303001. You can find this patch on the VMware patch portal, and I have some notes in my previous blog post about updating ESXi. The catch is that the USB controllers will be scheduled to have passthrough disabled on the next reboot.
This wasn’t really a problem for me because I rarely reboot the host machine. Just remember to re-enable the passthrough devices in the DirectPath I/O Configuration screen before you shut down or reboot the host, and they will work the next time ESXi boots up.
So in short, for reliable device passthrough in VMware ESXi, stick with the 5.0 series. If you have to upgrade to ESXi 5.1, install the ESXi510-201303001 patch, and remember to re-enable the passthrough devices before rebooting the host. Keep track of this thread on the VMware Community, as it has been a long-running thread documenting issues with device passthrough on ESXi 5.1.
Back in January I built a VMware ESXi 5 whitebox as my home server. I updated the hypervisor today and I thought I’d record the process so that I can refer back to it later. The blog post I found most useful was from VMware Front Experience. If you’re looking for the detailed procedures, I’d suggest you look at that post.
Upgrading from 5.0 to 5.1
The upgrade file can be found here on the VMware download site. For an upgrade from 5.0 to 5.1, the file to download is: VMware-ESXi-5.1.0-799733-depot.zip.
After downloading the file, scp it to the ESXi host, onto one of the datastores.
SSH into the ESXi host, and run the command: esxcli software profile install -d /vmfs/volumes/datastore1/VMware-ESXi-5.1.0-799733-depot.zip -p ESXi-5.1.0-799733-standard. You can also run esxcli software profile update instead; the difference is described in the blog post I referenced above.
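Put together, the transfer and install look like this. The host address esxi-host is a placeholder, and datastore1 is the datastore name from my setup; adjust both for your environment:

```shell
# From a workstation: copy the offline bundle to a datastore on the host
# (esxi-host is an assumed hostname — substitute your own).
scp VMware-ESXi-5.1.0-799733-depot.zip root@esxi-host:/vmfs/volumes/datastore1/

# On the host: install the new image profile from the bundle.
esxcli software profile install \
  -d /vmfs/volumes/datastore1/VMware-ESXi-5.1.0-799733-depot.zip \
  -p ESXi-5.1.0-799733-standard

# Reboot into the upgraded hypervisor.
reboot
```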
When the update completes, reboot the server. When you power the VMs back on for the first time, the vSphere Client might ask whether you moved or copied the VMs, since the UUID changed. Select “I moved the VMs”.
Rolling back to ESXi 5.0
ESXi 5.1 wasn’t working too well for me. I was having problems passing through my USB controllers to the VMs. I decided to roll back to 5.0, and luckily VMware makes rolling back easy.
1. When you reboot the host, press SHIFT+R when the hypervisor first boots up (you’ll see the cue at the bottom right of the screen).
2. Type ‘y’ to confirm rolling back the hypervisor.
3. The hypervisor will boot up with the old version.
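To confirm which version you are back on after the rollback, the build number can be checked from the console (assumes SSH or shell access to the host):

```shell
# Print the ESXi version and build number:
vmware -vl

# esxcli reports the same information in a structured form:
esxcli system version get
```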
So what I ended up doing was just patching the hypervisor to the latest build.
I have two old computers (a Pentium III and a Celeron, circa the early 2000s) that I currently use as servers for file storage, backups, and testing. I thought it was about time to consolidate these servers, up the performance, and set up a flexible test environment for my coding endeavours.
VMware’s free ESXi hypervisor piqued my interest earlier last year. It’s comparable to XenServer but apparently has better support for Windows virtual machines. Being a bare-metal hypervisor, it should give better performance than a typical virtual machine sitting on top of a full-blown operating system. So I set my sights on building an inexpensive but powerful ESXi whitebox that would take over the roles of my old computers.
I did a lot of research on ESXi and compatible components from various sites, blogs and forums. I learned that ESXi was quite picky about what hardware it would run on. I definitely wanted to buy the correct components that would work with ESXi 5, aiming to get everything under $500.
This is what I came up with (prices after price matching/rebates):