Replacing a failed disk in RAID on an OVHcloud dedicated server

Yesterday morning, I woke up to some Discord messages saying that one of my websites was offline. Not a great way to start the day! I first tried to ascertain the scope of the issue; it turned out that none of the services on that one dedicated server were accessible at all. Panic started to set in a bit… did I forget to pay my monthly server bill? Did the server get hacked? Or did the server burn down?

FreeNAS and ZFS

Since I built my home server back in 2012, I’ve had a FreeNAS virtual machine running on it as the file server of my home network. For the past two years, I’ve been using it for the simplest of tasks (serving files). But over the past week, I’ve started looking more deeply into some of the cool things FreeNAS and ZFS can do. The descriptions of each of these are going to be brief; each could probably be expanded into a full blog post, which I may do if I have time. Until then, if your interest has been piqued, you will have to do some additional research on your own.

One of the configuration screens on the FreeNAS web interface

First, let me briefly introduce what FreeNAS is. FreeNAS is a system based on FreeBSD that primarily provides a network-attached storage (NAS) service for your network. It uses the ZFS file system, which, as you’ll see in a bit, has quite a number of interesting features. FreeNAS comes with a web interface where you can easily configure everything.

So with that introduction out of the way, let’s get into FreeNAS and ZFS!

CIFS, AFP, and NFS

In plain English, FreeNAS supports file sharing with Windows, Mac, and *nix computers. That was a pro for me because I have all flavours of operating systems on my computers at home.

The support for AFP (Apple Filing Protocol) includes Time Machine endpoints, which is something worth discussing in its own section below.

Networked Time Machine backups

It’s easy to set up Time Machine on your Mac using an external hard drive. However, unless you actually plug the drive in, there’s no opportunity to back up. For me, sometimes I use my computer in my living room, other times in my bedroom. Sometimes my backup drive isn’t where I’m working, and I’m too lazy to go get it. There must be a better solution.

If you’re willing to spend a couple hundred dollars for an AirPort Time Capsule, it will allow you to make backups over the network, even over Wi-Fi.  I actually bought one to try, but I returned it within a week because what it did really didn’t justify the cost.

Fortunately, FreeNAS has the option of enabling Time Machine endpoint functionality on AFP shares.  Now, whenever I’m at home within Wi-Fi range, my Mac will automatically make Time Machine backups.  Hands-free backups!  Awesome!  And with multiple Time Machine targets in OS X Mountain Lion, I’m able to have a backup to the NAS as well as a backup to an external drive whenever I get that plugged in.
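
If you prefer the terminal, the backup destination can also be set with the tmutil command that ships with OS X. A rough sketch (the host, share, and volume names here are made up):

    # Point Time Machine at the AFP share on the NAS
    sudo tmutil setdestination afp://user:password@freenas.local/TimeMachine

    # On Mavericks, -a adds a destination instead of replacing it,
    # which is how multiple backup targets can be configured
    sudo tmutil setdestination -a /Volumes/MyExternalDrive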

My Time Machine configuration

One note is that networked backups are somewhat more finicky: the network backup target gets corrupted more often than the one on the external drive. However, with OS X Mavericks and the latest FreeNAS builds, I’ve noticed it is generally a lot more stable than when I first configured it. Mostly, remembering to stop any running backup before turning off the computer will reduce the chance of corruption.
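
For what it’s worth, tmutil can help with that last point too:

    tmutil status        # shows whether a backup is currently running
    tmutil stopbackup    # stops the running backup cleanly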

Snapshots and replication

Now let’s talk about some of the features of ZFS, which FreeNAS nicely gives us access to through the web configuration.

At some point in time you may want to ensure that your data on the NAS is backed up at a remote location (because even the most complex RAID setups won’t save you in the case of a fire or flood). This is where snapshots and replication come into play.

ZFS snapshots are lightweight and only store changed blocks. This means they are fast to create (no downtime required) and take up little extra space on disk. Snapshots can then be sent over SSH to another host with a ZFS volume.
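
Under the hood, this boils down to a handful of ZFS commands. A minimal sketch, assuming a local pool called tank and a remote host called backup-host (both names made up):

    # Take an instant snapshot of a dataset
    zfs snapshot tank/data@2014-01-01

    # Send it over SSH into a pool on the remote host
    zfs send tank/data@2014-01-01 | ssh backup-host zfs receive backuppool/data

    # Later snapshots can be sent incrementally (-i), transferring only changed blocks
    zfs send -i tank/data@2014-01-01 tank/data@2014-01-08 | ssh backup-host zfs receive backuppool/data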

FreeNAS’s web interface makes setting up automatic snapshots and replication very easy. In addition, the choice of replication target is quite flexible, because all that is required is a host with ZFS and SSH. That means you’re not locked into using a FreeNAS system as the replication target. In fact, using packages from the ZFS on Linux project, most 64-bit Linux distributions can be used as replication targets. I went with Debian, as that was the easiest to set up for my particular case.
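
The target side needs very little: a pool to receive into, and SSH access for the replication key that FreeNAS generates. Roughly (device and pool names are examples):

    # On the Debian target: create a pool to receive the replicated datasets
    zpool create backuppool mirror /dev/sdb /dev/sdc

    # Authorize the replication public key copied from the FreeNAS web interface
    cat replication-key.pub >> /root/.ssh/authorized_keys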

FreeNAS plugins

I came across the plugins last week when I was diving a bit deeper into FreeNAS.  I haven’t explored this fully yet.

FreeNAS is primarily a file server.  However, as it’s a computer that’s almost always running, it makes sense to have other services run off of it so that a separate application server isn’t necessary.  This is where plugins come in.

The list of plugins is available on the FreeNAS wiki.  There is also an article on installing the plugins.

The plugins that I’m interested in are btsync (BitTorrent Sync), owncloud, and the media plugins. Right now, I have a separate virtual machine that serves these applications while storing the data on the NAS. With plugins, this extra virtual machine may no longer be necessary!

Final comments

If you take a look at the Wikipedia article on ZFS, you’ll see that ZFS has a lot of interesting features. It supports its own type of software RAID to protect against drive failures. It’s possible to encrypt and compress data sets as well. This is just scratching the surface of what ZFS can do.
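
As a small taste, here is roughly what the RAID and compression features look like from the command line (pool and disk names are examples):

    # RAID-Z2: any two of these four disks can fail without losing data
    zpool create tank raidz2 da0 da1 da2 da3

    # Turn on transparent compression for a dataset
    zfs set compression=lz4 tank/data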

ZFS is a robust, reliable, and practical file system for network-attached storage. Combined with the web interface provided by FreeNAS, this functionality can be put to use in diverse environments.

Are you using FreeNAS or ZFS in an interesting configuration? Have tips for other users? Post your ideas in the comments!

USB Controller Passthrough with VMware ESXi 5.1

Early last year I built myself a VMware ESXi whitebox computer. VMware ESXi is a lightweight operating system that allows multiple virtual computers (referred to as virtual machines, or VMs) to run inside one physical computer (called the host) at the same time. For example, I usually have three VMs running on my box: a FreeNAS file server, Ubuntu, and Windows 8.

One of the features of ESXi (and other hypervisors) is that you can pass physical devices, such as a video card and USB devices, through to the VMs. That way, you can interact with one of the VMs running inside the host using a monitor and physical input devices, much like a real computer, while other operating systems run on the same machine at the same time.

A simplified diagram showing how hardware can be passed through to virtual machines through the hypervisor.

This setup worked well with ESXi 5.0, as long as you had chosen hardware that supports device passthrough; see my original post for the hardware I am using. As far as I know, all updates and patches in the ESXi 5.0 series have working device passthrough.

However, with ESXi 5.1, USB controller passthrough is broken. On the majority of builds, when you select a USB controller for passthrough, the selection will vanish once you reboot. Or, even worse, the hypervisor will randomly crash.

The only patch of ESXi 5.1 that seems to allow some sort of USB controller passthrough is build 1021289, corresponding to ESXi510-201303001. You can find this patch on the VMware patch portal, and I have some notes about updating ESXi in my previous blog post. The catch is that the USB controllers are scheduled to have passthrough disabled on the next reboot.

When you enable passthrough devices in the configuration, they will work after you reboot. However, they will be configured to disable themselves after the subsequent reboot.

This wasn’t really a problem for me because I rarely reboot the host machine. Just remember, before you shut down or reboot the host, to re-enable the passthrough devices in the DirectPath I/O Configuration screen; they will then work the next time ESXi boots up.
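
If you want to double-check things from the ESXi shell rather than the vSphere Client, listing the PCI devices is a quick way to identify the USB controllers and their addresses (the exact output varies between builds):

    # List all PCI devices known to the host
    esxcli hardware pci list

    # Or just locate the USB controllers
    lspci | grep -i usb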

So in short, for reliable device passthrough in VMware ESXi, stick with the 5.0 series. If you have to upgrade to ESXi 5.1, install the ESXi510-201303001 patch, and remember to re-enable the passthrough devices before rebooting the host. Keep an eye on this thread on the VMware Community, as it is a long-running thread documenting issues with device passthrough on ESXi 5.1.

Updating VMware ESXi

Back in January I built a VMware ESXi 5 whitebox as my home server.  I updated the hypervisor today and I thought I’d record the process so that I can refer back to it later.  The blog post I found most useful was from VMware Front Experience.  If you’re looking for the detailed procedures, I’d suggest you look at that post.

Upgrading from 5.0 to 5.1

  1. The upgrade file can be found on the VMware download site. For an upgrade from 5.0 to 5.1, the file to download is VMware-ESXi-5.1.0-799733-depot.zip.
  2. After downloading the file, scp it to the ESXi host, onto one of the data stores.
  3. SSH into the ESXi host, and run the command:
    esxcli software profile install -d /vmfs/volumes/datastore1/VMware-ESXi-5.1.0-799733-depot.zip -p ESXi-5.1.0-799733-standard
    You can also run esxcli software profile update .... The difference is described in the blog post I referenced above.
  4. When the update completes, reboot the server. When you bring the VMs up again for the first time, vSphere Client might ask whether you moved or copied the VMs, since the UUID changed. Select “I moved the VMs”.
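
Putting steps 2 and 3 together, the whole exchange looks roughly like this (the host name is a placeholder, and note that the host generally needs to be in maintenance mode, with all VMs powered off, for the install to go through):

    # Copy the offline bundle to a datastore on the host
    scp VMware-ESXi-5.1.0-799733-depot.zip root@esxi-host:/vmfs/volumes/datastore1/

    # On the host: enter maintenance mode, install the new image profile, reboot
    esxcli system maintenanceMode set --enable true
    esxcli software profile install -d /vmfs/volumes/datastore1/VMware-ESXi-5.1.0-799733-depot.zip -p ESXi-5.1.0-799733-standard
    reboot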

Rolling back to ESXi 5.0

ESXi 5.1 wasn’t working too well for me. I was having problems passing through my USB controllers to the VMs. I decided to roll back to 5.0, and luckily VMware makes rolling back easy.

  1. When you reboot the host, press Shift+R when the hypervisor first boots up (you’ll see the cue at the bottom right of the screen).
  2. Type ‘y’ to confirm rolling back the hypervisor.
  3. The hypervisor will boot up with the old version.

Patching 5.0

So what I ended up doing was just patching the hypervisor to the latest build.

  1. The latest patches can be found on the VMware patch download portal.
  2. After downloading the file, scp it to the ESXi host, onto one of the data stores.
  3. SSH into the ESXi host, and run the command:
    esxcli software vib install -d "/vmfs/volumes/datastore1/patches/ESXi500-201209001.zip"
  4. When the update completes, reboot the host if required.
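
After the reboot, you can confirm that the new build is running:

    # Both of these print the running version and build number
    vmware -v
    esxcli system version get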

I apologize if this blog post is a little terse; it is mainly a reference for myself.
