FreeNAS and ZFS

Since I built my home server back in 2012, I’ve had a FreeNAS virtual machine running on it as the file server of my home network.  For the past two years, I’ve used it for the simplest of tasks (serving files), but over the past week I’ve started looking deeper at some of the cool things FreeNAS and ZFS can do.  The description of each will be brief; each could probably be expanded into a full blog post, which I may do if I have time.  Until then, if your interest has been piqued, you’ll have to do some additional research on your own.

FreeNAS web interface
One of the configuration screens on the FreeNAS web interface

First, let me briefly introduce what FreeNAS is.  FreeNAS is a system based on FreeBSD that primarily provides a network-attached storage (NAS) service for your network.  It uses the ZFS file system, which as you’ll see in a bit has quite a number of interesting features.  FreeNAS comes with a web interface where you can easily configure everything.

So with that introduction out of the way, let’s get into FreeNAS and ZFS!


In plain English, FreeNAS supports file sharing with Windows, Mac, and *nix computers (via the CIFS/SMB, AFP, and NFS protocols). That was a pro for me because I have all flavours of operating systems on my computers at home.

The support for AFP (Apple Filing Protocol) includes Time Machine endpoints, which is something worth discussing in its own section below.

Networked Time Machine backups

It’s easy to set up Time Machine on your Mac using an external hard drive. However, unless you actually plug the drive in, there’s no opportunity to back up.  Sometimes I use my computer in my living room, other times in my bedroom.  Sometimes my backup drive isn’t where I’m working and I’m too lazy to go get it.  There must be a better solution.

If you’re willing to spend a couple hundred dollars for an AirPort Time Capsule, it will allow you to make backups over the network, even over Wi-Fi.  I actually bought one to try, but I returned it within a week because what it did really didn’t justify the cost.

Fortunately, FreeNAS has the option of enabling Time Machine endpoint functionality on AFP shares.  Now, whenever I’m at home within Wi-Fi range, my Mac will automatically make Time Machine backups.  Hands-free backups!  Awesome!  And with multiple Time Machine targets in OS X Mountain Lion, I’m able to have a backup to the NAS as well as a backup to an external drive whenever I get that plugged in.
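For reference, multiple destinations can also be managed from the command line with `tmutil` (the volume name below is just an example; if I recall correctly, the `-a` flag for additional destinations arrived with Mountain Lion):

```shell
# Add the mounted AFP share as an additional Time Machine destination
$ sudo tmutil setdestination -a /Volumes/TimeMachine

# List all configured destinations
$ tmutil destinationinfo

# Kick off a backup right away instead of waiting for the hourly schedule
$ tmutil startbackup
```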

Time Machine
My Time Machine configuration

One note is that networked backups are somewhat more finicky: the backup target gets corrupted more often than the one on the external drive.  However, with OS X Mavericks and the latest FreeNAS builds, I’ve noticed it’s generally a lot more stable than when I first configured it.  Mostly, remembering to stop any in-progress backup before turning off the computer will reduce the chance of corruption.

For more information about this, check out the following links:

Snapshots and replication

Now let’s talk about some of the features of ZFS, which FreeNAS nicely gives us access to through the web configuration.

At some point you may want to ensure that the data on your NAS is backed up at a remote location (because even the most complex RAID setups won’t save you in the case of a fire or flood).  This is where snapshots and replication come into play.

ZFS snapshots are lightweight and only store changed blocks.  This means they are fast to create (no downtime required) and don’t take up much extra space on disk.  Snapshots can then be sent to another host with a ZFS volume over SSH.
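As a rough sketch of what happens under the hood (the pool/dataset names `tank/media` and `backup/media` and the host `backup-host` are placeholders):

```shell
# Take a snapshot -- near-instant, and initially consumes almost no space
$ zfs snapshot tank/media@2014-04-20

# Replicate the full snapshot to another ZFS host over SSH
$ zfs send tank/media@2014-04-20 | ssh backup-host zfs receive backup/media

# A week later: send only the blocks that changed since the last snapshot
$ zfs snapshot tank/media@2014-04-27
$ zfs send -i tank/media@2014-04-20 tank/media@2014-04-27 \
      | ssh backup-host zfs receive backup/media
```

FreeNAS’s periodic snapshot and replication tasks essentially automate this sequence on a schedule.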

FreeNAS’s web interface makes setting up automatic snapshots and replication very easy.  In addition, the replication target is quite flexible: all that’s required is a host with ZFS and SSH.  That means you’re not locked into using a FreeNAS system as a replication target.  In fact, using packages from the ZFS on Linux project, most 64-bit Linux distributions can be used as replication targets.  I went with Debian as that was the easiest to set up for my particular case.
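For reference, preparing a Debian box as a replication target looked roughly like this (package names vary by release; `zfs-dkms` and `zfsutils-linux` are the names in Debian’s contrib archive, and `/dev/sdb` is a placeholder for the backup disk):

```shell
# Build the ZFS kernel module and install the userland tools
$ sudo apt-get install linux-headers-$(uname -r) zfs-dkms zfsutils-linux

# Create a pool on the backup disk to receive the replicated datasets
$ sudo zpool create backup /dev/sdb
```

After that, it’s just a matter of letting the FreeNAS replication user log in over SSH with its public key, and FreeNAS takes care of the scheduled send/receive.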

For more information about this, check out the following links:

FreeNAS plugins

I came across the plugins last week when I was diving a bit deeper into FreeNAS.  I haven’t explored this fully yet.

FreeNAS is primarily a file server.  However, as it’s a computer that’s almost always running, it makes sense to have other services run off of it so that a separate application server isn’t necessary.  This is where plugins come in.

The list of plugins is available on the FreeNAS wiki.  There is also an article on installing the plugins.

The plugins that I’m interested in are btsync (BitTorrent Sync), ownCloud, and the media plugins.  Right now, I have a separate virtual machine that serves these applications while storing the data on the NAS.  Using plugins, this extra virtual machine may no longer be necessary!

Final comments

If you take a look at the Wikipedia article on ZFS, you’ll see there are a lot of interesting features in ZFS.  It supports its own type of software RAID (RAID-Z) to protect against drive failures.  It’s also possible to encrypt and compress datasets.  This is just scratching the surface of what ZFS can do.

ZFS is a robust, reliable, and practical file system for network-attached storage.  Combined with the web interface provided by FreeNAS, that functionality becomes accessible for use in diverse environments.

Are you using FreeNAS or ZFS in an interesting configuration?  Have other tips for other users?  Post your ideas in the comments!

Secure your Mac’s infrared port against random Apple Remotes

If you have a MacBook with an infrared receiver, did you know your Mac could be open to other people controlling it?  By default, Mac OS X will respond to the signal of any Apple Remote.  Although the effect is relatively harmless (they will probably just be able to play some random tracks in iTunes), it can range from annoying, if you’re studying in the library and a friend decides to prank you, to embarrassing, if you happen to be giving a presentation.

Most people do not need to allow any Apple Remote to control their computer.  Why would you want other people’s Apple Remotes to control your computer?  Here is a tutorial for securing your infrared port so that only your own Apple Remote can control your computer.

If you have an Apple Remote…

The icon showing a paired Apple Remote.

You can pair your remote with your computer by pressing and holding the Menu and Next (right) buttons for several seconds, while pointing the remote to the infrared receiver (on the MacBook Pro unibody models, the port is beside the power/sleep light on the front edge of the computer).  The pairing logo will show up in the middle of your screen when the pairing is complete.

If you don’t have an Apple Remote…

You can disable the infrared port so that nobody with a random Remote can control your computer.

  1. Open System Preferences → Security & Privacy.
  2. If the preferences are locked, you will need to click on the lock at the bottom left and enter your password.
  3. Click the Advanced… button at the bottom right.
  4. Check “Disable remote control infrared receiver.”
Security & Privacy - Advanced Options
The advanced options of the Security & Privacy system preferences panel.
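If you prefer the Terminal, the same setting can be toggled directly.  On the systems I’ve tried, the checkbox flips the `DeviceEnabled` key in the IR controller’s preference file (apply the usual caution when using `defaults write`):

```shell
# Disable the infrared receiver (equivalent to checking the box above)
$ sudo defaults write /Library/Preferences/com.apple.driver.AppleIRController DeviceEnabled -bool false

# Re-enable it later
$ sudo defaults write /Library/Preferences/com.apple.driver.AppleIRController DeviceEnabled -bool true
```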

Hopefully this tutorial will help you avoid annoying or embarrassing situations when people try to prank you with their own Apple Remote.

Featured image by Julien Gong Min on Flickr.



The Heartbleed vulnerability has been all over the news this past week. As usual, the media sometimes twists the facts, sometimes intentionally, other times inadvertently. For example, I’ve heard Heartbleed called a virus, or framed as something deliberately created to be malicious.  Also, from reading people’s comments on online news articles and blog posts, it seems that many people don’t really understand what Heartbleed is or does.  From my point of view as a software developer, I would like to provide some information and resources that I believe accurately report the facts (though as I’m not an expert in the field of encryption/security, you may also want to take these with a grain of salt).

Heartbleed explained

Heartbleed is, quite simply, a software bug. Sure, there are bugs in nearly all, if not all, software out there (obviously we developers try not to introduce bugs, but we humans are unfortunately imperfect 🙁 ), so what makes this particular bug newsworthy?

  1. This particular bug is a vulnerability, which allows a malicious attacker to gain information that should not be accessible.
  2. The bug is in a library (called OpenSSL) that is used in a number of programs that in turn are run on a large number of computers worldwide.
  3. The vulnerability has been out in the wild for two years.
  4. There’s no trace left behind by a malicious attacker exploiting this vulnerability.

I came across this XKCD comic last night. I think it’s a pretty simple way to understand what the Heartbleed vulnerability allows a malicious attacker to do.

Heartbleed Explanation – XKCD comic

The comic illustrates the case where the victim is the “server” and the malicious attacker is the “client.”   This is the case that most people are concerned with, as it is likely that servers running the exploitable software are easier to find and will probably have more “interesting” data in the memory.  The data could potentially be usernames and passwords, credit card information, or encryption keys, but on the other hand it could also be just bogus data that happened to also be in memory.  The data that the attacker could gain really depends on what happens to be in adjacent memory at that time.

However, the vulnerability works both ways (if the software on the “client” side uses a vulnerable version of OpenSSL).  You could own a device or run a program on your computer that allows a maliciously programmed “server” to read memory off of your device using the same exploit.  For example, Android 4.1.1 devices are susceptible to Heartbleed.

Although web servers are the most common targets being mentioned, there are other services that could possibly be affected by Heartbleed including FTP servers and mail servers.

If you are interested in the nitty-gritty details of how the exploit works, CloudFlare has an article on the low-level details (just disregard their claim that private keys aren’t accessible; they were later disproven on that point).  For higher-level information, the dedicated Heartbleed site has very clear information and a nice FAQ.  Troy Hunt also has an informative FAQ about Heartbleed.

What to do about it

For end users

Since an attacker exploiting Heartbleed leaves no trace, compounded by the fact that the vulnerability has existed for over two years, it’s not possible to determine exactly what data has been compromised.  In addition, if encryption keys were gleaned through Heartbleed, even more data could be compromised by decrypting historic traffic logs (if such logs exist in the hands of the attacker).

So for end users, the precautionary recommendation is to change your passwords after the services that were affected have been patched.  Mashable has a running list of the status of popular web services that you can use to decide whether to change your password.  If you use a service that isn’t listed there, you can check it yourself on Filippo Valsorda’s test site.  Keep in mind, however, that not only web services are affected.  There are also recommendations not to log in to services that are still known to be vulnerable, because logging in places your credentials in memory, where they are susceptible to being read.  Finally, ensure that all the software and operating systems you are running are up to date.

For system administrators, developers and service providers

Obviously, ensuring that OpenSSL is up to date or patched is top priority. Troy Hunt provides some additional advice in his blog post.
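A quick way to check the state of a machine: in the affected 1.0.1 branch, versions 1.0.1 through 1.0.1f are vulnerable and 1.0.1g contains the fix (though note that distributions often backport the fix without bumping the version string).

```shell
# Which OpenSSL release is installed?
$ openssl version
OpenSSL 1.0.1g 7 Apr 2014

# On Debian/Ubuntu, pull in the patched packages...
$ sudo apt-get update && sudo apt-get install --only-upgrade openssl libssl1.0.0

# ...and restart services that link against the library, e.g.:
$ sudo service apache2 restart
```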

Heartbleed and the goto fail and GnuTLS bugs

Heartbleed isn’t related to the Apple goto fail or GnuTLS bugs we’ve seen in the past couple of months.  The goto fail and GnuTLS bugs enable man-in-the-middle attacks, where a malicious intruder can pretend to be the trusted service you’re communicating with and intercept messages between you and the service.  Heartbleed, on the other hand, allows attackers to read parts of a computer’s memory that they should not have access to.

OpenSSL and open source projects

OpenSSL is an open-source project with eleven volunteer developers maintaining one of, if not the, most used SSL/TLS libraries, probably on their own time.  I think they deserve respect for taking on the heavy responsibilities of this project.

Open-source projects allow external developers to read the source code and even submit improvements and contributions.  Depending on the project, there are different procedures for getting contributions accepted, usually including a code review process where the core maintainers ensure that the contributions work as intended and meet the standards of the project (kind of like how a newspaper editor goes over their writers’ articles before they get published).  Since humans aren’t 100% perfect, bugs and mistakes unfortunately slip through, as much as we try to prevent them.

While it is possible to order security audits of software, for open-source projects that usually don’t generate any profit, it is difficult to come up with the money.  I remember when we got a security audit for MyBB, it was in the order of thousands of dollars.


There is a lot of information about the Heartbleed vulnerability in the news and media, and judging from the comments on many blog posts and news articles, many people don’t really understand what Heartbleed is or its implications.  I hope this article sheds a little light on that and provides resources for those who want to dig a little deeper.

A real MacBook Pro

My MacBook Pro

I bought my MacBook Pro back in 2009.  It was a Mid-2009 (2nd generation) version with a Core 2 Duo, the basic 2GB of memory, and a 250GB hard disk drive.  I chose Mac for many reasons; here are some of them, ordered by importance:

  1. Solid construction:  The unibody construction was a huge factor.  The size was quite slim and easily portable.  The aluminum exterior felt solid.  Since getting the laptop, I’ve only dropped it once.  The hard drive died as a result (as expected), but it was not a big deal to replace.
  2. Battery Life:  The battery life exceeded the average of other laptops of comparable performance and price.  I didn’t end up using the advertised 7 hours most of the time but 3-5 hours was good enough for me.
  3. Compatibility with Unix/Linux:  The Mac operating system is based on Unix.  As a computer science student, being able to easily compile and run *nix programs, navigate around in the Terminal, and connect to remote *nix servers was a definite plus.
  4. Compatibility with Windows:  This doesn’t seem to be well known, but a Mac easily lets you dual-boot into Windows with its Boot Camp software to run any Windows programs natively.  I also used VirtualBox to set up a Windows virtual machine for the programs that don’t need native performance or external inputs.
  5. Plug and play with projectors/monitors: From using an external monitor at home, to plugging into monitors and projectors at school and places I volunteer, it had to be good to go without much hassle.  For the most part, Mac OS X delivered this although I still prefer the more detailed options available back with Snow Leopard and Lion.
My MacBook Pro
My MacBook Pro, now running OS X Mavericks

The MacBook Pro is an awesome workhorse, able to do pretty much everything I’ve thrown at it: homework, programming various things, projecting, editing photos and videos.  It has been my primary computer for the past 4.5 years, and having replaced the battery last year, I think it could probably last another couple of years.  I’ve slowly upgraded the hardware, maxing out the memory at 8GB and settling on a Seagate 750GB Momentus XT solid state hybrid drive, which fit the bill of large storage space with slightly better performance thanks to the flash cache.

I had hoped that Apple would be producing this line of MacBook Pros for a bit longer so that when my current one dies, I’d be able to replace it with another.  However, after surfing the Apple Store recently, I realized that my presumption may not hold true much longer.  There’s only one model of the 2nd generation MacBook Pro left and it hasn’t been updated since 2012.  The rest of the MacBook Pro lineup consists of retina display models.

MacBook Pro listing on the Apple Store
Apple Store with the 13″ MacBook Pro being the last 2nd generation MacBook Pro model still standing.

MacBook “Pro” with Retina Display

I wouldn’t really call the new MacBooks “Pro”.  In my opinion, the current Retina MacBook Pros should just be called “MacBook with Retina Display”.

  1. No network port: How am I supposed to set up a router or debug network issues if I need working Wi-Fi first?  Also, for transfers, a cabled connection is a lot more reliable and, for most access points, faster than Wi-Fi.
  2. No optical disk drive: How am I supposed to read/write CDs and DVDs with installation media to setup older computers?  How am I supposed to play DVDs? — a lot of educational media is still on DVDs, if not tape!
  3. No user upgradable parts:  There’s no way to replace a stick of memory if one has gone bad.  There’s no way to upgrade your memory if you need more.  There’s no way to buy a larger hard drive if you run out of space.  You have to consider how much memory and space you’ll need up front, and pay Apple’s premium for that specific configuration.
  4. No infrared sensor: I use the infrared sensor with the Apple Remote as a quick remote when giving presentations.  The Apple Remote is a lot cheaper, smaller, and more convenient than other remotes out there for small-scale presentations.

Yes, I realize that many people today (and more down the road) probably don’t need a network port, optical disk drive, or upgradable parts, but those people are probably not “Pro” users.  That is exactly why I think the current Retina MacBook Pro should be renamed “MacBook”, and the MacBook Pro lineup should continue to be refreshed.

A real MacBook Pro

What I would consider a real MacBook Pro is one with the 2nd generation hardware (retaining the Ethernet port and optical disk drive), updated with an Intel Haswell processor and a retina display.  If Apple took the old MacBook Pro line and refreshed it that way, it would be the perfect computer for me, and I’m sure I wouldn’t be the only one buying it.