My development setup

At work, I am involved in a broad range of projects, ranging from the occasional dabble into an old legacy site originally developed for PHP 4, to fresh development on a brand new PHP 5.5 code base using Silex. I am also involved in a fairly large number of Magento-based websites (which until recently did not support anything more modern than PHP 5.3). And occasionally I write some Python, mostly in the form of small Flask applications.

I do most of my development on a MacBook Pro, which comes bundled with Apache 2 and a version of PHP 5.x (depending on the version of OS X). MySQL is easily installed using Homebrew (brew install mysql).

I used to create a virtual host for each project I worked on, and that'd be it. This worked great for a while, but the common setup comes with a few caveats:

  • It is not particularly straightforward to run different versions of PHP at the same time (although phpbrew does a good job of making that easier)
  • Always running Apache 2 and MySQL, even when not doing development work, drained my battery. I also found it too cumbersome to remember which services were running and to manually stop them all when not needed.
  • Any other project-specific dependencies (Gearman, Redis, etc.) are hard to isolate on a per-project level
  • Our live websites are hosted on Linux, which may cause inconsistencies with sites developed on OS X.
  • An OS upgrade to a major new version has a tendency to break a whole load of things (Apache 2.2 to 2.4 migration, anyone?)

Not satisfied with the default setup, I wanted the following from my dev setup:

  • Needs to be trivial to turn "on" and "off". When sitting in a client meeting taking notes, I don't want my battery drained by services I don't need at the time
  • Needs to run Linux, to match our live hosting environments
  • OS X upgrades should have minimal effect on the setup
  • Needs to provide a way of isolating each project and its dependencies, where required
  • Needs to be able to run many projects in isolation at the same time
  • Local DNS resolution, so each project/website can be accessed via a memorable URL
  • All services/sites need to be accessible in full from my MacBook, on their own local subnet
  • If desired, services should be exportable on the "public" IP (public being my MacBook's address on the office LAN), for easy previewing of work by co-workers and project managers.

To support this myriad of development environments, I decided that an approach based on a virtual machine would be best.

VMware Fusion, Ubuntu and containers

My development VM ('host VM' from now on) consists of an Ubuntu 14.04 installation, which runs the following Linux containers (LXC):

  • db
  • magento
  • mailcatcher
  • play
  • sentry
  • trusty-base
  • client-1-project
  • client-2-project
  • client-3-project

Ubuntu provides a set of utilities to manage containers (by default, they're an apt-get install lxc away from being usable). DigitalOcean provides a good tutorial on how to get started.
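
For reference, the basic container lifecycle looks something like this (a minimal sketch; trusty-base is the base container from the list above):

sudo apt-get install lxc                                # the LXC utilities
sudo lxc-create -n trusty-base -t ubuntu -- -r trusty   # build a 14.04 base container
sudo lxc-start -n trusty-base -d                        # start it in the background
sudo lxc-attach -n trusty-base                          # open a root shell inside it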

Networking: the host VM

The default setup creates a new veth adapter for each container, which gets added to the lxcbr0 bridge (also created by default). Out of the box, dnsmasq is configured to provide DNS forwarding and DHCP, giving each container some form of connectivity.

As I wanted more control over DNS resolution and didn't particularly need DHCP, I manually created a br0 bridge by editing /etc/network/interfaces on the host VM to look like this:

auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    address 192.168.84.3
    network 192.168.84.0
    netmask 255.255.255.0
    gateway 192.168.84.2
    bridge_ports eth0
    bridge_fd 0
    bridge_maxwait 0

After that, I changed /etc/default/lxc-net to:

USE_LXC_BRIDGE="false"

In VMware, the host VM is configured with a single adapter using NAT. By default, VMware Fusion does not offer a nice UI for changing the IP range this NAT-ed adapter operates on. After some digging, it turns out these settings can be changed by editing Library/Preferences/VMware\ Fusion/vmnet8/nat.conf to:

## NAT gateway address
ip = 192.168.84.2
netmask = 255.255.255.0

Due to how VMware's NAT adapter works (it differs from VirtualBox; more on that later), the IP configured here becomes the default gateway IP for the host VM and all the containers. My MacBook resides on 192.168.84.1, making it part of the local virtual network. The host VM resides on 192.168.84.3, as per the configuration above.

Networking: the containers

Each container then has a /etc/network/interfaces file that looks like this:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.84.10
    network 192.168.84.0
    netmask 255.255.255.0
    gateway 192.168.84.2
    dns-nameservers 192.168.84.3

All containers can now communicate with each other, as well as with the host on the private subnet. The "outside world" is accessible via my MacBook's internet connection by using 192.168.84.2 as the default gateway.
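
With the default lxcbr0 bridge disabled, each container's LXC config also needs to point its virtual NIC at br0. A minimal sketch for the magento container, using LXC 1.0-style keys:

# /var/lib/lxc/magento/config (network section)
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up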

Networking: DNS resolution

As I was already familiar with BIND, I figured I may as well use what I know. I have configured BIND with a local zone (say, bas.dev), as well as two forwarders to resolve "real" DNS queries recursively. I use Google's public DNS servers as forwarders (8.8.8.8 and 8.8.4.4), because they're available from anywhere in the world, which comes in handy when I take my MacBook to different locations. Any public DNS server that allows recursive resolving will work just fine.
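
The relevant BIND configuration looks roughly like this (a sketch: the magento address matches the ping output below, but the mailcatcher address and the SOA values are illustrative):

// /etc/bind/named.conf.options (excerpt)
options {
    forwarders { 8.8.8.8; 8.8.4.4; };
};

// /etc/bind/named.conf.local
zone "bas.dev" {
    type master;
    file "/etc/bind/db.bas.dev";
};

; /etc/bind/db.bas.dev
$TTL 300
@           IN  SOA devbox.bas.dev. admin.bas.dev. ( 1 604800 86400 2419200 604800 )
            IN  NS  devbox.bas.dev.
devbox      IN  A   192.168.84.3
magento     IN  A   192.168.84.17
mailcatcher IN  A   192.168.84.20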

I can now do things like this from the host VM (or any of the containers):

bas@devbox:~$ ping magento.bas.dev
PING magento.bas.dev (192.168.84.17) 56(84) bytes of data.
64 bytes from 192.168.84.17: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 192.168.84.17: icmp_seq=2 ttl=64 time=0.051 ms

The one thing that doesn't work yet is DNS resolution for this local zone from my MacBook itself. My MacBook will use whatever DNS server is provided to it via DHCP, which in turn knows nothing about my local dev zone.

Luckily, in OS X, it is rather simple to send DNS requests for a certain suffix to a non-system name server. All I had to do was create /etc/resolver/bas.dev with the following content:

nameserver 192.168.84.3

This will cause OS X to use 192.168.84.3 to resolve any queries for *.bas.dev.
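
To verify this from the Mac, use a tool that goes through the system resolver; dig and nslookup query name servers directly and bypass /etc/resolver:

$ ping -c 1 magento.bas.dev
PING magento.bas.dev (192.168.84.17): 56 data bytes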

Project files

I have all my projects for #work organised in separate directories inside ~/work on my MacBook. As PHPStorm is my IDE of choice, I need my projects to be accessible via the local file system; otherwise, indexing and file scanning performance is horrifying!

There are a few options available to grant my containers selective access to the project files needed to run the various applications:

  • Approach one: Export ~/work via NFS on my Mac, then use NFS inside the host VM to mount the work directory.
  • Approach two: Use VMware's shared folders functionality.

At first, I opted for approach one, as shared folders had let me down in the past, performance-wise. NFS worked great for a few months, but there was one annoyance I could not sort: attribute caching. NFS caches file attributes like 'last modified' and 'last accessed' for a short while, to improve performance.

In PHP, you mainly debug using the Edit-Save-Refresh approach, and the delay in changes being picked up became unworkable quite quickly. I tried turning off attribute caching, but performance dropped to unacceptable levels while working on large (or Magento-based) projects.

As Magento consists of a lot of files that need to be stat'd on every request, pages took 10 to 20 seconds to load with attribute caching turned off (compared to the 1-ish second with attribute caching). I did not like this, at all.
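
For anyone who wants to try the NFS route regardless, the relevant knobs are the mount options. A sketch of the host VM's fstab entry (the export path and mount point are illustrative; the Mac sits on 192.168.84.1):

# noac disables attribute caching entirely; actimeo=1 would cap it at one second
192.168.84.1:/Users/bas/work  /mnt/work  nfs  rw,noac  0 0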

As a last resort, I decided to give shared folders another go. My previous experience with poor shared folder performance was based on VirtualBox. It turns out that VMware does a much better job when it comes to shared folders: performance was on par with NFS with attribute caching turned on. On top of that, no more waiting for modified files to be picked up!

In both approaches, I mounted ~/work on the host VM. To give the individual containers access, I used various bind mounts. Adding a bind mount to a container is a matter of adding an fstab entry to /var/lib/lxc/<container-name>/fstab.

For example, my Magento container is configured via /var/lib/lxc/magento/fstab:

/mnt/hgfs/work  /var/lib/lxc/magento/rootfs/work    none    bind    0 0

This means that inside the magento container, /work will point to ~/work on my Mac.
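
The /mnt/hgfs/work source path comes from VMware's shared folders mount on the host VM, which looks something like this (assuming the classic VMware Tools hgfs driver; newer open-vm-tools setups use vmhgfs-fuse instead):

# Mount all VMware shared folders under /mnt/hgfs on the host VM
sudo mount -t vmhgfs .host:/ /mnt/hgfs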

The database

As most of my projects use a MySQL database, I decided to create a db container for MySQL, which serves all databases for all projects. I purposely did not want to run a MySQL installation per project, as MySQL is not exactly light on system resources. As we're not dealing with a publicly accessible database, I figured the (lack of) security would be adequate.

Each project will still have its own set of credentials, limiting access to just the project-specific database.
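
Setting up a new project is then a matter of creating a database and a matching user on the db container. A sketch with illustrative names (MySQL's bind-address also needs to allow connections from the other containers):

-- Run on the db container; database, user and password are illustrative
CREATE DATABASE client1_project;
CREATE USER 'client1'@'192.168.84.%' IDENTIFIED BY 'not-so-secret';
GRANT ALL PRIVILEGES ON client1_project.* TO 'client1'@'192.168.84.%';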

Magento

Due to the number of Magento projects I work on, and the similarities between them, it seemed overkill to create a separate container for each Magento installation. Instead, I opted for the simpler approach of having one Magento container which hosts all my Magento dev sites.

To give each Magento installation its own unique DNS name, I create a CNAME entry in my BIND zone on the host VM along the lines of:

myproject       IN  CNAME   magento.bas.dev.
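
Inside the magento container, a name-based virtual host then serves the right site. A sketch (the DocumentRoot is illustrative; /work is the bind mount described earlier):

<VirtualHost *:80>
    ServerName myproject.bas.dev
    DocumentRoot /work/myproject/htdocs
</VirtualHost>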

Bespoke projects

When a project (usually a bespoke one) requires an unusual or large set of external dependencies, I create a separate container for it. Services that'd qualify a project for its own container are things like Gearman, Solr, Elasticsearch, Redis, etc.

As services like Solr have the potential to consume a lot of resources, I simply turn the container off (lxc-stop -n <name>) when I am not working on that project.

MailCatcher and emails

A common task in web applications is sending email, and with it comes the need to test the email-sending functionality in development. According to its website, MailCatcher

Catches mail and serves it through a dream. MailCatcher runs a super simple SMTP server which catches any message sent to it to display in a web interface.

I have installed MailCatcher in its own container (which unfortunately involves a fair amount of playing whack-a-mole with various gem install errors) and run it as follows:

mailcatcher --ip 0.0.0.0 --http-port 80 --smtp-port 25

By default, MailCatcher does not listen on the default SMTP and HTTP ports, presumably for security reasons. As I am running it in its own container, on my own local virtual network, ports 25 and 80 seemed perfectly acceptable.

Before MailCatcher, I used to run exim4 in relay mode. Any message sent by my dev sites would simply be sent to my #work email address. This mostly worked, except that my inbox would occasionally flood with debug emails, or emails would get flagged as spam and I'd never see them.

I now run exim4 on each of my containers in 'smarthost' mode, with mailcatcher.bas.dev as the smart host. This means that any emails sent by my dev containers never leave my local network. My inbox stays clean and test emails can be viewed at http://mailcatcher.bas.dev in MailCatcher's UI.

Handy!

Another added benefit is that all my existing email-sending code simply works, because exim4 still handles the local mail-sending part of the job. If you have previously installed exim4, you can run dpkg-reconfigure exim4-config to reconfigure it to use the mailcatcher container as the smart host.
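
On Debian and Ubuntu, those answers end up in /etc/exim4/update-exim4.conf.conf; the relevant lines look something like this:

dc_eximconfig_configtype='smarthost'
dc_smarthost='mailcatcher.bas.dev::25'
dc_local_interfaces='127.0.0.1'

If you edit the file by hand instead, run update-exim4.conf and restart exim4 to apply the changes.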

Demoing local projects

Sometimes, the need arises to demo a newly developed feature to a colleague or a project manager, before the feature can be deployed to live hosting. As all my projects run on local containers, none of them can be accessed by anyone else on the office network.

To work around this, I have set up a port mapping in VMware that maps port 80 on my MacBook's LAN IP address to 192.168.84.3, which you may recall is the IP of the host VM. On the host VM, I then run Nginx in reverse proxy mode, proxying select dev sites.

From my previous dev setups, I already had a wildcard DNS entry set up in the office that points *.bas.dev to my MacBook's LAN IP address. As my local dev domains also end in .bas.dev, as long as I set up a reverse proxy virtual host for a dev site, everyone in the office can access said site the same way I access it locally.

I opted for Nginx instead of Apache because of its light footprint.
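
A proxy virtual host on the host VM then looks something like this (a sketch; the upstream address is the magento container from earlier):

server {
    listen 80;
    server_name myproject.bas.dev;

    location / {
        # forward to the container serving the site, passing the Host
        # header along so its name-based virtual hosts still match
        proxy_pass http://192.168.84.17;
        proxy_set_header Host $host;
    }
}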

To create a port mapping in VMware Fusion, you have to edit Library/Preferences/VMware\ Fusion/vmnet8/nat.conf again. Then, under the [incomingtcp] heading, add the following:

[incomingtcp]
80 = 192.168.84.3:80

Any other port mappings you may want to make from your MacBook to your host VM (or any of the containers!) can be added in a similar manner.

Why not VirtualBox?

Why did I use the non-free VMware Fusion instead of the free VirtualBox? One reason and one reason only: shared folder performance. VirtualBox's shared folder performance appeared to be an order of magnitude slower than VMware's.

This may not be an issue in a small or medium-sized project, but with a large software package like Magento, the difference is significant: 1-second page generation times in VMware versus 10-20 seconds in VirtualBox.

Why not Vagrant?

I have tried Vagrant on more than one occasion, but every time I've had a bad experience, be it Ruby misbehaving or VirtualBox not responding. I also did not like the idea of having to run multiple VMs (one per project), as I frequently switch between projects.

What's next?

Right now, I am very happy with my setup. I have isolated environments available for every project, and can run multiple versions of PHP and multiple projects that use Gearman and Solr, all by simply starting my dev VM (which, by the way, is incredibly fast to pause and resume using VMware).

The downsides? Each of the containers is configured by hand and is therefore relatively hard to reproduce. When another developer has to join a project, getting set up takes some effort. I may look at Docker and Packer in the future to address these issues.