Packer is a specialized tool for packaging cloud images. The concepts used in Packer are simple but powerful, and it seems to be the best tool for building images in an automated fashion. Basically, the tool takes an existing cloud image, starts up a cloud VM with that image, runs various provisioners that you define on the VM, burns an image, and then shuts everything down. The end result is a new image that can be used for cloud deployments.

The documentation on the site is better than what I could write here, so I will focus on a working example and the issues I ran into to supplement the official documentation.

Chef Example with Multiple Builders

The example I’m going to show will upload a tarball of Chef code, install Chef, and then do a chef-solo run to build up the image. Note that Packer does have a native chef-solo provisioner, so you may choose to use that instead. I chose this method of running Chef because I wanted to know exactly how everything was installed and run to ease debugging.

One feature of Packer that may be interesting to you is that it can build multiple types of machine images. In this example I’ll show how I build the following image types: Amazon (instance based), Amazon (EBS based), Rackspace (OpenStack), and Digital Ocean. Support for other image types is available (VMware, VirtualBox), and Google Compute Engine appears to be coming in the next version.

Packer is a simple binary that takes in a JSON file with build instructions and then builds an image. Unlike Chef or Puppet, Packer really just acts on this one file and assumes you will be uploading assets and running scripts via provisioners. So the below Packer JSON is all that Packer needs to run, and you are expected to write the files that will be executed on the machine.

The basic working example that I have started with is below:

{
  "builders": [
    {
      "access_key": "{{user `aws_access_key`}}",
      "account_id": "{{user `aws_account_id`}}",
      "ami_name": "blog-instance-{{timestamp}}",
      "bundle_upload_command": "sudo EC2_AMITOOL_HOME=/opt/ec2-ami-tools/ec2-ami-tools- -n /opt/ec2-ami-tools/ec2-ami-tools-/bin/ec2-upload-bundle -b {{.BucketName}} -m {{.ManifestPath}} -a {{.AccessKey}} -s {{.SecretKey}} -d {{.BundleDirectory}} --batch --retry",
      "bundle_vol_command": "sudo EC2_AMITOOL_HOME=/opt/ec2-ami-tools/ec2-ami-tools- -n /opt/ec2-ami-tools/ec2-ami-tools-/bin/ec2-bundle-vol -k {{.KeyPath}} -u {{.AccountId}} -c {{.CertPath}} -r {{.Architecture}} -e {{.PrivatePath}}/* -d {{.Destination}} -p {{.Prefix}} --batch --no-filter",
      "instance_type": "m1.small",
      "region": "us-east-1",
      "s3_bucket": "{{user `aws_s3_bucket`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "source_ami": "ami-955b79fc",
      "ssh_username": "ubuntu",
      "tags": {
        "Application": "blog",
        "OS_Version": "Ubuntu",
        "Release": "13.04"
      },
      "type": "amazon-instance",
      "x509_cert_path": "{{user `aws_x509_cert_path`}}",
      "x509_key_path": "{{user `aws_x509_key_path`}}",
      "x509_upload_path": "/tmp"
    },
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-west-2",
      "source_ami": "ami-8c0663bc",
      "instance_type": "m1.small",
      "ssh_username": "ubuntu",
      "ami_name": "blog-ebs-{{timestamp}}"
    },
    {
      "type": "openstack",
      "username": "{{user `rackspace_user`}}",
      "password": "{{user `rackspace_password`}}",
      "provider": "rackspace-us",
      "region": "ORD",
      "ssh_username": "root",
      "image_name": "blog-{{timestamp}}",
      "source_image": "80fbcb55-b206-41f9-9bc2-2dd7aac6c061",
      "flavor": "2"
    },
    {
      "type": "digitalocean",
      "client_id": "{{user `digitalocean_client_id`}}",
      "api_key": "{{user `digitalocean_api_key`}}",
      "image_id": 350076,
      "region_id": 4,
      "size_id": 66,
      "droplet_name": "blog-{{timestamp}}",
      "snapshot_name": "blog-image-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "inline": [
        "sleep 30"
      ],
      "type": "shell"
    },
    {
      "destination": "/tmp/chef-setup.tar.gz",
      "source": "chef-setup.tar.gz",
      "type": "file"
    },
    {
      "destination": "/tmp/encrypted_data_bag_secret",
      "source": "{{user `chef_key_location`}}",
      "type": "file"
    },
    {
      "inline": [
        "sudo mkdir -p /opt/chef-solo/website",
        "cd /opt/chef-solo/website",
        "sudo tar -zxf /tmp/chef-setup.tar.gz -C /opt/chef-solo/website",
        "sudo rm /tmp/chef-setup.tar.gz",
        "sudo /opt/chef-solo/website/util/ /opt/chef-solo/website/chef/solo-nodes/blog-prod.json",
        "sudo docker stop $(sudo docker ps -q)",
        "sudo service docker stop",
        "rm /tmp/encrypted_data_bag_secret"
      ],
      "type": "shell"
    },
    {
      "inline": [
        "sudo apt-get install -y ruby1.8 ruby1.8-dev make unzip build-essential rsync",
        "sudo mkdir -p /opt/ec2-ami-tools",
        "sudo chown -R ubuntu /opt/ec2-ami-tools",
        "wget -r 3 -O /opt/ec2-ami-tools/",
        "cd /opt/ec2-ami-tools",
        "sudo unzip",
        "sudo rm",
        "sudo chown -R root /opt/ec2-ami-tools",
        "sudo sync"
      ],
      "only": ["amazon-instance"],
      "type": "shell"
    }
  ],
  "variables": {
    "chef_key_location": "",
    "aws_access_key": "",
    "aws_account_id": "",
    "aws_s3_bucket": "",
    "aws_secret_key": "",
    "aws_x509_cert_path": "",
    "aws_x509_key_path": "",
    "rackspace_user": "",
    "rackspace_password": "",
    "digitalocean_client_id": "",
    "digitalocean_api_key": ""
  }
}

Executing the example can be done on the command line using the command:

packer build -var-file="$HOME/.packer/env.json" -only=amazon-ebs build.json

The above command assumes you have an env.json file at the specified path and that you only want to run the amazon-ebs builder for this execution. See the Packer documentation for all the command-line details.

The focus of this post is the Packer build and not the Chef code, but to understand what the Chef code is doing you may want to dig further. If so, the core of the work is done in the script which installs and runs Chef, and in the chef-setup.tar.gz tarball, which is basically my website-docker repo. This should hopefully look like a pretty basic Chef Solo run. Note that all of this code is really more for my learning, so use it with caution.
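As a rough sketch of how the tarball the file provisioner uploads could be produced: the directory names below mirror the paths used in the inline provisioner, but the demo layout and file contents are stand-ins, not the real repo.

```shell
# Demo layout standing in for the real website-docker checkout
# (chef/ and util/ mirror the paths referenced by the provisioner).
mkdir -p demo/chef/solo-nodes demo/util
echo '{"run_list":["recipe[website]"]}' > demo/chef/solo-nodes/blog-prod.json

# Package it the way the "file" provisioner expects to find it:
tar -zcf chef-setup.tar.gz -C demo chef util

# Sanity check: list the archive contents
tar -ztf chef-setup.tar.gz
```

On the target machine the shell provisioner then extracts this archive into /opt/chef-solo/website before kicking off the chef-solo run.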

Included in this small example are some items I’ll call out further, since they took me a bit of time to get right.

Variables

In the example above you will see many references to variables. I use variables so that the Packer build file can be checked into source control without exposing any sensitive information. The env.json file referenced on the command line contains all the sensitive information and looks something like this:

{
  "chef_key_location": "/Users/cknudsen/projects/.../chef.key",
  "aws_access_key": "*************",
  "aws_secret_key": "*************",
  "aws_account_id": "*************",
  "aws_x509_cert_path": "/Users/cknudsen/.../aws_cert.pem",
  "aws_x509_key_path": "/Users/cknudsen/.../aws_key.pem",
  "aws_s3_bucket": "*************",
  "rackspace_user": "*************",
  "rackspace_password": "*************",
  "digitalocean_client_id": "*************",
  "digitalocean_api_key": "*************"
}

When Packer is run, all the items that look like {{user `variable_name`}} will be substituted at run time with the values in the env.json file. Also note that the keys in this file must match the keys in the variables section of the file used for building.
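As a minimal illustration of the wiring, here is a stripped-down sketch (the builder and variable name are taken from the example above; the other fields are omitted for brevity, so this fragment alone is not a complete build file):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}"
    }
  ],
  "variables": {
    "aws_access_key": ""
  }
}
```

At build time Packer replaces {{user `aws_access_key`}} with the value supplied for aws_access_key via -var-file (or -var), falling back to the default declared in the variables section.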

Digital Ocean Values

The Digital Ocean Packer documentation is pretty good, however the main issue I had was understanding what to enter for the image_id, region_id, and size_id values. The documentation references the API and the Tugboat gem to find these values, however I found it much easier to pull these IDs out of the HTML of the Digital Ocean site. Basically, go to build a machine manually, inspect the HTML for the different button options, and then use the IDs found there as the values for the respective Packer settings. In my tests this matched up to the expected values every time and was much faster to look up.
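If you would rather use the API route the documentation suggests, the lookups for the three IDs roughly look like this under the (v1) Digital Ocean API. The credentials are placeholders, and the curl calls are commented out since they need a real account:

```shell
# Placeholders -- substitute the credentials from your Digital Ocean account.
DO_CLIENT_ID="CLIENT_ID"
DO_API_KEY="API_KEY"

# v1 API lookups for image_id, region_id, and size_id
# (uncomment the curl lines to run them for real):
for resource in images regions sizes; do
  echo "https://api.digitalocean.com/${resource}/?client_id=${DO_CLIENT_ID}&api_key=${DO_API_KEY}"
  # curl -s "https://api.digitalocean.com/${resource}/?client_id=${DO_CLIENT_ID}&api_key=${DO_API_KEY}"
done
```

Each response is a JSON list whose id fields are the values Packer expects.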

Rackspace (openstack) Values

The OpenStack documentation is actually not very good at describing what the different values should be. Once you get the correct values in place everything works great, but getting to that point can be frustrating.

Currently the OpenStack builder only works for Rackspace and is not a general-purpose OpenStack builder yet. This was what I wanted, but it is obviously not great for other OpenStack deployments. Once you know this, figuring out the values to enter is easier.

To figure out what the source_image and flavor values should be I had to use the Rackspace API and inspect the raw JSON returned. To start you need to get an auth token, which can be done by POSTing the following JSON to the Rackspace identity token endpoint:

{
  "auth": {
     "passwordCredentials":
        { "username":"USERNAME", "password":"PASSWORD"}
  }
}
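A curl version of this auth call looks roughly like the following. The URL is the standard Rackspace Cloud Identity v2.0 tokens endpoint, and the credentials are placeholders; the curl line is commented out since it needs a real account:

```shell
# Substitute your real credentials (these are placeholders).
RACKSPACE_USER="USERNAME"
RACKSPACE_PASSWORD="PASSWORD"

# Write the auth payload shown above to a file.
cat > rs-auth.json <<EOF
{
  "auth": {
    "passwordCredentials": {
      "username": "${RACKSPACE_USER}",
      "password": "${RACKSPACE_PASSWORD}"
    }
  }
}
EOF

# POST it to the identity endpoint (uncomment to run for real):
# curl -s -X POST -H "Content-Type: application/json" \
#   -d @rs-auth.json \
#   https://identity.api.rackspacecloud.com/v2.0/tokens
```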

This should give you a token and a tenant (account) ID needed for the following calls, as well as a bunch of URLs that should be used for future API calls. This initial call also shows all the different region values that can be used. I’m going to use ORD, which is the Chicago DC. Next you will look up the source_image and flavor via the below API calls:

  • For the source_image use the /images URL of the compute endpoint
  • For the flavor use the /flavors URL of the compute endpoint

For both calls use the headers:

  • X-Auth-Project-Id == your tenant (account) ID
  • X-Auth-Token == the token returned by the auth call

The responses from these calls will show you which source_image and flavor values can be used.
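Sketched as curl calls, the two lookups look like this. The compute URL shown is the pattern used by the Rackspace service catalog for the ORD region, but you should take the exact URL from your own auth response; the tenant ID and token are placeholders, and the curl lines are commented out since they need real credentials:

```shell
# Placeholders -- take these values from the auth response above.
TENANT_ID="123456"
TOKEN="example-token"
COMPUTE_URL="https://ord.servers.api.rackspacecloud.com/v2/${TENANT_ID}"

# source_image lookup (uncomment the curl line to run it for real):
echo "${COMPUTE_URL}/images"
# curl -s -H "X-Auth-Project-Id: ${TENANT_ID}" \
#      -H "X-Auth-Token: ${TOKEN}" "${COMPUTE_URL}/images"

# flavor lookup:
echo "${COMPUTE_URL}/flavors"
# curl -s -H "X-Auth-Project-Id: ${TENANT_ID}" \
#      -H "X-Auth-Token: ${TOKEN}" "${COMPUTE_URL}/flavors"
```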

Note that the Rackspace API documentation can be used if the above calls don’t work for you.

AWS Instance Backed Builder

The instance-backed AWS builder is much trickier than the other builders listed here. As you can see in the example, you will need to install your own ec2-ami-tools for the builder to work. This can be done by uploading the zip and then installing the packages required to run the tools. These tools will then be used to package your AMI. Note that you will need Ruby 1.8, as newer versions are not yet supported by the ami-tools, and I also had to override the bundle_upload_command and bundle_vol_command values of the builder so that the proper EC2_AMITOOL_HOME environment variable was set. Once all this was done I was able to successfully build an instance-backed AMI.

One big caveat to the above is that I still have some issues with this builder depending on what is being installed. I’m still trying to track down what exactly breaks it, but this builder has been the slowest and most fragile for me. I’ve resorted to using EBS-backed images for now due to some of these issues. I hope to update this post with a fix once I am consistently building instance-backed AMIs.


Conclusion

Packer is definitely a tool worth checking out if you want to build cloud images. It is being rapidly developed, has pretty good documentation, and is an easy-to-understand tool with a focused goal.