Scalable Tools: Murder, a BitTorrent-based file transfer tool for faster deployments

Deploying to one server can be done with a single script running a bunch of ssh/scp commands. With a few more servers, you can run that script in a loop, sequentially, or fork processes to run them in parallel. At some point, though, it becomes unmanageable, especially if you have to update multiple datacenters at the same time. So how does a company like Twitter do its releases?

Murder is an interesting P2P/BitTorrent-based tool which Twitter uses to distribute files during its own software updates. Here are some more details.

Murder is a method of using BitTorrent to distribute files to a large number
of servers within a production environment. This allows for scalable and fast
deploys in environments of hundreds to tens of thousands of servers where
centralized distribution systems wouldn't otherwise function. A "murder" is
the term for a flock of crows, which in this case applies to a bunch of
servers doing something in concert.

In order to do a Murder transfer, several components must be set up
beforehand, many of them a consequence of the BitTorrent nature of the system.
Murder is based on BitTornado.

- A torrent tracker. This tracker, started by running the '' 
script, runs a self-contained server on one machine. Although technically this 
is still a centralized system (everyone relies on this tracker), the 
communication between this server and the rest is minimal and normally 
acceptable. To keep things simple, tracker-less distribution (DHT) is currently 
not supported. The tracker is actually just a mini-httpd hosting an 
/announce path to which the BitTorrent clients report their state.
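To make the tracker's role concrete, here is a sketch of the announce request a client sends. The tracker host, port, and peer values are hypothetical, but the query keys (`info_hash`, `peer_id`, `port`, `left`) are the ones standard BitTorrent clients use:

```python
from urllib.parse import urlencode

def announce_url(tracker, info_hash, peer_id, port, left):
    """Build the GET request a BitTorrent client sends to the tracker's
    /announce path to register itself and learn about other peers."""
    params = urlencode({
        "info_hash": info_hash,  # 20-byte SHA1 of the torrent's info dict
        "peer_id": peer_id,      # unique id this client chose for itself
        "port": port,            # port the client listens on for peers
        "left": left,            # bytes still needed; 0 means "seeding"
    })
    return f"{tracker}/announce?{params}"

# Hypothetical values for illustration only.
url = announce_url("http://tracker.internal:8998",
                   "a" * 20, "murder-peer-0001-xyz", 6881, 0)
print(url)
```

The tracker's reply is simply a list of other peers currently working on the same torrent, which is all the coordination the swarm needs.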

- A seeder. This is the server that has the files you'd like to deploy 
onto all the other servers. For Twitter, this is the server that did the git diff. 
The files are placed into a directory from which a torrent gets created. Murder 
will tgz up the directory and create a .torrent file (a very small file 
containing basic hash information about the tgz file). This .torrent file lets 
the peers know what they're downloading. The tracker keeps track of which 
.torrent files are currently being distributed. Once a Murder transfer is 
started, the seeder is the first server many machines contact to get 
pieces. These pieces are then distributed in a tree fashion to the rest of 
the network, without every peer necessarily fetching them from the seeder itself.
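A .torrent file is just a bencoded dictionary of piece hashes plus metadata. The sketch below builds one for an in-memory payload; it illustrates the BitTorrent metainfo format rather than Murder's actual code, and the file name and tracker URL are made up:

```python
import hashlib

PIECE_LEN = 262144  # 256 KiB pieces, a common default

def bencode(value):
    """Minimal bencoder for the handful of types a .torrent file needs."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, str):
        return bencode(value.encode())
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):  # keys must be sorted per the spec
        return b"d" + b"".join(
            bencode(k) + bencode(v) for k, v in sorted(value.items())
        ) + b"e"
    raise TypeError(type(value))

def make_torrent(name, data, tracker):
    """Build the metainfo (.torrent) bytes for a single payload: the SHA1
    of each fixed-size piece is what lets peers verify downloaded chunks."""
    pieces = b"".join(
        hashlib.sha1(data[i:i + PIECE_LEN]).digest()
        for i in range(0, len(data), PIECE_LEN)
    )
    meta = {
        "announce": tracker,
        "info": {"name": name, "length": len(data),
                 "piece length": PIECE_LEN, "pieces": pieces},
    }
    return bencode(meta)

torrent = make_torrent("Deploy20100101.tgz", b"x" * 300000,
                       "http://tracker.internal:8998/announce")
print(len(torrent), "bytes of metainfo for a 300000-byte payload")
```

Note how small the result is relative to the payload: peers exchange the big tgz among themselves and only the tiny metainfo has to come from a central place.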

- Peers. This is the group of servers (hundreds to tens of thousands) which 
will be receiving the files and distributing the pieces amongst themselves. 
Once a peer is done downloading the entire tgz file, it will continue seeding 
for a while to prevent a hotspot effect on the seeder.

Using Murder boils down to a handful of Capistrano tasks:

1. Configure the list of servers and general settings in config.rb. (one time)
2. Distribute the Murder files to all your servers: (one time)
    cap murder:distribute_files
3. Start the tracker: (one time)
    cap murder:start_tracker
4. Create a torrent file from a remote directory of files (on seeder):
    cap murder:create_torrent tag="Deploy20100101" files_path="~/files"
5. Start seeding the files:
    cap murder:start_seeding tag="Deploy20100101"
6. Distribute the files to all servers:
    cap murder:peer tag="Deploy20100101" destination_path="/tmp/out"

Once completed, all files will be in /tmp/out/Deploy20100101/ on all servers.


Automated, faster, repeatable, scalable deployments

While efficient automated deployment tools like Puppet and Capistrano are a big step in the right direction, they are not the complete solution for an automated deployment process. This post will explore some of the less-discussed issues which are just as important for automated, fast, repeatable, scalable deployments.

Rapid Build and Integration with tests

  • Use Source control to build an audit trail: Put everything possible in it, including configurations and deployment scripts.
  • Continuous builds triggered by code check-ins can detect and report problems early
    • Use tools which provide targeted feedback about build failures. It reduces noise and improves overall quality faster
    • The faster the build runs after a check-in, the better the chance that bugs get fixed quickly. Delays can be costly, since broken builds could impact other developers as well
    • Build smaller components (fail fast)
  • Continuous integration tests of all components can detect errors which may not be caught at build time.
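As a rough illustration of "targeted feedback", a minimal build-report step might look like the sketch below. The commit ids are made up, and `true`/`false` stand in for a real build command:

```python
import subprocess

def build_and_report(commit_id, build_cmd):
    """Run the build for a single check-in and return targeted feedback:
    which commit was built and whether it passed, so the author of a
    breaking change hears about it immediately."""
    result = subprocess.run(build_cmd, capture_output=True, text=True)
    status = "PASS" if result.returncode == 0 else "FAIL"
    return f"build of {commit_id}: {status}"

# Simulated check-ins: 'true' succeeds, 'false' fails (stand-ins for a build).
print(build_and_report("abc123", ["true"]))
print(build_and_report("def456", ["false"]))
```

Tying the report to a single commit, rather than to "the last few hours of check-ins", is what keeps the signal actionable.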

Automated database changes

Can database changes be automated? This is probably one of the most interesting challenges for automation, especially if the app requires data migrations which can’t be rolled back. While it would be nice to have only incremental changes introduced into each deployment (guaranteed to be forward and backward compatible), non-trivial changes will be needed once in a while. As long as there is a process to separate the trivial from the non-trivial changes, it should be possible to automate most database changes.

Tracking which migrations have been applied and which are pending is a very application-specific problem for which there are no silver bullets.
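That said, the common pattern is simple to sketch: record each applied migration in a bookkeeping table and run only the pending ones. Here is a minimal, hypothetical version using SQLite; the table and migration names are illustrative:

```python
import sqlite3

def migrate(conn, migrations):
    """Apply pending migrations in order, recording each one in a
    schema_migrations table so re-running the deploy is a no-op."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in
               conn.execute("SELECT version FROM schema_migrations")}
    ran = []
    for version, sql in migrations:  # the list keeps migrations ordered
        if version in applied:
            continue
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)",
                     (version,))
        ran.append(version)
    conn.commit()
    return ran

conn = sqlite3.connect(":memory:")
steps = [
    ("001", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002", "ALTER TABLE users ADD COLUMN email TEXT"),
]
print(migrate(conn, steps))  # both applied
print(migrate(conn, steps))  # nothing pending: []
```

The irreversible-migration problem is untouched here; this only makes "which changes have run where" a queryable fact instead of tribal knowledge.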


Configuration management

Environment-specific properties

It’s not unusual to have different sets of configuration for dev and production. But creating different build packages for different target environments is not the right solution. If you need to change properties between environments, pick a better way to do it:

  • Either externalize the configuration properties to a file/directory location outside your app folder, such that repeated deployments don’t overwrite properties.
  • Or, update the right properties automatically during deployment using a deployment framework which is capable of that.
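The first option can be sketched as a layered configuration load, where a small per-environment overlay (kept outside the app folder, so a redeploy never overwrites it) wins over the defaults shipped with the app. The file contents and property names below are made up:

```python
import configparser
import io

# Defaults shipped with the app (hypothetical contents).
base = """
[app]
workers = 4
db_host = localhost
"""

# Small per-environment overlay kept outside the app folder.
production_overlay = """
[app]
db_host = db1.prod.internal
"""

def load_config(environment_overlay):
    cfg = configparser.ConfigParser()
    cfg.read_file(io.StringIO(base))                 # defaults first
    cfg.read_file(io.StringIO(environment_overlay))  # env overrides win
    return cfg

cfg = load_config(production_overlay)
print(cfg["app"]["db_host"], cfg["app"]["workers"])  # db1.prod.internal 4
```

The same binary package then deploys unchanged to every environment, with only the tiny overlay differing.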
Pushing at deployment time or pulling at run time

In some cases, pulling new configuration files dynamically after application startup might make more sense. This is especially true for applications on an infrastructure like AWS/EC2. If the application is already baked into the base OS image, it will come up automatically when the system boots. Some folks keep only minimal information in the base OS image and use a datastore like S3 to download the latest configuration from. In a private network where using S3 is not possible, you could replace it with some kind of shared store such as SVN/NFS/FTP/SCP/HTTP, etc.
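A pull-at-startup scheme can be as simple as "fetch from the shared store, fall back to the last cached copy if it is unreachable". A sketch, with a made-up URL and cache path:

```python
import os
import tempfile
from urllib.error import URLError
from urllib.request import urlopen

def fetch_config(url, cache_path):
    """At startup, try to pull the latest config from a shared store
    (S3, HTTP, etc.); fall back to the last cached copy if unreachable."""
    try:
        with urlopen(url, timeout=5) as resp:
            data = resp.read()
        with open(cache_path, "wb") as f:
            f.write(data)  # refresh the local cache for next boot
        return data
    except (URLError, OSError):
        with open(cache_path, "rb") as f:  # boot with last known config
            return f.read()

cache = os.path.join(tempfile.mkdtemp(), "app.cfg")
with open(cache, "wb") as f:
    f.write(b"db_host=localhost\n")  # pretend a previous boot cached this

# Hypothetical, unreachable store: startup still succeeds from the cache.
print(fetch_config("http://127.0.0.1:1/app.cfg", cache))
```

The fallback matters: a fleet that cannot boot because the config store is briefly down trades one single point of failure for another.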

Deployment frameworks

3rd Party frameworks
  • Fabric – Fabric is a Python library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
  • Puppet – Put simply, Puppet is a system for automating system administration tasks.
  • Capistrano – It is designed with repeatability in mind, letting you easily and reliably automate tasks that used to require login after login and a small army of custom shell scripts.  ( also check out webistrano )
  • Bcfg2 – Bcfg2 helps system administrators produce a consistent, reproducible, and verifiable description of their environment, and offers visualization and reporting tools to aid in day-to-day administrative tasks.
  • Chef – Chef is a systems integration framework, built to bring the benefits of configuration management to your entire infrastructure.
  • Slack – slack is an evolution from the usual "put files in some central directory" that is fairly common practice.
  • Kokki – System configuration management framework influenced by Chef
Custom or Mixed frameworks

The tools listed above are not the only options. Simple bash/sh scripts, Ant scripts, even tools like CruiseControl and Hudson can be used for automated deployments. Here are some other interesting observations:

  • Building huge monolithic applications is a thing of the past. Understanding how to break them up into self-contained, less interdependent components is the challenge.
  • If all of your servers get the exact same copy of the application and configuration, then you don’t need to worry about configuration management. Just find a tool which deploys files fast.
  • If your deployments have a lot of inter-dependencies between components, choose a tool which, if required, gives you a visual view of the deployment process.
  • Don’t be shy to write wrapper scripts to automate more tasks.
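For example, a small wrapper that chains existing tools and stops at the first failure can replace a lot of manual login-and-run work. The steps below are placeholders (echo commands) standing in for real build and cap invocations:

```python
import subprocess
import sys

# Hypothetical deploy pipeline: each step is just a command; a wrapper like
# this turns "run these tools in order, stop on failure" into one invocation.
STEPS = [
    ["echo", "building package"],
    ["echo", "cap murder:create_torrent tag=Deploy20100101"],
    ["echo", "cap murder:peer tag=Deploy20100101"],
]

def run_pipeline(steps):
    for cmd in steps:
        print("==>", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("step failed, aborting deploy", file=sys.stderr)
            return False
    return True

print("deploy ok" if run_pipeline(STEPS) else "deploy failed")
```

Checking every return code and aborting early is exactly the discipline a tired human skips at 2 a.m., which is why the wrapper is worth writing.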
Push/Pull/P2P Frameworks

Grig has an interesting post about push vs. pull deployments where he lists the pros and cons of both systems. What he doesn’t mention is P2P, which is the direction Twitter is taking for its deployments. P2P has advantages of both push and pull architectures but comes with its own set of challenges. I haven’t seen a general-purpose open-source P2P deployment tool yet, but I’m sure it’s not too far out.

Outage windows

Though deployments are easier with long outage windows, those are hard to come by. In an ideal world, one would have a parallel set of servers to cut over to with the flip of a switch. Unfortunately, if user data is involved, this is almost impossible to do. The next best alternative is to do “rolling updates” in small batches of servers. This can be challenging because the deployment tool needs to make sure the app has really completed initialization before it moves on to the next set of servers.
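The "wait until the app has really initialized" requirement is the crux. A sketch of that loop, with a stubbed health check standing in for a real health endpoint:

```python
import time

def healthy(server):
    """Stand-in health check; a real one would hit the app's health URL."""
    return True

def rolling_deploy(servers, batch_size, deploy, check=healthy, timeout=60):
    """Deploy in small batches, and only move to the next batch once every
    server in the current one reports that initialization has completed."""
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for s in batch:
            deploy(s)
        deadline = time.time() + timeout
        while not all(check(s) for s in batch):
            if time.time() > deadline:
                raise RuntimeError(f"batch {batch} never became healthy")
            time.sleep(1)
    return len(servers)

done = rolling_deploy([f"web{n}" for n in range(1, 7)], batch_size=2,
                      deploy=lambda s: print("deploying", s))
print(done, "servers updated")
```

Keeping the batch small bounds the blast radius: if a bad build fails its health check, the deploy stops with most of the fleet still on the old version.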

This can be further complicated by the fact that at times there are version dependencies between different applications. In such cases there needs to be a robust infrastructure to facilitate discovery of the right version of applications.


Deployment automation, in my personal opinion, is about the process, not the tool. If you have any interesting observations, ideas or comments, please feel free to write to me or leave a comment on this blog.