A few of us got together at the new Twitter office in downtown SF (right next to Moscone Center) and were shown, for the first time, what Twitter is doing with “Twitter Annotations”. We probably created the first set of third-party applications around this new API. During the hackathon I spent some time wearing my “scalable web architecture” hat, thinking about what I could learn from the experience; I’ve summarized those thoughts below.
From a developer’s viewpoint, Twitter Annotations is just an extension to the existing APIs that allows posting additional structured content along with a tweet. The content stays within the context of the tweet and is retweeted/shared automatically with the main tweet. Twitter has some recommendations on how annotations should be structured; for example, they talked about a “type”, which sounded very much like the Open Graph “type/category” concept, with the difference that Twitter leaves the field open for any kind of “type” users want. Facebook, if I remember right, strongly recommended that users stick to a small set of “categories/types” which they published. Twitter accepts annotations in multiple formats, of which I tried the “simple” and “JSON” protocols; JSON was the most recommended/used medium of annotation during the whole hackathon. While the JSON structure does allow multiple “types” in the same tweet, there were a couple of limitations which were slightly constricting. The first big one was that the current implementation allows only 512 bytes in the annotations field. The second was that, although the payload is JSON, it supports only a few levels of nesting in the structured annotation, which was extremely restrictive for the use case I was trying to hack up.
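To make those limits concrete, here is a sketch of what an annotation payload might look like and how to check it against the 512-byte cap before posting. The attribute names (“review”, “geo”, and their keys) are my own illustrations, not Twitter’s documented schema:

```python
import json

# Hypothetical annotation payload: a list of "type" objects, each carrying
# its own key/value attributes. These names are illustrative only.
annotation = [
    {"review": {"title": "Inception", "rating": "9/10"}},
    {"geo": {"venue": "Moscone Center", "city": "San Francisco"}},
]

payload = json.dumps(annotation)

# The current implementation caps the annotations field at 512 bytes,
# so it is worth measuring the serialized size before hitting the API.
size = len(payload.encode("utf-8"))
assert size <= 512, f"annotation too large: {size} bytes"
print(size, "bytes")
```

Note that the limit applies to the serialized bytes, not the number of attributes, so verbose key names eat into the budget quickly.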
There were a few things I learnt during the whole 32-hour experience. The first was that Twitter had actually hosted these half-baked APIs on http://api.twitter.com and http://www.twitter.com, which, I’m glad to say, are still accessible using my account from outside Twitter’s buildings. Of course we hackers had to be white-listed to get access, but from an operations viewpoint this is extremely gutsy, since one bad ACL code fragment could expose a lot of uncooked APIs to the whole world. This approach to testing is not new to Twitter and is frequently used for A/B testing in newer (more agile) organizations around the world.
The second was that while Cassandra is in use at Twitter, they don’t use it as the primary datastore for all the tweets (yet). They have other uses for it which they didn’t elaborate on. The version of Cassandra they use is close to 0.6.2, which was just released. It also looked (from my discussion with one engineer) like Cassandra treats rack-awareness and datacenter-awareness in slightly different ways. In the documentation I had read previously, the two were the same for all practical purposes. In other words, I need to research this a little more, since optimizations in this area can boost Cassandra’s performance across datacenters.
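For context on where rack- and datacenter-awareness meet in this era of Cassandra, replica placement in 0.6.x is chosen per keyspace in storage-conf.xml. A rough fragment (keyspace name and replication factor are my own placeholders) might look like:

```xml
<!-- storage-conf.xml (Cassandra 0.6.x); names here are illustrative -->
<Keyspaces>
  <Keyspace Name="Tweets">
    <!-- RackAwareStrategy places the second replica in another data
         center and the remaining replicas on other racks in the same
         DC, so a single strategy class ends up making both rack- and
         datacenter-level placement decisions. -->
    <ReplicaPlacementStrategy>org.apache.cassandra.locator.RackAwareStrategy</ReplicaPlacementStrategy>
    <ReplicationFactor>3</ReplicationFactor>
  </Keyspace>
</Keyspaces>
```

The fact that one strategy mixes both concerns is probably why the docs I had read treated rack- and DC-awareness as interchangeable.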
The third was that while Twitter uses cutting-edge tools for a lot of different things, they don’t have service discovery nailed yet. They are playing with ZooKeeper, and I believe they will use it eventually, but it’s not there yet. This by itself is amazing, because without service discovery the management and rollout of configuration changes becomes centralized, which has its own advantages and disadvantages. At the organization where I work, we are playing with Cassandra as a service publication/discovery tool for monitoring and consuming services. The short discussion I had with Twitter folks about using Cassandra in such a way validated the work I’m doing with it. But I’m still puzzled why others are not thinking about Cassandra (or another eventually-consistent datastore) for service discovery. It sounded like ZooKeeper might be overkill for my organization, but I should take a look at it again.
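The pattern I have in mind is simple enough to sketch: services heartbeat a row keyed by service name, and consumers read the row and discard endpoints whose heartbeat has expired. Below, a plain dict stands in for the Cassandra column family; all names and the TTL are illustrative, not our actual schema:

```python
import time

# Toy sketch of TTL-based service discovery over an eventually-consistent
# store. A dict stands in for a column family keyed by service name; each
# endpoint carries its last heartbeat timestamp, so stale entries age out
# naturally with no central coordinator.
TTL_SECONDS = 30
registry = {}  # {service: {endpoint: last_heartbeat}}

def publish(service, endpoint, now=None):
    """A service instance heartbeats its own entry; writes are idempotent."""
    now = time.time() if now is None else now
    registry.setdefault(service, {})[endpoint] = now

def discover(service, now=None):
    """Consumers read the row and drop endpoints whose heartbeat expired."""
    now = time.time() if now is None else now
    live = {ep: ts for ep, ts in registry.get(service, {}).items()
            if now - ts <= TTL_SECONDS}
    return sorted(live)

publish("tweet-api", "10.0.0.1:8080", now=100)
publish("tweet-api", "10.0.0.2:8080", now=100)
print(discover("tweet-api", now=110))  # both endpoints still live
print(discover("tweet-api", now=200))  # both aged out after the TTL
```

Because writes are idempotent and reads tolerate staleness, eventual consistency is good enough here; the trade-off versus ZooKeeper is that you give up watches and strongly-consistent ephemeral nodes.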
The fourth and final thing I’d like to write about is the hackathon itself. It’s amazing how Twitter organized this hackathon, got a group of hackers to play with their new APIs, and gave them the ability to demo their hacks to the likes of Paul Graham and Ron Conway. In return they got very interesting product ideas and use cases for a feature which is still unpolished and unreleased. More importantly, they also got a bunch of hackers to intentionally and unintentionally break the feature and discover some serious and some very annoying bugs, along with feedback on what does and doesn’t resonate with developers. In a way this is similar to what some other organizations (including Google) already do with their alpha/beta programs, but nothing beats the velocity of hacking up 10 to 20 almost-ready products around a brand-new feature in less than 32 hours.
P.S.: I’m terribly sorry for spamming my Twitter followers, who were bombarded with Twitter test messages for two days. Next time I’ll pick a test account 🙂