Cassandra as a communication medium – a service registry and discovery tool
A few weeks ago, while I was mulling over what kind of service registry/discovery system to use for a scalable application deployment platform, I realized that for mid-size organizations with a complex set of services, building one from scratch may be the only option.
I also learned that many AWS/EC2 customers have been using S3 and SimpleDB to publish and discover services. That line of thinking eventually led me to investigate Cassandra as the service registry datastore for an enterprise network.
Here are some of the observations I made as I played with Cassandra for this purpose. I welcome feedback from readers if you think I'm doing something wrong or if you think I can improve the design further.
- The biggest issue I noticed with Cassandra was the absence of inverted indexes, which can be worked around as I have blogged here. I later realized there is something called Lucandra as well, which I need to look at at some point.
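As a rough illustration of that workaround, every attribute of a service can be written into a second "index" row at registration time, so lookups by attribute become a single read. This is only a sketch with in-memory maps standing in for the two column families; all names here are hypothetical:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of maintaining an inverted index alongside the primary records,
// as you would with a second Cassandra column family. The maps stand in
// for the column families; every name below is a hypothetical example.
public class InvertedIndex {
    // primary "column family": service key -> attributes
    private final Map<String, Map<String, String>> services = new ConcurrentHashMap<>();
    // inverted index "column family": "attr=value" -> set of service keys
    private final Map<String, Set<String>> index = new ConcurrentHashMap<>();

    public void register(String serviceKey, Map<String, String> attrs) {
        services.put(serviceKey, attrs);
        // every write to the primary row also updates the matching index rows
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            index.computeIfAbsent(e.getKey() + "=" + e.getValue(),
                                  k -> ConcurrentHashMap.newKeySet())
                 .add(serviceKey);
        }
    }

    public Set<String> lookup(String attr, String value) {
        return index.getOrDefault(attr + "=" + value, Collections.emptySet());
    }
}
```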
- The keyspace structure I used was very simple… (I've skipped some configuration lines for brevity.)
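For reference, a keyspace of this shape would be declared in Cassandra 0.5's storage-conf.xml roughly as follows; the keyspace and column family names are hypothetical stand-ins, and most settings are omitted:

```xml
<Keyspaces>
  <Keyspace Name="ServiceRegistry">
    <!-- primary records: one row per registered service -->
    <ColumnFamily Name="Services" CompareWith="UTF8Type"/>
    <!-- inverted index rows for attribute lookups -->
    <ColumnFamily Name="ServiceIndex" CompareWith="UTF8Type"/>
  </Keyspace>
</Keyspaces>
```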
- Using an "OrderPreservingPartitioner" seemed important for doing "range scans". The order-preserving partitioner keeps objects whose keys sort near each other on the same nodes, which allows bulk reads and writes over key ranges. By default, Cassandra distributes objects randomly across the cluster, which works well for spreading load but doesn't support range scans.
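In the configuration file this amounts to a one-line change (shown here as a sketch):

```xml
<!-- replaces the default RandomPartitioner to enable range scans -->
<Partitioner>org.apache.cassandra.dht.OrderPreservingPartitioner</Partitioner>
```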
- I eventually plan to use this application across two datacenters. The best way to mirror data across datacenters in Cassandra is to use "RackAwareStrategy", which tells Cassandra to try to pick replicas of each token from different datacenters/racks. The default algorithm uses IP addresses to determine whether two nodes are part of the same rack or datacenter, but there are other interesting ways to do it as well.
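Configuring that looks something like the sketch below; the replication factor of 3 is just an example:

```xml
<!-- place replicas of each token across racks/datacenters -->
<ReplicaPlacementStrategy>org.apache.cassandra.locator.RackAwareStrategy</ReplicaPlacementStrategy>
<ReplicationFactor>3</ReplicationFactor>
```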
- Some of the APIs changed significantly between the versions I was playing with. Cassandra developers will remind you that this is expected in a product that is still at version 0.5. What amazes me, however, is that Facebook, Digg, and now Twitter have been using this product in production without bringing everything down.
- I was eventually able to build a thin Java webapp to front-end Cassandra, which provided the REST/JSON interface for the registry/discovery service. This is also the app that managed the inverted indexes.
- Direct Cassandra access from remote services was disabled for security/stability reasons.
- The app used DNS to load-balance queries across multiple servers.
- My initial performance tests on this cluster performed miserably because I forgot that all of my requests were hitting the same node. The right way to test Cassandra's capacity is to load-balance requests across all Cassandra nodes.
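A minimal way to spread test traffic is a round-robin rotation over the node list in the test client; this is just a sketch, and the node addresses would be whatever your cluster uses:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: rotate each request to the next Cassandra node instead of
// hammering a single one during load tests. Node names are hypothetical.
public class NodeRoundRobin {
    private final List<String> nodes;
    private final AtomicInteger next = new AtomicInteger();

    public NodeRoundRobin(List<String> nodes) {
        this.nodes = nodes;
    }

    // pick the next node in rotation for each request;
    // floorMod keeps the index valid even if the counter wraps around
    public String nextNode() {
        int i = Math.floorMod(next.getAndIncrement(), nodes.size());
        return nodes.get(i);
    }
}
```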
- I also realized that, by default, the logging level was set to "DEBUG", which is very verbose. Turning that down sped up response times as well.
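In Cassandra's bundled log4j.properties this is a one-line change; the appender names after the comma are illustrative and should match whatever your config already lists:

```properties
# drop the root logger from DEBUG to INFO to cut the verbosity
log4j.rootLogger=INFO,stdout,R
```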
- Playing with different consistency levels for reading and writing was also an interesting experience, especially when I started killing nodes just to see the app break. This is what tweaking the CAP trade-offs is all about.
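The rule of thumb behind those consistency levels: with N replicas, a read of R replicas and a write of W replicas are guaranteed to overlap (so reads see the latest write) when R + W > N. A tiny sketch of that arithmetic:

```java
// Sketch of the replica-overlap rule behind Cassandra's consistency
// levels: reads and writes intersect on at least one replica when
// R + W > N, which is what QUORUM reads plus QUORUM writes give you.
public class QuorumMath {
    // true if a read of r replicas must see a write acknowledged by w replicas
    public static boolean overlaps(int n, int r, int w) {
        return r + w > n;
    }

    // smallest majority of n replicas
    public static int quorum(int n) {
        return n / 2 + 1;
    }
}
```

For example, with a replication factor of 3, QUORUM reads and QUORUM writes (2 + 2 > 3) overlap, while ONE + ONE (1 + 1 = 2) does not, which is exactly when killing nodes starts exposing stale reads.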
- Due to an interesting problem related to "eventual consistency", Cassandra doesn't immediately delete data that was marked for deletion or was intentionally changed. In the default configuration that data is kept around for 10 days before it's completely removed from the system.
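If I understand the mechanics correctly, deletes are recorded as tombstones and that retention window is the GCGraceSeconds setting; a sketch of the 10-day default as it appears in the configuration:

```xml
<!-- how long tombstones are kept before compaction removes them;
     864000 seconds = 10 days -->
<GCGraceSeconds>864000</GCGraceSeconds>
```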
- Some documentation on the core operational aspects of Cassandra exists, but it would be nice if there were more.
Cassandra was designed as a scalable, highly available datastore. But because of its interesting self-healing and "RackAware" features, it can become an interesting communication medium as well.