REST APIs for cloud management and the Database.com launch

I found the top two stories on scalebig last night interesting enough to dig a little deeper. The one that surprised me the most was William Vambenepe’s post about why he thinks REST APIs don’t matter in the context of cloud management. While REST might be ideal for many things, including web-based applications accessed mostly through browsers, Amazon chose to avoid REST for most of its infrastructure management APIs.

Has this lack of RESTfulness stopped anyone from using it? Has it limited the scale of systems deployed on AWS? Does it limit the flexibility of the Cloud offering and somehow force people to consume more resources than they need? Has it made the Amazon Cloud less secure? Has it restricted the scope of platforms and languages from which the API can be invoked? Does it require more experienced engineers than competing solutions?

I don’t see any sign that the answer is “yes” to any of these questions. Considering the scale of the service, it would be a multi-million-dollar blunder if indeed one of them had a positive answer.

Here’s a rule of thumb. If most invocations of your API come via libraries for object-oriented languages that more or less map each HTTP request to a method call, it probably doesn’t matter very much how RESTful your API is.

The Rackspace people are technically right when they point out the benefits of their API compared to Amazon’s. But it’s a rounding error compared to the innovation, pragmatism and frequency of iteration that distinguishes the services provided by Amazon. It’s the content that matters.
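To make the distinction concrete: Amazon’s EC2 API is query-style RPC, where the operation is named in a request parameter, while an API like Rackspace’s is closer to REST, with resources addressed by URLs and operations carried by HTTP verbs. Here is a rough sketch of the two shapes from a client’s point of view (the endpoints, parameters and payloads are simplified illustrations, not the exact signed requests either service expects):

```python
import requests  # third-party HTTP library, used here for illustration

# Query/RPC style (the shape of EC2's API): the operation is a parameter.
# Real calls also carry Version and AWS signature parameters, omitted here.
requests.get('https://ec2.amazonaws.com/',
             params={'Action': 'DescribeInstances'})

# REST style (roughly the shape of a Cloud Servers-like API): the URL names
# the resource, and the HTTP verb names the operation.
requests.post('https://servers.api.example.com/v1.0/12345/servers',
              json={'server': {'name': 'web1', 'flavorId': 1}})
```

In practice most callers never see either shape directly: a client library such as boto wraps each HTTP request in a method call, which is exactly the situation the rule of thumb above describes.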

And the other big news was of course the launch of a new cloud datastore by Salesforce at Database.com. Interestingly, they decided to brand it with its own website instead of making it part of their existing set of services. It’s possible they did this to distance the new service from the impression that it’s only useful for applications which need other Salesforce services. For more in-depth technical information, continue reading here.

The infrastructure promises automatic tuning, upgrades, backups and replication to remote data centers, along with automatic creation of sandboxes for development, test and training. Database.com also offers enterprise search services, allowing developers to access a full-text search engine that respects enterprise security rules.

In terms of pricing, Database.com access will be free for 3 users, up to 100,000 records and 50,000 transactions per month. Beyond that, the platform will cost $10 per month for each additional set of 100,000 records, and another $10 per month for each additional set of 150,000 transactions. The enterprise-level services will be an additional $10 per user per month and will include user identity, authentication and row-level security access controls.
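Spelled out as arithmetic, the launch pricing works out as below (a quick sketch; the function name and the simplifications are mine):

```python
import math

def monthly_cost(records, transactions, enterprise_users=0):
    """Estimate a Database.com monthly bill from the launch pricing above."""
    cost = 0
    extra_records = max(0, records - 100_000)          # first 100k records free
    cost += 10 * math.ceil(extra_records / 100_000)    # $10 per extra 100k records
    extra_txns = max(0, transactions - 50_000)         # first 50k transactions free
    cost += 10 * math.ceil(extra_txns / 150_000)       # $10 per extra 150k transactions
    cost += 10 * enterprise_users                      # enterprise add-on, per user
    return cost

# 350,000 records and 200,000 transactions: $30 + $10 = $40/month
print(monthly_cost(350_000, 200_000))
```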

Other references: Dreamforce 2010 – Database.com Launch Interview with Eric Stahl

The Cloud: Watch your step (Google App Engine limitations)

Any blog which promotes the concept of cloud infrastructure would be doing it an injustice if it didn’t provide references to implementations where it failed horribly. Here is an excellent post by Carlos Ble in which he lists all the problems he faced on Google App Engine (Python). He lists 13 different limitations, most of which are well-known facts, and then some more frustrating reasons why he had to dump the solution and look for an alternative.

The tone of voice is understandable, and while it might look like App-Engine-bashing, I see it as a great story which others can learn from.

For us, GAE has been a failure like Wave or Buzz were but this time, we have paid it with our money. I’ve been too stubborn just because this great company was behind the platform but I’ve learned an important lesson: good companies make mistakes too. I didn’t do enough spikes before developing actual features. I should have performed more proofs of concept before investing so much money. I was blind.

The cloud is not for everyone or for every problem. While some of these technologies take away your growing pains, they assume you are OK with some of their limitations. If these limitations surprise you only after you are neck-deep in coding, then you didn’t do your homework.

Here are some of the issues he pointed out. I haven’t used Google App Engine lately, but my understanding is that the App Engine team has solved, or is on the path to solving (or at least reducing the pain of), some of these issues.

  • Requires Python 2.5
  • Can’t use HTTPS
  • 30 seconds to run
  • URL fetch gets only 5 seconds
  • Can’t use python libraries compiled in C
  • No “LIKE” operator in the datastore (a prefix-query workaround is sketched after this list)
  • Can’t join tables
  • “Too many indexes”
  • Only 1000 records at a time returned
  • Datastore and memcache can fail at times
  • Max memcache size is 1MB
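The missing “LIKE” operator is a good example of the homework involved: prefix matches can be emulated with a pair of inequality filters, a widely shared workaround at the time. A minimal sketch against the old Python db API (the model and property names are placeholders):

```python
from google.appengine.ext import db

def prefix_query(model_class, prop_name, prefix):
    """Emulate `prop LIKE 'prefix%'` on the App Engine datastore.

    Asks for prefix <= value < prefix + u'\\ufffd', which the datastore
    can answer with an ordinary index scan.
    """
    query = model_class.all()
    query.filter('%s >=' % prop_name, prefix)
    query.filter('%s <' % prop_name, prefix + u'\ufffd')
    return query
```

Arbitrary `LIKE '%substring%'` matching has no such trick, which is exactly the kind of limitation you want to discover before you are neck-deep in code.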

Presentation: “OrientDB, the database of the web”

I knew there was something called “OrientDB”, but didn’t know much about it until I went through these slides. Here is what I learned, in one sentence: it’s an easy-to-install NoSQL (schemaless) datastore which requires absolutely no configuration, supports ACID transactions, can be used as a document store, a graph store and a key-value store, can be queried using SQL-like and JSON syntax, supports indexing and triggers, and has been benchmarked at 150,000 inserts per second on commodity hardware. That’s a lot of features.

Cassandra: What is HintedHandoff?

Nate has a very good post about how Cassandra is different from a lot of other distributed datastores. In particular, he explains that every node in a Cassandra cluster is identical to every other node. After using Cassandra for a few months, I can tell you for a fact that it’s true. It does come at a price though. Because it’s so decentralized, if you want to make a schema change, for example, the configuration files of all the nodes in the cluster need to be updated at the same time. Some of these problems will go away when 0.7 finally comes out.

While it is true that Cassandra doesn’t have the concept of a single “master” server, each node participating in the cluster actually acts as a master for some parts of the entire key range and a slave for others. The actual share of the key range owned by an individual node depends on the replication factor, on how one picks the keys, and on which partitioner algorithm was selected.

The fun starts when a node, which could be the master for a range of keys, goes down. This is how Nate explains the process:

Though the node is the "primary" for a portion of the data in the cluster, the number of copies of the data kept on other nodes in the cluster is configurable. When a node goes down, the other nodes containing copies, referred to as "replicas", continue to service read requests and will even accept writes for the down node. When the node returns, these queued up writes are sent from the replicas to bring the node back up to date.

And this is from the Cassandra wiki:

If a node which should receive a write is down, Cassandra will write a hint to a live replica node indicating that the write needs to be replayed to the unavailable node. If no live replica nodes exist for this key, and ConsistencyLevel.ANY was specified, the coordinating node will write the hint locally. Cassandra uses hinted handoff as a way to (1) reduce the time required for a temporarily failed node to become consistent again with live ones and (2) provide extreme write availability when consistency is not required.

A hinted write is NOT sufficient to count towards ConsistencyLevel requirements of ONE, QUORUM, or ALL. Take the simple example of a cluster of two nodes, A and B, and a replication factor of 1 (each key is stored on one node). Suppose node A is down while we write key K to it with ConsistencyLevel.ONE. Then we must fail the write: recall from the API page that "if W + R > ReplicationFactor, where W is the number of nodes to block for on write, and R the number to block for on reads, you will have strongly consistent behavior; that is, readers will always see the most recent write."

Thus if we write a hint to B and call the write good because it is written "somewhere," there is no way to read the data at any ConsistencyLevel until A comes back up and B forwards the data to him. Historically, only the lowest ConsistencyLevel of ZERO would accept writes in this situation; for 0.6, we added ConsistencyLevel.ANY, meaning, "wait for a write to succeed anywhere, even a hinted write that isn’t immediately readable."
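Here is a toy model of that flow, just to make the bookkeeping explicit (plain Python, nothing like Cassandra’s actual implementation): live replicas take the write, a hint is queued for the dead one, and the hint alone never counts toward the consistency level:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.up = True
        self.data = {}    # committed writes
        self.hints = []   # (down_node, key, value) waiting to be replayed

def write(replicas, key, value, required_acks=1):
    """Toy coordinator: succeeds only if enough *live* replicas ack."""
    live = [n for n in replicas if n.up]
    down = [n for n in replicas if not n.up]
    for node in live:
        node.data[key] = value
    for dead in down:
        if live:
            live[0].hints.append((dead, key, value))  # queue a hint on a live replica
    return len(live) >= required_acks  # hinted writes do NOT count as acks

def handoff(node):
    """Replay queued hints whose targets have come back up."""
    remaining = []
    for target, key, value in node.hints:
        if target.up:
            target.data[key] = value   # deliver the queued write
        else:
            remaining.append((target, key, value))
    node.hints = remaining
```

With a replication factor of 1, a write aimed at a downed node finds no live replica and fails, which is exactly the two-node example the wiki walks through.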

Mike Perham has a related post on the same topic. He goes further and explains that because there could be scenarios where writes are not immediately visible due to a downed master node, it’s possible for the master to get out of sync with the slaves in the confusion. Cassandra uses a process called “anti-entropy” to detect and resolve such issues. Here is how he explains it:

The final trick up Cassandra’s proverbial sleeve is anti-entropy. AE explicitly ensures that the nodes in the cluster agree on the current data. If read repair or hinted handoff don’t work due to some set of circumstances, the AE service will ensure that nodes reach eventual consistency. The AE service runs during “major compactions” (the equivalent of rebuilding a table in an RDBMS) so it is a relatively heavyweight process that runs infrequently. AE uses a Merkle Tree to determine where within the tree of column family data the nodes disagree and then repairs each of those branches.
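The Merkle-tree trick is worth seeing in miniature: each replica hashes ranges of its data, matching hashes let whole subtrees be skipped, and only differing branches are descended into, so very little data is exchanged when the replicas mostly agree. An illustrative sketch (both replicas are assumed to hold the same sorted key range):

```python
import hashlib

def digest(rows):
    """Hash a slice of sorted (key, value) rows."""
    return hashlib.sha1(repr(rows).encode()).hexdigest()

def differing_ranges(a, b, lo, hi):
    """Recursively find index ranges where replicas a and b disagree."""
    if digest(a[lo:hi]) == digest(b[lo:hi]):
        return []                   # hashes match: skip this whole branch
    if hi - lo <= 1:
        return [(lo, hi)]           # leaf-level mismatch: repair this range
    mid = (lo + hi) // 2
    return differing_ranges(a, b, lo, mid) + differing_ranges(a, b, mid, hi)

replica_a = [('k1', 'v1'), ('k2', 'v2'), ('k3', 'v3'), ('k4', 'v4')]
replica_b = [('k1', 'v1'), ('k2', 'XX'), ('k3', 'v3'), ('k4', 'v4')]
print(differing_ranges(replica_a, replica_b, 0, 4))   # -> [(1, 2)]
```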

OpenTSDB – Distributed time series database

Ever since I saw a demo of this tool, I’ve been eagerly waiting for it to be open-sourced so that I could use it. The problem it’s trying to solve is a real pain point which most web-ops folks would understand.

Yesterday the folks at StumbleUpon finally opened it up. It’s released under the LGPLv3 license. You can find the source here and the documentation here.

At StumbleUpon, we have found this system tremendously helpful to:

  • Get real-time state information about our infrastructure and services.
  • Understand outages or how complex systems interact together.
  • Measure SLAs (availability, latency, etc.)
  • Tune our applications and databases for maximum performance.
  • Do capacity planning.
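Part of the appeal is how simple the write path is: a TSD accepts data points over a line-based, telnet-style interface using a `put` command (metric name, Unix timestamp, value, then tags). A minimal sketch, assuming a TSD listening on the default port 4242:

```python
import socket
import time

def send_metric(metric, value, tags, host='localhost', port=4242):
    """Push one data point to a TSD over the telnet-style interface."""
    tag_str = ' '.join('%s=%s' % item for item in tags.items())
    line = 'put %s %d %s %s\n' % (metric, int(time.time()), value, tag_str)
    conn = socket.create_connection((host, port))
    conn.sendall(line.encode())
    conn.close()

send_metric('sys.cpu.user', 42, {'host': 'web01'})
```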

Storage options on App Engine

For those who think Google App Engine only has one kind of datastore, the one built around “Bigtable”, think again. Nick Johnson goes into the details of all the other options available, with their pros and cons, in his post.

App Engine provides more data storage mechanisms than is apparent at first glance. All of them have different tradeoffs, so it’s likely that one – or more – of them will suit your application well. Often, the ideal solution involves a combination, such as the datastore and memcache, or local files and instance memory.

The storage options he lists:

  • Datastore
  • Memcache
  • Instance memory
  • Local Files

Read more: Original post from Nick

Is auto-sharding ready for auto-pilot?

James Golick makes a point which a lot of people miss. He doesn’t believe the auto-sharding features NoSQL datastores provide are ready for full auto-pilot yet, and argues that good developers have to think about sharding as part of the design and architecture, regardless of which datastore they pick.

If you take at face value the marketing materials of many NoSQL database vendors, you’d think that with a horizontally scalable data store, operations engineering simply isn’t necessary. Recent high profile outages suggest otherwise.

MongoDB, Redis-cluster (if and when it ships), Cassandra, Riak, Voldemort, and friends are tools that may be able to help you scale your data storage to varying degrees. Compared to sharding a relational database by hand, using a partitioned data store may even reduce operations costs at scale. But fundamentally, no software can use system resources that aren’t there.

At the very least, one has to understand how auto-sharding in a NoSQL datastore works, and how easy it is to set up, maintain, back up and restore. “Rebalancing” can be an expensive operation, and if shards are separated by distance or high latency, some designs might be better than others.
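Understanding the placement function is most of that homework. Many of these stores assign keys to nodes with some variant of consistent hashing, and what “rebalancing” has to move follows directly from it. A toy ring with virtual nodes (illustrative only; every system has its own variant):

```python
import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring with virtual nodes."""
    def __init__(self, nodes, vnodes=64):
        self._points = sorted((_hash('%s#%d' % (node, i)), node)
                              for node in nodes for i in range(vnodes))
        self._hashes = [h for h, _ in self._points]

    def node_for(self, key):
        # the first virtual node clockwise of the key's hash owns it
        index = bisect.bisect(self._hashes, _hash(key)) % len(self._points)
        return self._points[index][1]

ring = Ring(['node-a', 'node-b', 'node-c'])
print(ring.node_for('user:42'))
```

Adding a node only remaps the slices of the ring its virtual nodes land on, but those slices still have to be physically copied between machines, which is why rebalancing across slow or distant links can hurt so much.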

Why Membase Uses Erlang

It’s totally worth it. Erlang (Erlang/OTP really, which is what most people mean when they say “Erlang has X”) does out of the box a lot of things we would have had to either build from scratch or attempt to piece together existing libraries to do. Its dynamic type system and pattern matching (à la Haskell and ML) make Erlang code tend to be even more concise than Python and Ruby, two languages known for their ability to do a lot in few lines of code.

The single largest advantage to us of using Erlang has got to be its built-in support for concurrency. Erlang models concurrent tasks as “processes” that can communicate with one another only via message-passing (which makes use of pattern matching!), in what is known as the actor model of concurrency. This alone makes an entire class of concurrency-related bugs completely impossible. While it doesn’t completely prevent deadlock, it turns out to be pretty difficult to miss a potential deadlock scenario when you write code this way. While it’s certainly possible to implement the actor model in most if not all of the alternative environments I mentioned, and in fact such implementations exist, they are either incomplete or suffer from an impedance mismatch with existing libraries that expect you to use threads.
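The actor idea itself is small enough to sketch. In Python (for consistency with the other sketches here, and obviously without Erlang’s lightweight processes or supervision), it amounts to: each actor owns its state and reacts only to messages from its inbox:

```python
import queue
import threading

def counter_actor(inbox):
    """A tiny 'process': private state, mutated only in response to messages."""
    total = 0
    while True:
        msg = inbox.get()          # the only way in is a message
        if msg == 'stop':
            break
        total += msg
        print('running total:', total)

inbox = queue.Queue()
threading.Thread(target=counter_actor, args=(inbox,)).start()
inbox.put(1)
inbox.put(2)
inbox.put('stop')
```

Because no other thread can touch `total`, the usual shared-state locking bugs have nowhere to live, which is the class of errors the post says the model makes impossible.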

The complete original post can be found here: blog.membase.com

MongoDB: Migration from MySQL at Wordnik

I had the opportunity to listen to Tony Tam at MongoSF talking about why and how they moved Wordnik from MySQL to MongoDB. Both the slides and the video of the talk are attached at the end of this post.

Wordnik is a new kind of “word” repository which is much more “current” than traditional dictionaries. In addition to showing words which have not yet reached the mainstream, they give tons of other related information and statistics about any given word.

They had started looking for alternatives to MySQL after they hit 4 billion rows in MyISAM and started noticing table locks during inserts that lasted tens of seconds. They wanted to insert 1,500 “values” per second and fell short of that even with elaborate workarounds.

The core requirements for a replacement were very simple. The datastore they were about to replace was read-heavy, at a rate of about 10,000 reads for every write; they had almost no application logic in the database (no stored procedures); there were no primary-key/foreign-key constraints to worry about; and consistency wasn’t an absolute must.

Of all the NoSQL solutions they tried, they eventually decided to go with MongoDB.

After repeated failures with virtual servers, the final MongoDB server configuration was a non-virtualized instance with 32GB of memory and 2×4 cores, connected to external FC/SAN storage for high disk throughput. Since the data structure they were planning to move from MySQL was not heavily relational, they were able to construct a hierarchical representation of the same data for MongoDB.

To make the migration easy, they made an extensive effort to make the application work with both MySQL and MongoDB. They wanted the ability to change the backend datastore with just the flip of a switch. To avoid too many complications, they decided to freeze writes to MySQL during migration without disabling the website entirely, so users were able to use the application with minimal impact (since the application didn’t require many writes to begin with). The 4-billion-row migration took one whole day. During the data migration process they were able to execute up to 100,000 inserts per second [ I hope I got my notes right 🙂 ].
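The “flip of the switch” approach is a pattern worth copying: put both backends behind one interface and pick the implementation with a flag. A sketch of the shape (hypothetical names; the talk doesn’t show Wordnik’s actual code):

```python
class MysqlWordStore:
    """Stand-in for the MySQL-backed DAO; real code would run SQL here."""
    def __init__(self):
        self._rows = {}
    def get_word(self, word):
        return self._rows.get(word)
    def save_word(self, word, data):
        self._rows[word] = data

class MongoWordStore:
    """Stand-in for the MongoDB-backed DAO (e.g. pymongo find_one/save)."""
    def __init__(self):
        self._docs = {}
    def get_word(self, word):
        return self._docs.get(word)
    def save_word(self, word, data):
        self._docs[word] = data

USE_MONGO = False   # the switch: flip to True to cut over

def word_store():
    return MongoWordStore() if USE_MONGO else MysqlWordStore()
```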

At the end of the migration they slowly moved the frontend application servers, one node at a time, to the MongoDB datastore. They did find some issues (which they expected), and continued tweaking the setup until they got a stable, well-tuned system, ready for all the write load.

Things learnt from the move

  • MongoDB’s read performance was so good that the memcached layer wasn’t needed anymore
  • “Key” names for each field in a document take up space. If you use a key name which is 10 bytes long, then after 1,000 inserts you have spent 10,000 bytes just on that key name. Based on Tony’s estimate, MongoDB used up 4 times the storage they had used in MySQL.
  • Writing queries to read and write data is much more readable. JSON is much more structured than some of the SQL queries.
  • MongoDB is a resource-intensive system. It needs a lot of CPU, memory and disk. Sharding should be investigated for larger datastores. [ From what I heard auto-sharding is not exactly fully production-ready though… but things could have changed since I researched this ]
  • Every “document” in MongoDB has a 4MB size limit by default.

All 20 of the MongoSF slide decks are up, and you can find them here.

MongoDB : The bit.ly implementation

I was at the MongoSF conference a few weeks ago, and 10gen just hosted one in NY as well. I was taken aback by the simplicity and the hope MongoDB provides.

I’ll have a more detailed post about what I think of it in a few days, but until then, chew on these slides from bit.ly, which also uses it to power its backend datastore.