Reddit has a very interesting post about what not to do when trying to build a scalable system. While the error is tragic, I think it's an excellent design mistake to learn from.
Though the post lacked a detailed technical report, we might be able to reconstruct what happened. They mentioned they are using the MemcacheDB datastore, with 2GB of RAM per node, to keep some data which they need very often.
Don't confuse MemcacheDB with memcached. While memcached is a distributed cache engine, MemcacheDB is actually a persistent datastore. And because they both speak the same protocol, applications often use memcached client libraries to connect to a MemcacheDB datastore.
Both memcached and MemcacheDB rely on the clients to figure out how the keys are distributed across multiple nodes. Reddit chose MD5 to hash the keys for their key/value pairs. The algorithm Reddit used to identify which node in the cluster a key should go to could have depended on the number of nodes in the system. For example, one popular way to identify which node a key should be on is the "modulo" function: key "k" is stored on node "n" where "n = k modulo 3". [If k=101, then n=2]
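To make this concrete, here is a minimal sketch of MD5-plus-modulo key placement. This is an illustration of the general technique, not Reddit's actual client code; the key name is made up.

```python
import hashlib

def node_for_key(key: str, num_nodes: int) -> int:
    # Hash the key with MD5, interpret the digest as an integer,
    # then pick a node with modulo arithmetic.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Every client computes the same node for the same key,
# as long as num_nodes never changes.
print(node_for_key("user:12345", 3))
```

The scheme needs no coordination between clients, which is exactly why it's popular; the catch, as we'll see, is that the mapping is only stable while `num_nodes` stays fixed.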
Though MemcacheDB uses BDB (Berkeley DB) to persist data, it seems like they relied heavily on keeping all the data in RAM. At some point they may have hit the upper limit of what could be cached in RAM, which caused disk I/O and resulted in slower response times. In a scalable architecture, one should have been able to add new nodes and have the system scale.
Unfortunately, though this algorithm works beautifully during regular operation, it fails as soon as you add or remove a node (when you change the number of nodes). At that point you can't guarantee that the data you previously stored on a given node would still map to the same node.
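You can see the scale of the problem with a quick simulation (a sketch, using made-up key names): growing a modulo-hashed cluster from 3 to 4 nodes remaps roughly three quarters of all keys, since a key stays put only when its hash gives the same result mod 3 and mod 4.

```python
import hashlib

def node_for_key(key: str, num_nodes: int) -> int:
    # Same MD5 + modulo placement as before.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

keys = [f"key:{i}" for i in range(10000)]
# Count how many keys land on a different node after growing 3 -> 4 nodes.
moved = sum(1 for k in keys if node_for_key(k, 3) != node_for_key(k, 4))
print(f"{moved / len(keys):.0%} of keys moved")
```

For a cache that "only" means a burst of misses; for a datastore it means most lookups suddenly hit the wrong node.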
And while this algorithm may still be acceptable for memcached cache clusters, where a misplaced key just means a cache miss that can be repopulated, it's really bad for MemcacheDB, which requires "consistent hashing".
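For contrast, here is a minimal consistent-hash ring sketch (my own illustration, not MemcacheDB's or any particular client library's API). Each node is hashed onto a ring at several points, and a key belongs to the first node point at or after the key's hash; adding a node then moves only the keys that fall into that node's new slices of the ring, roughly 1/N of them, instead of nearly all of them.

```python
import bisect
import hashlib

def _point(label: str) -> int:
    # Hash a label to a position on the ring.
    return int(hashlib.md5(label.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring for illustration only."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []  # sorted list of (point, node) pairs
        for node in nodes:
            self.add_node(node)

    def add_node(self, node):
        # Place each node at `replicas` points to spread load evenly.
        for i in range(self.replicas):
            bisect.insort(self.ring, (_point(f"{node}#{i}"), node))

    def node_for_key(self, key):
        # Walk clockwise to the first node point at or after the key.
        points = [p for p, _ in self.ring]
        idx = bisect.bisect(points, _point(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node1", "node2", "node3"])
print(ring.node_for_key("user:12345"))
```

With a ring like this, adding a fourth node would relocate only about a quarter of the keys, which is what makes incremental growth feasible for a persistent store.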
Reddit today announced that they have increased RAM on these MemcacheDB servers from 2GB to 6GB, which allows 94% of their DB to be kept in memory. But they have realized their mistake (they probably figured this out a long time ago) and are thinking about how to fix it. The simplest solution of adding a few nodes requires re-hashing their keys, which would take days according to their estimate. And of course, just adding nodes without using some kind of "consistent hashing" is still not a scalable solution.
I personally learned two things:
- Don't mix up MemcacheDB and memcached. They are not designed to solve the same problem.
- Don't simply replace memcached with MemcacheDB without thinking twice.
There are many different products out there today which do a better job at scaling, so I won't be surprised if they abandon MemcacheDB completely as well.