Private clouds: By Amazon

A few days ago I blogged about how VMware is going to do a huge push into “private clouds” around the VMware 2009 conference. But little did we know that Amazon had something up its sleeve as well; it made its own announcement today.

AWS now supports the creation of a Virtual Private Cloud with a private address space (including RFC 1918 ranges) that can be locked down via a VPN connection to your organization only. You still get most of the benefit of Amazon’s cheap hardware pricing, but you also get to lock down the infrastructure for security reasons.

Regardless of how you see it, this is huge for IT and the developer community. Some will love it, and I’m sure some will be pretty angry at Amazon for trying to commoditize security and making it look as if network security were as simple as that.

With VMware’s announcements next week, there is no doubt in my mind that, for at least the next year, there will be a significant push towards “private clouds”.

EC2 for everyone. And now it includes 64-bit instances with 15GB of RAM too.

Finally it happened. EC2 is available for everybody. And more than that, they now offer instances with 7.5GB and 15GB of RAM. Sweet.

For a lot of companies, EC2 was not viable due to the high memory requirements of some of their applications. Splitting such tasks across multiple servers to use less memory per machine was possible, but not really cost- or time-efficient. The release of these new instance types removes that roadblock and will probably spark significant interest from developers of memory-hungry applications (a rough cost sketch follows the price list below).

$0.10 per instance-hour – Small Instance (Default)

    1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform

$0.40 per instance-hour – Large Instance

    7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform

$0.80 per instance-hour – Extra Large Instance

    15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform

Data Transfer

$0.10 per GB – all data transfer in

$0.18 per GB – first 10 TB / month data transfer out
$0.16 per GB – next 40 TB / month data transfer out
$0.13 per GB – data transfer out / month over 50 TB
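
To put these numbers in perspective, here is a minimal cost sketch in plain Python. The prices are hard-coded from the lists above; the 720-hour month and the example workload at the bottom are my own simplifying assumptions.

```python
# Rough monthly cost estimator based on the EC2 prices listed above.
# Assumes a 720-hour (30-day) month; all amounts in USD.

INSTANCE_HOURLY = {
    "small": 0.10,   # 1.7 GB RAM, 1 ECU, 32-bit
    "large": 0.40,   # 7.5 GB RAM, 4 ECUs, 64-bit
    "xlarge": 0.80,  # 15 GB RAM, 8 ECUs, 64-bit
}

def transfer_out_cost(gb):
    """Tiered transfer-out pricing: $0.18 first 10 TB, $0.16 next 40 TB, $0.13 beyond."""
    tiers = [(10 * 1024, 0.18), (40 * 1024, 0.16), (float("inf"), 0.13)]
    cost, remaining = 0.0, float(gb)
    for tier_size, price in tiers:
        used = min(remaining, tier_size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

def monthly_cost(instance_type, count, gb_in, gb_out, hours=720):
    compute = INSTANCE_HOURLY[instance_type] * count * hours
    transfer = gb_in * 0.10 + transfer_out_cost(gb_out)
    return compute + transfer

# Example: one Extra Large instance pulling in 500 GB and pushing out 2 TB.
print("$%.2f" % monthly_cost("xlarge", 1, 500, 2048))
```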

Scaling Smugmug from startup to profitability

Smugmug.com, a 5-year-old company with just 23 employees, has 315,000 paying customers and 195 million photographs. CEO & “Chief Geek” Don MacAskill has a nice set of slides in which he talks about its 5-year journey from small startup to profitable business. The talk was given as part of Amazon’s “Startup Project”, so it focuses mostly on how Smugmug uses AWS (Amazon Web Services).

Besides the wonderful, fun-loving employees who are “superheroes” in his eyes, he talks about how their storage requirements are doubling every year. They already have about 300 TB in use, and as of today all of that is on Amazon’s S3. Don estimates that, for the storage they are using, they are saving about $500K per year, which is pretty big for a small operation like theirs.

The Smugmug architecture has evolved over time. Internally, it can serve images in three different ways (sketched in the code below):

1. It can “proxy” the request: the app server fetches the object from storage and serves it to the client.
2. It can “redirect”: the app server redirects the user to the right resource.
3. It can serve images directly from storage using REST-based APIs.
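
To make the three options concrete, here is a minimal sketch of what each path might look like in a web handler. This is not Smugmug’s code: the Flask-style routing, the classic boto S3 calls, and the bucket/key names are all illustrative assumptions.

```python
# Illustrative sketch of the three serving modes described above.
import boto
from flask import Flask, Response, redirect

app = Flask(__name__)
s3 = boto.connect_s3()                   # credentials picked up from the environment
bucket = s3.get_bucket("photos-bucket")  # hypothetical bucket name

@app.route("/proxy/<path:key_name>")
def proxy(key_name):
    # 1. Proxy: the app server fetches the object from storage and serves it itself.
    key = bucket.get_key(key_name)
    return Response(key.get_contents_as_string(), mimetype="image/jpeg")

@app.route("/go/<path:key_name>")
def redirect_to_storage(key_name):
    # 2. Redirect: send the client a short-lived signed URL pointing at the object.
    key = bucket.get_key(key_name)
    return redirect(key.generate_url(expires_in=300))

# 3. Direct: the page simply embeds a public S3 URL such as
#    https://photos-bucket.s3.amazonaws.com/<key_name>
#    so the app server never touches the image bytes at all.
```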

This flexibility allowed Smugmug to try different ways of mixing internal and external/S3 storage. Interestingly, even though they wanted to get away from self-managed internal storage, they noticed that S3 storage wasn’t very cheap once bandwidth utilization was taken into account. After various permutations and combinations, they set up a system where S3 remains the primary storage, but the hottest 10% of objects are kept locally on Smugmug’s own servers to minimize bandwidth usage against S3. This allowed them to avoid buying 95% of the storage drives they had originally planned to buy. The remaining 5% of 300TB is still 15TB, which is smaller but unfortunately not small enough.
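
A minimal sketch of that hot-object idea: check a small local cache first and fall back to S3, so only cache misses consume S3 bandwidth. The cache location and the boto usage are my own assumptions, not Smugmug’s implementation.

```python
# Sketch of "keep the hottest objects locally, everything else on S3".
# The cache directory is hypothetical; a real system would use access
# statistics (LRU/LFU) to decide which ~10% of objects stay hot.
import os
import boto

CACHE_DIR = "/var/cache/photos"
bucket = boto.connect_s3().get_bucket("photos-bucket")

def fetch_image(key_name):
    local_path = os.path.join(CACHE_DIR, key_name.replace("/", "_"))
    if os.path.exists(local_path):        # hot object: no S3 bandwidth used
        with open(local_path, "rb") as f:
            return f.read()
    key = bucket.get_key(key_name)        # cold object: pull from S3...
    data = key.get_contents_as_string()
    with open(local_path, "wb") as f:     # ...and keep a local copy for next time
        f.write(data)
    return data
```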

Though storage management is easy with S3, it can at times make things difficult. Permission management was one of those areas: they had to sacrifice speed/performance in favour of the “proxy” mechanism, which is a more robust and reliable way of serving protected objects.

On reliability, Don mentioned that having multiple single points of failure is not really helpful if you want to provide near-100% availability. With S3 in the picture, not only do they have to worry about connectivity from the customer to Smugmug and Smugmug’s own servers, they also have to worry about connectivity to Amazon and round-the-clock availability of Amazon’s services. Hence they had to design their app from the ground up to handle failures gracefully. For example, a failed write to S3 is handled by recording the change locally and syncing it asynchronously later. At other times, when things break, the app is designed to either try again or alert the right folks so that action can be taken.
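
Here is a rough sketch of that “record locally, sync later” pattern for S3 write failures. The journal format, file paths and retry loop are my own illustrative assumptions.

```python
# Sketch of graceful degradation for S3 writes: if the upload fails,
# journal the change locally and let a background job replay it later.
import json
import boto
from boto.s3.key import Key

bucket = boto.connect_s3().get_bucket("photos-bucket")
JOURNAL = "/var/spool/pending_uploads.log"   # hypothetical local journal

def save_photo(key_name, path):
    try:
        Key(bucket, key_name).set_contents_from_filename(path)
    except Exception:
        # S3 (or the network) is unavailable: record the intent locally.
        with open(JOURNAL, "a") as j:
            j.write(json.dumps({"key": key_name, "path": path}) + "\n")

def replay_journal():
    # Run periodically (cron or a daemon) to sync anything that failed earlier.
    try:
        with open(JOURNAL) as j:
            pending = [json.loads(line) for line in j if line.strip()]
    except IOError:
        return                                # nothing pending
    for entry in pending:
        Key(bucket, entry["key"]).set_contents_from_filename(entry["path"])
    open(JOURNAL, "w").close()                # clear only after a full successful pass
```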

While talking about nice-to-have features, he mentioned that S3 shouldn’t be confused with a CDN. It’s not a distributed caching service, and it doesn’t have global edge locations like a CDN does. Regardless, he said, S3 probably should offer limited caching or streaming ability, which would boost performance and add value to an already invaluable service.

On the topic of Smugmug’s future, Don mentioned they are flirting with the possibility of using EC2 in the near future, probably as image-processing compute nodes. Since the EC2 servers sit in Amazon’s facilities, it would save them the bandwidth cost of transferring data out of S3. And since they can turn off instances on demand (and not pay for them while they are offline), it would probably also cut the operating cost of maintaining image-processing servers.
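
That “turn compute on and off as needed” idea maps directly onto the EC2 API. Below is a hedged sketch using the classic boto EC2 client; the worker AMI ID, the jobs-per-worker ratio and the instance type are placeholders I made up.

```python
# Sketch of scaling image-processing workers up and down on demand.
import boto

ec2 = boto.connect_ec2()
WORKER_AMI = "ami-12345678"   # placeholder ID for an image-processing worker AMI

def scale_workers(pending_jobs, jobs_per_worker=100):
    wanted = max(1, pending_jobs // jobs_per_worker)
    reservations = ec2.get_all_instances()
    running = [i for r in reservations for i in r.instances
               if i.state == "running" and i.image_id == WORKER_AMI]
    if len(running) < wanted:
        # Boot more workers during heavy demand...
        ec2.run_instances(WORKER_AMI, min_count=1,
                          max_count=wanted - len(running),
                          instance_type="m1.large")
    elif len(running) > wanted:
        # ...and stop paying for them the moment they are terminated.
        surplus = running[: len(running) - wanted]
        ec2.terminate_instances([i.id for i in surplus])
```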

At the end he mentioned a few services he would like to see Amazon offer in the future. The two important ones (which I also think are critical) are some kind of database service and load-balancer APIs, both of which are currently absent.

PDF Slides of the same presentation here (34 slides in this one)

Scaling Powerset using Amazon’s EC2 and S3

The first thing most dot-com companies do before going public is set up the infrastructure to provide their service. And though it might sound straightforward to most of you, it can be a very expensive affair. To come up with the right kind of infrastructure for any new service, a few key architectural decisions have to be made:

1. Design the infrastructure and architecture to handle traffic peaks (not averages)
2. Design with long-term scaling in mind
3. Design power and cooling infrastructure to support the servers
4. Hire hardware, systems and network support staff (more if 24/7 operations are required)
5. Add buffers to absorb failures and short-term growth requirements
6. Take into account lead times for ordering and procuring hardware (which can be weeks if not months)
7. And a few others, which I won’t bore you with here.

The point is that the initial capital investment can run into millions even before the first customer starts using the service. And once that capital investment is made, it is very difficult to scale the operation down if plans change.

Powerset Inc is a secretive search startup with ambitions of out-smarting Google on its own turf. Based in San Francisco, the company is working on building a better search engine that uses natural language processing to understand the user’s question a little better before answering it. And just like any other search company, its technology is a CPU-hungry beast waiting to be unleashed. Powerset could have gone the way most dot-com companies have gone, but instead they decided to try out Amazon’s EC2 (Elastic Compute Cloud) and S3 (Simple Storage Service) to augment their computational needs.

Powerset has been reported to be testing a 400-instance EC2 cluster running Hadoop Map/Reduce and HDFS (Hadoop Distributed File System). This gets a little tricky on EC2 because of the absence of persistent storage (the OS is re-initialized on every reboot), so they use a combination of HDFS and a remote copying process to sync data back to their local network. Since there is no charge to move data between EC2 and S3, they have been thinking about implementing a native HDFS-to-S3 interface to move data around within Amazon’s network itself.
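
A rough sketch of what such a copy-out step might look like: pull completed job output off HDFS onto the instance’s local disk, then push it into S3. The HDFS path, staging directory, bucket name and use of `hadoop fs -get` are my own assumptions about the workflow, not Powerset’s actual tooling.

```python
# Sketch: copy MapReduce output off the (non-persistent) EC2 cluster into S3.
import os
import subprocess
import boto
from boto.s3.key import Key

HDFS_OUTPUT = "/user/hadoop/job-output"   # where the MapReduce job wrote its results
STAGING_DIR = "/mnt/staging"              # instance-local scratch space

def export_results(bucket_name="results-bucket"):
    # Pull the results out of HDFS onto local disk...
    subprocess.check_call(["hadoop", "fs", "-get", HDFS_OUTPUT, STAGING_DIR])
    # ...then push them to S3 (EC2-to-S3 transfer is free, as noted above).
    bucket = boto.connect_s3().get_bucket(bucket_name)
    for root, _dirs, files in os.walk(STAGING_DIR):
        for name in files:
            path = os.path.join(root, name)
            key_name = os.path.relpath(path, STAGING_DIR)
            Key(bucket, key_name).set_contents_from_filename(path)
```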

EC2 is charged on a per-instance, per-hour basis, which means Powerset can bring new nodes online during heavy demand and shut off unused nodes at the flick of a switch at night. The Powerset engineers also built their own EC2 image, configured to automatically join the HDFS cluster after every boot (see the sketch below). In the event of a node failure, Hadoop takes care of data replication, and EC2 takes care of replacing the failed node with a new one.
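
Launching those self-configuring nodes is straightforward with the EC2 API: pass a boot script as user-data and have the image run it on startup. Everything below is illustrative; the AMI ID, the assumption that the image executes its user-data at boot, the master hostname and the daemon start-up commands are placeholders rather than Powerset’s actual setup.

```python
# Sketch: launch Hadoop worker nodes that join the HDFS/MapReduce cluster at boot.
import boto

ec2 = boto.connect_ec2()
HADOOP_AMI = "ami-87654321"   # placeholder: an image with Hadoop pre-installed

BOOT_SCRIPT = """#!/bin/sh
# Hypothetical first-boot script: point the node at the master, then start the
# worker daemons so it joins HDFS and MapReduce on its own.
echo master.internal > /etc/hadoop-master-address
hadoop-daemon.sh start datanode
hadoop-daemon.sh start tasktracker
"""

def add_workers(count):
    # Hadoop re-replicates data across new nodes; failed nodes are simply replaced.
    return ec2.run_instances(HADOOP_AMI, min_count=count, max_count=count,
                             user_data=BOOT_SCRIPT, instance_type="m1.small")
```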

Amazon EC2 costs 10 cents an hour per instance. If you have to run a 400-node cluster for a month, that’s only about $30,000. Based on performance benchmarks, it looks like the actual CPU throughput of each EC2 instance is roughly equivalent to a 1GHz Pentium III. $72 a month for that kind of server is not exactly cheap, but, just like leasing a car, at least you don’t have to pay upfront or manage the hardware yourself.

So let’s do the math. A regular 64-bit, dual-core, 2-CPU AMD server with about 8GB of RAM costs about $10,000, which excludes the cost of hosting, power, cooling and maintenance. Based on some comments on the Amazon forum, such a machine is about 2 to 3 times faster than EC2’s infrastructure. If you had to replace its CPU power with 8 to 12 EC2 instances, you would spend about $700 to $800 a month. It would take a company using EC2 close to 12 months before it had paid $10,000 towards EC2 computational services for the same amount of computing power. And remember that the $10,000 figure didn’t take into account colocation, power, cooling and general server administration, which can be significant as well. Also remember that those 12 months are 12 months of actual computational usage, which could stretch over a period of 2 to 4 years depending on how often the instances are used.
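
Spelling that math out (using only the figures quoted above, with a 30-day month as my simplifying assumption):

```python
# Back-of-the-envelope check of the numbers above.
HOURLY = 0.10
HOURS_PER_MONTH = 24 * 30                      # 720 hours in a 30-day month

one_instance = HOURLY * HOURS_PER_MONTH        # $72 per instance-month
cluster_400 = one_instance * 400               # $28,800, i.e. roughly $30K a month

# One $10,000 server vs. the quoted 8-12 equivalent EC2 instances
# at roughly $700-$800 a month:
months_to_spend_10k_low = 10000.0 / 800        # 12.5 months
months_to_spend_10k_high = 10000.0 / 700       # ~14.3 months

print(one_instance, cluster_400,
      months_to_spend_10k_low, months_to_spend_10k_high)
```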

However, I also have to point out a few things to look out for. The maximum physical memory available is about 1.7GB, which is relatively tiny if you are used to 8 to 16 GB of RAM on 64-bit hardware. And though CPU/memory might scale horizontally for some applications, cross-server communication can be extremely expensive for others. Unless your application is designed to scale horizontally with under 1.7GB of RAM, I would seriously advise against using EC2 until you figure out how to change that.

I’ve blogged about both S3 and EC2 before, and they continue to fascinate me. The success of companies like Slideshare.net, and the decision of companies like Powerset to use AWS, is something I’ll watch closely over time.

References

Links about Hadoop, and how to use it on Amazon EC2/S3

Other References about Powerset and Amazon