Amazon Route 53: Programmable DNS is finally here

Managing DNS has long been considered an art by many. If you manage your own DNS records and run your own external DNS servers, I'm sure you have some stories to share. Unfortunately, unlike most other infrastructure on the internet, DNS screw-ups can get very costly, especially because caching policies tend to keep your mistakes alive long after you have rolled back your changes.

The unforgiving nature of DNS has pushed most, except a few hardcore sys-admins, to avoid DNS hell and let a managed service do it for them. Domain name registrars like Network Solutions, MyDomain and GoDaddy already provide these DNS services, but I can't recall any of them providing APIs to make changes automatically. DynDNS does provide an API to change DNS mappings, but it costs 15 bucks a year for a single host. There might be others which I'm not aware of, but the bottom line is that there is no standard, and it's not cheap.

Customers on AWS today unfortunately have the same problem, and not surprisingly they too prefer to use 3rd-party service providers to monitor, set up and manage DNS records for them. Today AWS is announcing a new service, "Amazon Route 53", which, technically, isn't a significant breakthrough. But considering the number of users already on AWS and the demand for such a service, this could be one of the biggest game-changing events in the DNS world in the last decade.

The service is pretty cheap, about 12 bucks a year, and comes with a complete set of APIs to create, delete, modify, maintain, and query DNS records on the new service.

To make the transition simple, they even have migration tools from BIND to Amazon Route 53 ready to go. Here are pointers to some more documentation if you want to get down and dirty with it.

But stop thinking of this as just a simple DNS service. I can see a whole range of interesting applications being built on top of it in the next few months. The simplest one would be a "dynamic DNS" service, which would be cheap to build with Route 53 doing most of the grunt work for you.
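
To give you an idea of how little grunt work would be left, here is a rough sketch of creating an A record through the Route 53 API using boto's Route 53 bindings. The hosted zone ID, hostname and IP below are placeholders, and the calls assume a boto build recent enough to include Route 53 support:

import boto
from boto.route53.record import ResourceRecordSets

ZONE_ID = "Z1234567890"           # placeholder: returned when the hosted zone is created
HOSTNAME = "host1.example.com."   # note the trailing dot
NEW_IP = "192.0.2.10"             # placeholder address to publish

conn = boto.connect_route53()     # uses AWS credentials from the environment

# Route 53 takes changes as a batch of CREATE/DELETE operations against a zone.
changes = ResourceRecordSets(conn, ZONE_ID)
record = changes.add_change("CREATE", HOSTNAME, "A", ttl=60)
record.add_value(NEW_IP)
changes.commit()                  # submits the change batch to Route 53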

Here is how Jeff Barr introduced this service

Today we are introducing Amazon Route 53, a programmable Domain Name Service. You can now create, modify, and delete DNS zone files for any domain that you own. You can do all of this under full program control—you can easily add and modify DNS entries in response to changing circumstances. For example, you could create a new sub-domain for each new customer of a Software as a Service (SaaS) application. DNS queries for information within your domains will be routed to a global network of 16 edge locations tuned for high availability and high performance.

Route 53 introduces a new concept called a Hosted Zone. A Hosted Zone is equivalent to a DNS zone file. It begins with the customary SOA (Start of Authority) record and can contain other records such as A (IPV4 address), AAAA (IPV6 address), CNAME (canonical name), MX (mail exchanger), NS (name server), and SPF (Sender Policy Framework). You have full control over the set of records in each Hosted Zone.

Here is some more info from Werner Vogels

Amazon Route 53

Amazon Route 53 is a new service in the Amazon Web Services suite that manages DNS names and answers DNS queries. Route 53 provides Authoritative DNS functionality implemented using a world-wide network of highly-available DNS servers. Amazon Route 53 sets itself apart from other DNS services that are being offered in several ways:

A familiar cloud business model: A complete self-service environment with no sales people in the loop. No upfront commitments are necessary and you only pay for what you have used. The pricing is transparent and no bundling is required and no overage fees are charged.

Very fast update propagation times: One of the difficulties with many of the existing DNS services are the very long update propagation times, sometimes it may even take up to 24 hours before updates are received at all replicas. Modern systems require much faster update propagation to for example deal with outages. We have designed Route 53 to propagate updates very quickly and give the customer the tools to find out when all changes have been propagated.

Low-latency query resolution The query resolution functionality of Route 53 is based on anycast, which will route the request automatically to the DNS server that is the closest. This achieves very low-latency for queries which is crucial for the overall performance of internet applications. Anycast is also very robust in the presence of network or server failures as requests are automatically routed to the next closest server.

No lock-in. While we have made sure that Route 53 works really well with other Amazon services such as Amazon EC2 and Amazon S3, it is not restricted to using it within AWS. You can use Route 53 with any of the resources and entities that you want to control, whether they are in the cloud or on premise.

We chose the name "Route 53" as a play on the fact that DNS servers respond to queries on port 53. But in the future we plan for Route 53 to also give you greater control over the final aspect of distributed system naming, the route your users take to reach an endpoint. If you want to learn more about Route 53 visit http://aws.amazon.com/route53 and read the blog post at the AWS Developer weblog.


AWS CloudWatch is now really open for business

In a surprise move, Amazon today released a bunch of new features for its CloudWatch service, some of which were, until now, only available from third-party service providers.

  • Basic Monitoring – Amazon EC2 instances are now monitored at 5-minute intervals at no additional charge.
  • Elastic Load Balancer Health Checks – Auto Scaling can now be instructed to automatically replace instances that have been deemed unhealthy by an Elastic Load Balancer.
  • Alarms – You can now monitor Amazon CloudWatch metrics, with notification sent to the Amazon SNS topic of your choice when a metric falls outside of a defined range (see the sketch after this list).
  • Auto Scaling Suspend/Resume – You can now push a "big red button" in order to prevent scaling activities from being initiated.
  • Auto Scaling Follow the Line – You can now use scheduled actions to perform scaling operations at particular points in time, creating a time-based scaling plan.
  • Auto Scaling Policies – You now have more fine-grained control over the modifications to the size of your AutoScaling groups.
  • VPC and HPC Support – You can now use AutoScaling with Amazon EC2 instances that are running within your Virtual Private Cloud or as Cluster Compute instances.
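
To make the Alarms + SNS combination concrete, here is a rough sketch using boto's CloudWatch support. The instance ID, SNS topic ARN and thresholds are placeholders, and the constructor/method names assume a boto build recent enough to include alarm support:

import boto
from boto.ec2.cloudwatch import MetricAlarm

cw = boto.connect_cloudwatch()   # uses AWS credentials from the environment

# Alarm when average CPU on one instance stays above 80% for two 5-minute periods.
alarm = MetricAlarm(
    name="high-cpu-i-12345678",                   # placeholder alarm name
    metric="CPUUtilization",
    namespace="AWS/EC2",
    statistic="Average",
    comparison=">",
    threshold=80,
    period=300,
    evaluation_periods=2,
    dimensions={"InstanceId": ["i-12345678"]},    # placeholder instance
    alarm_actions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
cw.create_alarm(alarm)   # named put_metric_alarm in some boto releases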

The unbiased private vs AWS ROI worksheet

One of my problems with most cloud ROI worksheets is that they are heavily weighted toward use-cases where resource usage is very bursty. But what if your resource requirements aren't bursty? And what if you have a use case where you have to maintain a small IT team to manage some on-site resources due to compliance and other issues?


In his latest post, Richard shares his worksheet for everyone to play with.

 

 

How to set up Amazon Cloudfront (learning by experimentation)

I have some experience with Akamai's WAA (Web Application Accelerator) service, which I've been using in my professional capacity for a few years now, and I've been curious about how Cloudfront compares with it. Until a few weeks ago, Cloudfront didn't have a key feature which I think was critical for it to win over traditional CDN customers. "Custom origin" is an amazing new feature which I finally got to test last night, and here are my notes for those who are curious as well.

The test application I tried to convert was my news aggregator portal http://www.scalebig.com/. The application consists of a rapidly changing front page (it changes a few times a day), a collection of old pages archived in a sub-directory, and other webpage elements like headers, footers, images, style-sheets etc.

  • While Amazon Cloudfront does have a presence on the AWS management console, the console only supports S3 buckets as origins.
  • Since my application didn't have any components which require server-side processing, I tried to put the whole website in an S3 bucket and use S3 as the origin.
  • When I initially set it up, I ended up with multiple URLs which I had to understand:
    • S3 URL – This is the unique URL to your S3 bucket. All requests to this URL go to Amazon's S3 server cluster, and if your objects are not marked as private, anyone can get them. The object could be a movie, an image, or even an HTML file.
    • Cloudfront URL – This is the unique Cloudfront URL which maps to your S3 resource through the Cloudfront network. For all practical purposes it's the same as the first one, except that requests are served through the CDN service.
    • Your own domain name – This is the actual URL which end users will see; it is a CNAME to the Cloudfront URL.
  • So in my case, I configured the DNS entry for www.scalebig.com to point to the DNS name the Cloudfront service created for me (dbnqedizktbfa.cloudfront.net).
  • The first thing that broke: I forgot that this is just an S3 bucket, so it can't handle things like server-parsed HTML to dynamically append headers/footers. I also realized that I couldn't control cache policies, set up expiry, etc. But the worst problem was that if you went to "http://www.scalebig.com/" it would throw an error; it was expecting a file name, so http://www.scalebig.com/index.html would have worked.
  • In short, I realized that my idea of using S3 as a webserver was full of holes.
  • When I started digging for options to enable "custom origin", I realized that those options do not exist on the AWS management console. I was instead directed to third-party applications to do this (most of them commercial products, except two).
  • I finally created the Cloudfront configuration using Cloudberry S3 Explorer PRO, which allowed me to point Cloudfront to a custom domain name (instead of an S3 resource).
  • In my case my origin server was running on EC2 with a reserved public IP. I'm not yet using AWS ELB (Elastic Load Balancer).
  • Once I got that working (it literally worked out of the box), the next challenge was to set up the cache controls and expiries. If they are set incorrectly, users may not get the latest content. I set up the policies using ".htaccess". Below I've attached the part of the .htaccess I have for the /index.html page, which is updated many times a day. There is a similar .htaccess for the rest of the website which sets a much longer expiry.
  • Finally, I realized that I might have to invalidate parts of the cache at times (say, due to a bug). Cloudberry and the AWS management console didn't have any option available for this, but "boto" has APIs which can work with the Amazon Cloudfront invalidation API to do it (see the sketch after the .htaccess snippet below).

# turn on the module for this directory
ExpiresActive on
# set default
ExpiresDefault "access plus 1 hours"
ExpiresByType image/jpg "access plus 1 hours"
ExpiresByType image/gif "access plus 1 hours"
ExpiresByType image/jpeg "access plus 1 hours"
ExpiresByType image/png "access plus 1 hours"
ExpiresByType text/css "access plus 1 hours"
ExpiresByType text/javascript "access plus 1 hours"
ExpiresByType application/javascript "access plus 1 hours"
ExpiresByType application/x-javascript "access plus 1 hours"
ExpiresByType application/x-shockwave-flash "access plus 1 hours"

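# mod_headers: also send an explicit Cache-Control max-age matching the 1 hour expiry above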
Header set Cache-Control "max-age=3600"

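# mod_deflate: compress text content at the origin before it gets cached at the edge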
AddOutputFilterByType DEFLATE text/html text/plain text/xml application/javascript text/javascript  application/x-javascript text/css
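
And since I mentioned invalidation above, here is the kind of call I was referring to: a rough sketch using boto's Cloudfront bindings. The distribution ID and paths are placeholders, and invalidation support assumes a reasonably recent boto build.

import boto

cf = boto.connect_cloudfront()    # uses AWS credentials from the environment

DISTRIBUTION_ID = "E123456ABCDEF"             # placeholder distribution ID
paths = ["/index.html", "/css/site.css"]      # objects to purge from the edge caches

# Submit an invalidation batch; Cloudfront then removes these objects from all
# edge locations over the next several minutes. The returned object can be
# polled to see when the invalidation has completed.
invalidation = cf.create_invalidation_request(DISTRIBUTION_ID, paths)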

Here is how I would summarize the current state of Amazon Cloudfront:

  • It's definitely ready for static websites which don't have any server-side execution code.
  • Cloudfront only accepts GET and HEAD requests.
  • Cloudfront ignores cookies, so the server can't set any. (Browser-based cookie management will still work, which could be used to keep in-browser session data.)
  • If you do want to use server-side code, use iframes, JSONP, JavaScript widgets or some other mechanism to execute code from a different domain name (one which is not on Cloudfront).
  • While Cloudfront can log access logs to an S3 bucket of your choice, I'd recommend using something like Google Analytics for log analysis.
  • I'd recommend buying one of the commercial third-party products if you want to use Custom Origin, and I'd also recommend reading more about the protocols/APIs before you fully trust a production service to Cloudfront.
  • I wish Cloudfront would start supporting something like ESI (Edge Side Includes), which could effectively make an S3 bucket a full-fledged webserver without the need for a running EC2 instance all the time.
  • Overall Cloudfront has a very long way to go, in the number of features, before it can be treated as a competitor to Akamai's current range of services.
  • And if you look at Akamai's current worldwide presence, Cloudfront is just a tiny blip.  [ Cloudfront edge locations ]
  • But I suspect that Cloudfront’s continuous evolution is being watched by many and the next set of features could change the balance.

I'm planning to leave http://www.scalebig.com/ on Cloudfront for some time to learn a little more about its operational issues. If you have been using Cloudfront, please feel free to leave comments about what important features you think are still missing.

Cloud economics: Not really black and white..

While some of the interest in moving toward the public cloud is based on sound economics, a small segment of this movement is purely due to "herd mentality".

The slide on the right, from a Microsoft publication, shows that larger networks may be less economical in the cloud (at least today).

Richard Farley has been discussing this very topic for a few months now. He observed that a medium-sized organization which already has decent IT infrastructure, including a dedicated IT staff to support it, has significantly smaller overhead than cloud vendors might make it look.

Here is a small snippet from his blog. If you are not afraid to get dirty with numbers, read the rest here.

Now, we know we need 300 virtual servers, each of which consumes 12.5% of a physical host.  This means we need a total of 37.5 physical hosts.  Our vendor tells us these servers can be had for $7k each including tax and delivery with the cabinet.  We can’t buy a half server, and want to have an extra server on hand in case one breaks.  This brings our total to 39 at a cost of $273k.  Adding in the cost of the cabinet, we’re up to $300k.

There are several non-capital costs we now have to factor in.  Your vendor will provide warranty, support and on-site hardware replacement service for the cabinet and servers for $15k per year.  Figure you will need to allocate around 5% of the time of one of your sys admins to deal with hardware issues (i.e., coordinating repairs with the vendor) at a cost of around $8k per year in salary and benefits.  Figure power and cooling for the cabinet will also cost $12k per year.  In total, your non-capital yearly costs add up to $35k.
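
If you want to play with his assumptions, the arithmetic is simple enough to sketch in a few lines of Python (the numbers below are just the ones from his example; swap in your own):

import math

# Capital costs (one time), using the example numbers above
virtual_servers = 300
vms_per_host = 8                         # each VM consumes 12.5% of a physical host
hosts_needed = int(math.ceil(virtual_servers / float(vms_per_host)))   # 37.5 -> 38
hosts_bought = hosts_needed + 1          # plus one spare on hand -> 39
server_cost = hosts_bought * 7000        # $7k per server -> $273k
capex = server_cost + 27000              # cabinet takes the total to ~$300k

# Recurring (non-capital) yearly costs
opex = 15000 + 8000 + 12000              # warranty/support + sys admin time + power/cooling

print("capex: $%dk, yearly opex: $%dk" % (capex / 1000, opex / 1000))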

One thing the post doesn't clearly articulate is that while long-term infrastructure is cheaper to host in a private cloud, it may still be more economical to use the public cloud for short-term, resource-intensive projects.

AWS Cloudfront grows up… a little. It now allows custom origins.

Cloudfront has come a long way from its humble beginnings. Here is what Jeff had to say when he announced that it's out of "beta"…

    1. First, we’ve removed the beta tag from CloudFront and it is now in full production. During the beta period we listened to our customers and added a number of important features including Invalidation, a default root object, HTTPS access, private content, streamed content, private streamed content, AWS Management Console support, request logging, and additional edge locations. We’ve also reduced our prices.
    2. There’s now an SLA (Service Level Agreement) for CloudFront. If availability of your content drops below 99.9% in any given month, you can apply for a service credit equal to 10% of your monthly bill. If the availability drops below 99% you can apply for a service credit equal to 25% of your monthly bill.

While all this is a big step forward, it's probably not enough for the more advanced CDN users to switch over yet.

Here are a couple of issues which stuck out in the Developer Guide.

  • Query parameters are not used to generate the cache key. So while it looks like it can pull content from an elastic load balancer, it still acts like a giant S3 accelerator.
  • Doesn’t support HTTP/1.1 yet. So if you have multiple domains on the same IP, this solution isn’t for you.

Building your first cloud application on AWS

Building your first web application on AWS is like shopping for a car at Pep Boys, part by part. While the manuals to build one might be on aisle 5, the experience of having built one already is harder to buy.

Here are some interesting logistical questions which I don't think get enough attention when people discuss building a new AWS-based service.

  1. Picking the right Linux distribution: Switching OS distribution may not be too simple if your applications need custom scripts. Picking and sticking with a single distribution will save a lot of lost time.
  2. Automated server builds: There are many ways to skin this cat. Chef, Puppet and Cfengine are all good… What's important is to pick one early in the game.
  3. Multi-Availability Zone support: Find out if multi-availability-zone support is important. This can impact the overall architecture of the solution and the tools used to implement it.
  4. Data consistency requirements: Similar to the Multi-AZ question, it's important to understand the data consistency tolerance of the application before one starts designing it.
  5. Datastore: There are different kinds of datastores available as part of AWS itself (SimpleDB, S3 and RDS). If you are planning to keep your options open about moving out of AWS at some point, you should think about picking a datastore which could move out with you with little effort. There are many NoSQL and RDBMS solutions to choose from.
  6. Backups: While some think it's a waste of time to think about backups too early, I suspect those who don't will spend way too much time on them later. A long-term backup strategy is an integral part of disaster recovery planning, without which you shouldn't think of going live.
  7. Integration with external data sources: If this application is part of a larger cluster of applications running somewhere else, think about how data would be sent back and forth. There are lots of different options depending on how much data is involved (and how important protection of that data is).
  8. Monitoring/Alerting: Most standard out-of-the-box monitoring tools can't handle dynamic infrastructure very well. There are, however, plugins available for many existing monitoring solutions which can cope with the dynamic nature of the infrastructure. You could also choose one of the 3rd-party monitoring services if you'd rather pay someone else to do it.
  9. Security: You should be shocked to see this at #9 on my list. If your service involves user data or some other kind of intellectual property, build a multi-tiered architecture to segment different parts of your application from targeted attacks. Security is also very important when picking the right caching and web server technologies.
  10. Development: Figure out how developers will use AWS. Will they share the same AWS account, parts of the infrastructure, the datastore, etc.? How will developer resource usage be monitored so that unintentional use of excessive resources can be flagged for alerting?

Are there other subtle issues which I should have listed here? Let me know.

 

James Hamilton: Data center infrastructure innovation

A summary from James' keynote talk at Velocity 2010.

  • Pace of Innovation – The datacenter pace of innovation is increasing. The high focus on infrastructure innovation is driving down cost, increasing reliability and reducing resource consumption, which ultimately drives cost down further.
  • Where does the money go?
    • 54% on servers, 8% on networking, 21% on power distribution, 13% on power, 5% on other infrastructure requirements
    • 34% costs related to power
    • Cost of power is trending up
  • Cloud efficiency – server utilization in our industry is in the 10 to 15% range
    • Avoid holes in the infrastructure use
    • Break jobs into smaller chunks, queue them wherever possible
  • Power distribution – 11 to 12% lost in distribution
    • Rules to minimize power distribution losses
      • Oversell power – set up more servers than the available power strictly allows; 100% of the servers are never needed at once in a regular datacenter.
      • Avoid voltage conversions
      • Increase efficiency of conversions
      • High voltage as close to load as possible
      • Size voltage regulators to load and use efficient parts
      • High voltage direct current a small potential gain
  • Mechanical Systems – One of the biggest savings is in cooling
    • What parts are involved? – Cooling towers, heat exchangers, pumps, evaporators, compressors, condensers… and so on.
    • The efficiency of these systems, and the power required to run them, depends on the difference between the desired temperature and the current room temperature
    • Separate hot and cold aisles… insulate them (don't break the fire codes)
    • Increase the operating temperature of servers
      • Most are rated for between 61F and 84F
      • Telco standard is 104F (Game consoles are even higher)
  • Temperature
    • Limiting factors to high temp operation
      • Higher fan power trade-off
      • More semiconductor leakage current
      • Possible negative failure rate impact
    • Avoid direct expansion cooling entirely
      • Air side economization 
      • Higher data center temperature
      • Evaporative cooling
    • Requires filtration
      • Particulate and chemical pollution
  • Networking gear
    • Current networks are over-subscribed
      • Forces workload placement restrictions
      • Goal: all points in datacenter equidistant.
    • Mainframe model goes commodity
      • Competition at each layer rather than vertical integration
    • Openflow: open S/W platform
      • Distributed control plane to central control

More on Amazon S3 versioning (webinar)

If you missed the AWS S3 versioning webcast, I have a copy of the video here. And here are the highlights..


  • You can enable and disable this at the bucket level
  • They don't think there is a performance penalty for turning on versioning (but it seems obvious that S3 would be doing slightly more work to figure out which is the latest version of any object you have)
  • There isn't any additional cost for using versioning, but you do have to pay for each extra copy of an object.
  • MFA (multi-factor authentication) for deleting objects is not mandatory when versioning is turned on; it has to be enabled separately. This was slightly confusing in the original email I got from AWS.
  • If you are planning to use this, please watch the video. There is a part where they explain what happens if you disable versioning after using the feature, which is something you might like to know about.
  • They use a GUID as the version ID for each object
  • You can iterate over objects and figure out how many versions you have of each object, but currently it's not possible to find all objects which have versions older than a given date. This is important if you are planning to do garbage collection (cleaning up older copies of data) at a later time (see the sketch below).
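
For the curious, here is a rough sketch of that iteration using boto's S3 support. The bucket name and cutoff date are placeholders, and the version-listing calls assume a boto build recent enough to know about versioning.

import boto

s3 = boto.connect_s3()
bucket = s3.get_bucket("my-versioned-bucket")   # placeholder bucket name

# Walk every version of every key and flag the ones older than a cutoff date.
# This is the client-side scan you are forced into today; there is no
# server-side "versions older than X" query.
cutoff = "2010-01-01T00:00:00.000Z"             # placeholder cutoff, ISO 8601
for version in bucket.list_versions():
    if version.version_id != "null" and version.last_modified < cutoff:
        print("%s (version %s) is older than the cutoff" %
              (version.name, version.version_id))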


Versioning data in S3 on AWS

One of the problems with Amazon's S3 was the inability to take a "snapshot" of the state of S3 at any given moment. This is one of the most important DR (disaster recovery) steps for any major upgrade which could potentially corrupt data during a release. Until now, applications using S3 would have had to manage versioning of data themselves, but Amazon has now launched a versioning feature built into S3 itself to do this particular task. In addition, they have made it possible to require that delete operations on versioned data be done only with MFA (multi-factor authentication).

Versioning allows you to preserve, retrieve, and restore every version of every object in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects any time you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.

The way the AWS Blog describes the feature, it looks like a version is created every time an object is modified, and each object in S3 could have a different number of copies depending on the number of times it was modified.
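
As a quick illustration of what that means in practice, here is a hedged sketch using boto: overwrite a key twice, then pull back the older copy by its version ID. The bucket name is a placeholder, and this assumes versioning is already enabled on the bucket.

import boto

s3 = boto.connect_s3()
bucket = s3.get_bucket("my-versioned-bucket")   # placeholder, versioning enabled

# Two PUTs to the same key create two versions instead of overwriting in place.
key = bucket.new_key("config.txt")
key.set_contents_from_string("version one")
key.set_contents_from_string("version two")

# A plain GET returns the latest write ("version two") ...
latest = bucket.get_key("config.txt")
print(latest.get_contents_as_string())

# ... while older copies stay retrievable by version ID.
versions = list(bucket.list_versions(prefix="config.txt"))
oldest = versions[-1]          # the listing returns the newest version first
old_copy = bucket.get_key("config.txt", version_id=oldest.version_id)
print(old_copy.get_contents_as_string())        # "version one"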

This reminds me of SVN/CVS-style version control systems, and I wonder how long it will take for someone to build a source code versioning system on top of S3.

BTW, data requests to a versioned object are priced the same way as requests for regular data, which basically means you are getting this feature for free (aside from the storage for the extra copies).
