The pain of load balancing applications

Introduction


Loadbalancing may mean many different things to different people, but it's all about distributing load. To me it's an architecture in which a network service is scaled by adding multiple servers performing the same tasks.

If you had a popular website with static content and your server couldn't keep up with the requests, all you had to do was set up multiple web servers and use round-robin DNS entries to divide the load across them. For dynamic web applications like search engines this plays an even more significant role, because the number of users each node can support is much lower.
Over time, as applications grew more complex and as web companies found customers outside the US, they found that the only way to optimize network performance was to go local. Loadbalanced POPs (points of presence) around the world provide a snappy user experience, which has been important in drawing more customers.

While static content on web servers can easily be replicated to servers around the world, some web applications need to maintain the state of user actions. Loadbalancer vendors have been attacking this particular problem for the last few years. Among the many odd ways of doing this, one was to associate a source IP with a web server. Unfortunately, some ISPs switch the source IP in the middle of a session, which proved disastrous for some applications. Others used cookies or session identifiers in the URL to solve the problem.
Loadbalancing is not rocket science, but it's not for the faint of heart either. This article is a collection of my past and present thoughts on loadbalancing architectures which I've worked with or read about.

Under the hood


Though loadbalancers sound simple, under the hood they are a complicated beast. Today's loadbalancers have so many features that they sometimes overshadow the complexity of the application they are supposed to loadbalance. It's also important to note that loadbalancers are not just designed for web applications anymore. They are ideal hardware for loadbalancing databases, LDAP servers, terminal servers and other custom behind-the-scenes applications.

Firewalls and Application Gateways


The internal design and implementation of a modern loadbalancer is very close to that of a basic firewall. While a firewall is designed to block all illegal traffic, it also does limited network and port address translation. A good firewall is more than a packet filter in the sense that it actually keeps state of what's going on between the client and the server. From the moment a session is initiated, assuming it's allowed by the ACLs (access control lists), the firewall creates a session record where it logs the traffic protocol, source and target addresses and port numbers. Subsequent packets are tagged and allowed through or rejected based on which sessions are valid.
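
To make this concrete, here is a minimal sketch (in Python, with invented addresses and a toy ACL) of the kind of session table a stateful firewall keeps; real implementations live in the kernel and handle vastly more edge cases:

    # Toy stateful session tracking, keyed by the classic 5-tuple.
    ALLOWED = {("tcp", 80), ("tcp", 443)}     # ACL: permitted (protocol, destination port)
    sessions = set()                          # established session records

    def handle_packet(proto, src, sport, dst, dport, syn=False):
        key = (proto, src, sport, dst, dport)
        if key in sessions:
            return "forward"                  # part of a known, valid session
        if syn and (proto, dport) in ALLOWED:
            sessions.add(key)                 # new session permitted by the ACL
            return "forward"
        return "drop"                         # everything else is rejected

    print(handle_packet("tcp", "10.0.0.5", 33000, "192.168.1.4", 80, syn=True))   # forward
    print(handle_packet("tcp", "10.0.0.5", 33000, "192.168.1.4", 80))             # forward
    print(handle_packet("udp", "10.0.0.5", 33000, "192.168.1.4", 53))             # drop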

HTTP is a relatively trivial protocol when compared to more complex protocols like FTP and SNMP. UDP and ICMP in particular are complicated beasts because they were never designed to be "stateful", which is one of the basic requirements for a protocol to be firewalled and tracked easily. UDP, ICMP and other such protocols have forced firewall vendors to come up with custom hacks to deal with these problems.

Depending on whom you talk to, a firewall which can talk to two different networks and inspect and validate sessions using "deep packet inspection" could be called an "application gateway", because it probably has sufficient intelligence to understand, create responses for, and respond to requests in that application protocol. Most modern firewalls could be called HTTP gateways because they can understand and respond to HTTP requests.

TCP/IP basics


To understand what an "application gateway" does, it's important to understand how a TCP/IP connection is established (a small client-side sketch follows the list below).

  • Resolve the address
    Address resolution is the first step for every successful TCP/IP connection. A client and server cannot communicate with just a name; the name first has to be resolved to an IP address, usually via DNS.

  • SYN
    The next step for the client is to send the first TCP/IP packet with the "SYN" flag set. This is like a "hello" packet telling the server that the client is interested in talking. One more piece of information in this packet which the server needs to know is the port number on which the client wants to talk. For most web requests it's set to 80 or 443.

  • SYN-ACK
    If the server wants to talk on that port, and if it has the resources, it will reply with a packet in which both "SYN" and "ACK" are set. When the client gets this packet it knows that the server is alive and that the service is running on that particular port.

  • ACK
    The client at this point can "ACK" the previous packet which the server sent and can, if it wants, send data too.
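
As promised, here is a minimal client-side sketch of this exchange in Python. The three-way handshake happens inside connect(); the hostname is a placeholder:

    import socket

    # Step 1: resolve the address (name -> IP) before anything else.
    host, port = "www.example.com", 80
    addr = socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)[0][4]

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(addr)               # steps 2-4: SYN, SYN-ACK and ACK happen here

    # With the handshake done, the client can start sending data.
    s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    print(s.recv(200))            # the first bytes of the server's reply
    s.close()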


An "HTTP Gateway" does two important things with this knowledge. First it talks the the browser and does the TCP/IP handshake to understand where the user wants to go. This is important to understand, because even though the browser assumes it is connected to web server, its actually being terminated on the firewall. Once the gateway decodes the HTTP request and knows where it has to go to (and whether that request is allowed) it will initiate a second TCP/IP connection with the webserver at the backend server using a second set of handshake packets. Thats the point when the browser is really connected to the server.
A loadbalancing appliance, for the most part, works just like this. The only thing significantly different with the loadbalancer is its ability to send traffic to multiple servers without the user knowing about it.
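
A toy version of this two-connection behavior is easy to sketch. The following Python snippet (the listening port and backend address are invented, and a real gateway would decode the request and check its rules before connecting) accepts a browser connection, opens a second connection to a backend server, and shuttles bytes between the two:

    import socket, threading

    BACKEND = ("192.168.1.4", 80)        # hypothetical backend web server

    def pump(src, dst):
        # Copy bytes one way until either side closes the connection.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        dst.close()

    listener = socket.socket()
    listener.bind(("", 8080))            # where the "browser" connects
    listener.listen(5)

    while True:
        client, _ = listener.accept()                 # handshake #1: browser <-> gateway
        server = socket.create_connection(BACKEND)    # handshake #2: gateway <-> backend
        threading.Thread(target=pump, args=(client, server), daemon=True).start()
        threading.Thread(target=pump, args=(server, client), daemon=True).start()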

Basic loadbalancing terminology


The terminology I use in the rest of this document is based on my experience with the Cisco CSS and Radware WSD/CT100s. I've noticed that vendors take great liberties in creating new terminology, which can easily confuse the admin.

  • Service endpoint
    A "service" in CSS is defined as an endpoint which can provide a service. An example of such an endpoint would be a server with IP 192.168.1.4 running a TCP service on port 80. If you want to loadbalance a couple of read-only Oracle servers, you might have them providing service on port 1531 instead. In most cases a client won't ever directly connect to this endpoint. The only exception is when the loadbalancer is doing DNS-based loadbalancing, in which case the client will connect directly to the service endpoint.
    This terminology is a little fuzzy in Radware WSD. By default WSD assumes one wants to loadbalance all available services on all ports of the servers and doesn't force the user to select a port number on which the service is running. This might be a good thing when you have multiple servers providing multiple services, but I personally avoid it for reasons I'll explain later.

  • Content rule endpoint
    A "content rule" in CSS is defined as the endpoint to which an actual user connects. In the case of a TCP/IP-based service, it would probably include the IP and port number where the requests should come in. If this is a DNS-based loadbalancer, it would probably be running a DNS server on port 53 over UDP/TCP.

  • Session persistence
    The feature which allows the LB to track user sessions and direct them to the same server for subsequent requests is what I call "session persistence". Again, there are many different ways of doing this depending on which application server and loadbalancer you use.

  • Timeouts
    This is one of the most critical parameters and it plays a big role in how your application behaves. While timeouts allow clients and application/server/networking components to understand when to give up, they also play a big role in freeing up critical resources which can otherwise slow down the application. But setting them too low, or setting different timeouts in different parts of the network and application stack, can break your application in unexpected ways.

  • Keepalives
    The word keepalive has a different meaning in different contexts. If you are a networking guru, you know that keepalives can be used in some protocols to keep connections alive through firewalls which would otherwise shut down the connection due to inactivity. If you are a web guru, you are thinking of keepalives in the HTTP protocol, which allow the browser to send multiple requests to the server without renegotiating TCP/IP all over again. Unfortunately, Cisco CSS also uses this term for its service-availability checks.

  • Layer 4 loadbalancing
    Most of the early loadbalancers did loadbalancing and session persistence based on the source IP and port number. In a perfect world, where every user has his or her own IP which doesn't change over time, this is a perfect solution. But in our world, where ISPs like AOL change proxy servers without telling the user and where 100 to 1000 users can be NATed behind the same source IP, this solution doesn't work.

  • Layer 7 loadbalancing
    This is what most loadbalanced web applications use to persist and distribute sessions across multiple servers. It requires the loadbalancer to inspect the HTTP request and look at various HTTP header parameters to make a decision. Common parameters which get inspected are the HOST string, the REQUEST_URI and cookies (a rough sketch of cookie-based persistence follows this item).
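
    As a rough sketch of what the appliance does internally (the cookie name and backend addresses are invented, and real devices parse far more carefully), cookie-based layer 7 persistence boils down to something like this:

      # Pin each session to a backend using the session cookie.
      BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
      stick_table = {}        # session id -> chosen backend
      rr = 0                  # fallback round-robin counter

      def pick_backend(raw_request: bytes) -> str:
          global rr
          cookies = {}
          for line in raw_request.split(b"\r\n"):
              if line.lower().startswith(b"cookie:"):
                  for pair in line.split(b":", 1)[1].split(b";"):
                      name, _, value = pair.strip().partition(b"=")
                      cookies[name] = value
          session = cookies.get(b"JSESSIONID")        # assumed session cookie name
          if session in stick_table:
              return stick_table[session]             # persist: same server as last time
          backend = BACKENDS[rr % len(BACKENDS)]      # new session: plain round robin
          rr += 1
          if session:
              stick_table[session] = backend
          return backend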

  • Load distribution algorithms
    One of the trickiest problems for a loadbalancer is finding the most optimal server to send a new user to. Unlike round-robin DNS, which gives the same weight to each of the servers, some algorithms can send more traffic to particular servers if they are faster/newer than the others, or less traffic to nodes which are very busy or have a lot of active online sessions. Some of the common algorithms I've come across are listed below; a small weighted round-robin sketch follows the list.



  • Round Robin

  • Weighted Round Robin

  • Least Users

  • Weighted Least Users

  • Least Traffic

  • Weighted Least Traffic
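
    Weighted round robin is the easiest of these to illustrate. A minimal sketch (server names and weights are invented):

      import itertools

      # "web2" is newer hardware, so it gets twice the traffic.
      WEIGHTS = {"web1": 1, "web2": 2, "web3": 1}

      def weighted_round_robin(weights):
          # Yield servers in proportion to their weights, forever.
          expanded = [srv for srv, w in weights.items() for _ in range(w)]
          return itertools.cycle(expanded)

      picker = weighted_round_robin(WEIGHTS)
      print([next(picker) for _ in range(8)])
      # ['web1', 'web2', 'web2', 'web3', 'web1', 'web2', 'web2', 'web3']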


  • DNS-based loadbalancing
    The mechanism of distributing load at DNS query time is called DNS round robin. The loadbalancing appliance usually does some kind of check to see which web servers are available. Based on the load distribution algorithm, it sends a list of the available nodes, in order of priority, as part of the DNS query response to the customer (a toy rotation sketch follows this item).
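
    The client-visible effect is a rotated answer list: each response lists the available nodes in a different order, so successive clients start with different servers. A toy sketch (addresses invented):

      available = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]   # healthy nodes only

      def dns_answer():
          # Rotate the record list one step per query, DNS round-robin style.
          available.append(available.pop(0))
          return list(available)

      print(dns_answer())   # ['192.0.2.2', '192.0.2.3', '192.0.2.1']
      print(dns_answer())   # ['192.0.2.3', '192.0.2.1', '192.0.2.2']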

  • Global loadbalancing
    This terminology is generally reserved for appliances which try to loadbalance customers across different points of presence around the world or country. The appliance does some kind of polling to find out which POP is closest and most responsive to the customer before it sends the client there. Implementations of global loadbalancing vary, but DNS is one of the popular mechanisms for directing users.

Design Recommendations



    • N, N+1 or 2*N configuration
      Whenever resources are procured and deployed, always plan for one extra. This is the only way you can provide continuous service without degrading quality. You don't have to keep it running; just have it available as a standby. Loadbalancing solutions which understand the significance of a standby server and know when to use it can reduce the number of annoying phone alerts at 3am on a Sunday morning.

    • Health monitoring
      Almost all loadbalancers claim to have some mechanism for detecting web server failure. But if you have a complex web application which relies on a host of other components to service customer requests, make sure the health monitoring module can accurately poll node health. For example, there are times when requesting the "/index.html" page may come back with "200 OK", while "/login.aspx?username=xyz&pass=xyz" throws a stack trace because LDAP was not available. Also remember that the frequency of health checks can itself degrade your application's response time (a sketch of a deeper health check follows this item).
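
      As a sketch, a deeper health check validates an application path end to end instead of just fetching a static page (the URL and the expected marker string are placeholders):

        import urllib.request

        CHECK_URL = "http://192.168.1.4/login.aspx?username=probe&pass=probe"
        EXPECTED = b"Welcome"     # marker only a healthy login page would contain

        def node_is_healthy(url=CHECK_URL, timeout=2):
            # Healthy only if the page loads AND contains the expected marker.
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status == 200 and EXPECTED in resp.read()
            except OSError:       # refused, timed out, HTTP 5xx and friends
                return False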

    • Maintaining state
      Applications which maintain state information in session memory are very picky about session persistence. Most loadbalancers can be configured to extract session identifiers from the URL or from cookies. If you know how your application sends session identifiers to the end user, make sure the loadbalancer supports it. Unfortunately, though cookies are simple to implement on the application server, they can sometimes become a complicated beast for networking devices. Here are the problems I've dealt with in the past:

      • Cookies need to be enabled
        Applications which maintain sessions require cookies to be enabled in the browser. URL rewriting is another way to send the session identifier; however, it's considered less secure because most proxy servers log GET/POST requests, which will include the session identifier. If you are using SSL this is not a problem, although bookmarks can get ugly.

      • Cookie size is limited
        If you have a lot of cookies, or forget to delete cookies from users' browsers, they will add up to the point where they can no longer fit in the HTTP header. What's more tricky is the fact that some loadbalancers don't even read the complete cookie header, which means that if the session cookie is at the end of a long list of cookies, some loadbalancers might actually ignore it.

      • Cookies + Java over SSL
        If your application uses HTTPS and has Java applets communicating over SSL, this is one bug to look out for. We have seen instances where Java applets insert the HTTP cookie headers into the HTTPS header section instead of the HTTP header. The workaround is to do the HTTP->HTTPS packet encapsulation yourself. If this bug does show up in your network, the responsibility of extracting the cookie from the HTTPS packet and inserting it into the HTTP packet belongs to the SSL engine you are using. For us Radware seemed to do the trick, so we were never able to break the application in-house. However, some clients outside our company were using proxy servers which removed extra information from the SSL header, which broke our application.

      • Set-Cookie bug
        One of the very early session persistence bugs we noticed, in a couple of loadbalancers I tested in late 2000, was one where the "Set-Cookie" HTTP header from the server was ignored by the loadbalancer. This meant there was a very good possibility that the first HTTP request the client sent with the cookie set would go to a different server from the one which sent the Set-Cookie response to the client.

      • Keep-alive bug
        Keep-alives are designed to optimize network throughput by allowing clients to send multiple HTTP requests over the same TCP channel. Unfortunately, some loadbalancers ignore all cookies except the one in the first HTTP request. The logic of this implementation is simple: once a client is connected to a server, there is no reason to check the cookies anymore. The problem, however, shows up when the client is behind a proxy server. Some "intelligent" proxy servers can multiplex multiple clients' requests over the same keepalive channel, which can play havoc with sessions if the loadbalancer doesn't decode every request.



    • Inactivity timeouts
      The inactivity timeout of an established TCP/IP connection can be a problem if delays of over a minute are normal for your web application. We have faced a number of timeout-related issues in our network. Six of the most common components which can time out your TCP connection early are listed below (a TCP keepalive sketch follows the list).

      • Proxy servers

      • Firewalls

      • Loadbalancers

      • SSL Accelerators

      • Web server

      • Application server
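
      One common defense is to have the endpoints send TCP keepalive probes more often than the shortest idle timeout in the path. A sketch using standard socket options (the values are examples, and the TCP_KEEP* constants are Linux-specific):

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # enable keepalives

        # Probe after 30s of idle time, then every 10s, give up after 6 misses.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6)

        # Any device in the path with an idle timeout above ~30 seconds will now
        # see periodic traffic and keep its session record for this connection.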



    • Session timeouts
      Session timeouts are important too. In most cases the two components below are the only ones which actually worry about "sessions" spanning multiple TCP connections, and their session timeouts should agree with each other.

      • Loadbalancers

      • Application server



    • Recommended Optimizations

      • Use multiple domains
        If you have a site with a lot of images, CSS files or Javascript embedded in it, I strongly recommend you distribute your files over multiple "hosts". The reason is simple: both IE and Firefox limit how many objects can be downloaded concurrently per host. If you spread your files over 2 hosts, your browser will open twice the number of connections to download them. For most sites which don't have too many images this is not a problem, but a website heavy on AJAX should consider it (a small sharding sketch follows this item).
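
        Picking the host deterministically matters: if the same image maps to a different host on every page view, the browser cache is defeated. A minimal sketch (the shard hostnames are invented):

          import zlib

          SHARDS = ["img1.example.com", "img2.example.com"]   # hypothetical asset hosts

          def asset_url(path: str) -> str:
              # Map each asset to a stable shard so browser caching still works.
              shard = SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]
              return f"https://{shard}{path}"

          print(asset_url("/images/logo.png"))   # always the same host for this path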

      • Latency
        Every request has a latency associated with it. If you have the option of setting up multiple datacenters, optimize for latency rather than distance from the customer's location. If buying a leased pipe from the customer's location to your datacenter is possible, that would be the closest to a perfect solution you can achieve. The only thing better than that is moving the datacenter to the customer's location.
        If you can't do either of these, think about using services like Akamai, which cache and serve objects from a server nearest to the customer.

      • Caching
        Caching is a great feature. If a customer already has an image file, there aren't many good reasons why that image should be requested again and again. Set up caching on your web server; on Apache it can be done using mod_expires (a sample configuration follows this item). If you have a dynamic web application, set it up such that dynamic content is not negatively affected by the caching feature.
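
        As a sketch, the Apache side might look like the following mod_expires configuration (the lifetimes are arbitrary examples; tune them to your content):

          # Assumes mod_expires is loaded
          ExpiresActive On
          ExpiresByType image/png  "access plus 1 month"
          ExpiresByType image/jpeg "access plus 1 month"
          ExpiresByType text/css   "access plus 1 week"
          ExpiresDefault           "access plus 1 hour"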

      • Compression
        Many of you may not be aware that many websites (if not most) already do data compression on the fly. If you have applications which are bandwidth-intensive, enabling compression can speed up the UE (user experience) and save you a bunch of money at the same time. Remember, however, that there is a computational expense at the server end to compress content on the fly. If the servers are already very loaded, think about deploying a cluster of SSL accelerators which can take over that load.

      • Keepalives
        As noted earlier, HTTP keepalives let the browser send multiple requests over one TCP connection, saving handshake round trips; enable them on your web servers as long as your loadbalancer handles them correctly.

      • Browser threads
        Browsers open only a limited number of simultaneous connections per host, which is exactly why the multiple-domains trick above helps.



    • SSL Accelerators

      • Compatibility
        If your application might require SSL acceleration at some point, design your architecture assuming that you need it right away. SSL is a CPU-intensive process which is usually not done by the loadbalancer, though there are a few which do. The decision to buy a loadbalancer with or without SSL built in depends purely on the traffic one is expecting over time. Because the throughput of a loadbalancer is usually much higher than that of an SSL accelerator, a solution where the loadbalancer and SSL sit in the same box might be more expensive to scale than one where SSL and LB are separate components in the network.
        If you plan to separate your LB and SSL infrastructure, one additional issue you will have to deal with is their compatibility. The devices we initially selected for LB and SSL worked together very well, until we switched on VRRP and all hell broke loose. Unless you have a lot of time and resources, you may be better off going with a combination of solutions which has been implemented before instead of picking a new pair of vendors.

      • One-arm or in-line configuration
        When you draw up the network diagram, another question you will ask yourself is whether to deploy SSL in a "one-arm" or "in-line" configuration. In the in-line configuration, all requests go through the SSL accelerator before they hit the loadbalancer. In the one-arm configuration, all traffic hits the loadbalancer first, which then decides whether to send the traffic to the SSL box. If you are a financial site which does all its work over SSL, you might want to investigate the in-line configuration, but for the rest of us one-arm might be more suitable.





