In our previous blog, we integrated Amazon Elastic File System with our EC2 instances. The concept of EFS comes in very handy when we want to run multiple instances in our virtual private cloud, all accessing the same data in a consistent manner. So, an obvious question arises here: why on earth would we want to run the same server over multiple instances, delivering the same content? The answer is redundancy. However robust the architecture and however precise the configuration of our server, we should always ensure redundancy. It is key to higher fault tolerance. We can divide our traffic among various instances and manage our load as per their availability. The term coined for this approach is load balancing.
Amazon Web Services provides its own load balancer, which forwards traffic across multiple targets (say, EC2 instances) depending upon their availability. It is termed the Elastic Load Balancer (ELB). ELB can automatically scale its request handling capacity to meet the demands of incoming traffic, justifying the term elastic associated with it. It monitors the health of registered targets and routes traffic only to the healthy ones. It also serves as a single endpoint for clients, and it can add or remove instances as needed, which ensures high availability for your server or application.
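To make the idea of health-check-based routing concrete, here is a minimal sketch in Python of how a load balancer distributes requests round-robin across only its healthy targets. The instance IDs and the health map are hypothetical; a real ELB derives health from periodic health checks against each registered target.

```python
from itertools import cycle

def healthy_targets(targets, health):
    """Return only the targets whose last health check passed."""
    return [t for t in targets if health.get(t)]

def route_requests(requests, targets, health):
    """Round-robin each request across the currently healthy targets."""
    pool = healthy_targets(targets, health)
    if not pool:
        raise RuntimeError("no healthy targets registered")
    rr = cycle(pool)
    return [(req, next(rr)) for req in requests]

# Hypothetical instance IDs and health-check results.
targets = ["i-0aaa", "i-0bbb", "i-0ccc"]
health = {"i-0aaa": True, "i-0bbb": False, "i-0ccc": True}

assignments = route_requests(["r1", "r2", "r3"], targets, health)
# r1 -> i-0aaa, r2 -> i-0ccc, r3 -> i-0aaa (i-0bbb is skipped as unhealthy)
```

The key point is that the unhealthy target never receives traffic; once its health check passes again, it simply re-enters the pool.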
Apart from the benefits of high availability and elasticity, ELB provides robust networking and security features. Your load balancer can be internet-facing (public), internal (private), or both (in a multi-tier architecture). A multi-tier architecture uses internal and public load balancers together to route traffic between application tiers: the application infrastructure can use private IP addresses and security groups, exposing only the internet-facing tier with public IP addresses. ELB also integrates with AWS Certificate Manager to enable SSL for your site or application. You get integrated certificate management, managed certificate renewal and deployment, and SSL decryption, allowing you to centrally manage the SSL/TLS settings of the load balancer.
Best Practices for using Elastic Load Balancer
- A higher level of fault tolerance can be achieved by placing our targets in multiple Availability Zones. If one Availability Zone becomes unreachable, the load balancer routes traffic to the available instances in another Availability Zone.
- Amazon Route 53 provides health checks and DNS failover features to ensure the availability of applications running behind Elastic Load Balancers. Route 53 stops directing traffic to a load balancer if there are no healthy instances registered with it, or if the load balancer itself is unhealthy. Using these features, we can run our servers in multiple regions and assign alternate load balancers across regions for failover. If a load balancer becomes unhealthy or unavailable, Route 53 removes that load balancer endpoint from service and routes the traffic to an available load balancer in another region.
- Amazon EC2 Auto Scaling also ensures a specific count of healthy instances. It can launch instances on demand when configured conditions are met. Auto Scaling configured with our ELB provides better availability, fault tolerance, and cost management. It is cost effective in the sense that we pay only for the instances needed at any given moment.
- Assign security groups to your ELB to control exposed ports from a security point of view. Each subnet for your load balancer should have a CIDR block with at least a /27 bitmask and at least 8 free IP addresses. The load balancer uses these IP addresses to establish connections with instances.
- Enable HTTP keep-alive in your web server, as it lets the load balancer reuse connections to instances, which reduces CPU utilization. The HTTP keep-alive time should be greater than the idle timeout setting on the load balancer.
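The Auto Scaling point above can be sketched as a simple capacity calculation: given the current load and an estimated per-instance capacity, compute how many instances are needed, clamped to the group's configured minimum and maximum size. The numbers below are hypothetical, chosen purely for illustration.

```python
import math

def desired_capacity(current_load, per_instance_capacity, min_size, max_size):
    """Instances needed for the current load, clamped to the
    Auto Scaling group's configured min/max size."""
    needed = math.ceil(current_load / per_instance_capacity)
    return max(min_size, min(max_size, needed))

# Hypothetical figures: 950 req/s, each instance handles ~200 req/s.
desired_capacity(950, 200, min_size=2, max_size=10)   # -> 5
desired_capacity(10, 200, min_size=2, max_size=10)    # -> 2 (never below min)
desired_capacity(5000, 200, min_size=2, max_size=10)  # -> 10 (never above max)
```

This is also where the cost benefit comes from: when the load drops, the desired capacity drops with it, and we stop paying for the extra instances.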
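The subnet sizing rule can be checked programmatically. AWS reserves five addresses in every subnet (network address, VPC router, DNS, future use, and broadcast), so a /27 (32 addresses) leaves 27 usable. A minimal sketch using Python's standard `ipaddress` module, with hypothetical CIDR blocks:

```python
import ipaddress

# AWS reserves 5 addresses in every subnet.
AWS_RESERVED_PER_SUBNET = 5

def usable_addresses(cidr):
    """Addresses in the subnet that AWS does not reserve."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses - AWS_RESERVED_PER_SUBNET

def subnet_ok_for_elb(cidr, in_use=0, min_free=8, max_prefix=27):
    """A load balancer subnet needs a /27 or larger block and
    at least 8 free IP addresses."""
    net = ipaddress.ip_network(cidr)
    free = usable_addresses(cidr) - in_use
    return net.prefixlen <= max_prefix and free >= min_free

subnet_ok_for_elb("10.0.1.0/27", in_use=10)  # 27 usable - 10 = 17 free -> True
subnet_ok_for_elb("10.0.2.0/28", in_use=0)   # /28 is smaller than /27 -> False
```

Running a check like this before attaching a subnet to a load balancer avoids a class of hard-to-debug capacity errors.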
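The keep-alive rule reduces to a simple comparison: the web server's keep-alive timeout must exceed the load balancer's idle timeout (60 seconds by default on ELB), otherwise the server may close a connection the load balancer still considers reusable. A trivial sketch of that check, with the 60-second default as an assumption you should verify against your own configuration:

```python
def keep_alive_safe(server_keep_alive_s, elb_idle_timeout_s=60):
    """True if the web server will hold connections open longer
    than the load balancer's idle timeout (ELB default: 60 s)."""
    return server_keep_alive_s > elb_idle_timeout_s

keep_alive_safe(75)  # -> True: server outlives the ELB idle timeout
keep_alive_safe(30)  # -> False: server would close connections too early
```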
Classification of Elastic Load Balancer
Elastic Load Balancer is broadly classified into the Classic Load Balancer and the Application Load Balancer. Quoting from the AWS docs, “Classic Load Balancer routes traffic based on either application or network level information, and the Application Load Balancer routes traffic based on advanced application level information that includes the content of the request. The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while the Application Load Balancer is ideal for applications needing advanced routing capabilities, microservices, and container-based architectures. Application Load Balancer offers ability to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.”
This has been a brief introduction to the Amazon Elastic Load Balancer. In our next blog, we will dive a little deeper into Elastic Load Balancer classification and configuration.