Geo Distributed Architecture (Using Route53 LBR) - Part 2
Tuesday August 21, 2012, 5 min read
In this architecture, the web/app infrastructure of the online cruise website is geo distributed across multiple AWS regions (e.g. Europe, Sao Paulo, US-West, etc.).
User requests originating from around the world are directed to the nearest AWS region or, more precisely, to the AWS region with the lowest network latency. For example, suppose we have load balancers in the Sao Paulo, US-West and Europe regions, and we have created a latency resource record set in Route 53 for each load balancer. An end user in Paris enters the name of our domain in their browser, and the Domain Name System routes the request to a Route 53 name server. Route 53 refers to its data on latency between Paris and the Europe EC2 region and between Paris and the Sao Paulo/US EC2 regions. If latency is lower between Paris and the Europe region (as it is most of the time), Route 53 responds to the end user's request with the IP address of our load balancer in the Europe EC2 region. If latency is lower between Paris and the Sao Paulo region, Route 53 responds with the IP address of the load balancer in the Sao Paulo region. This architecture cuts down latency sharply and gives the user a better experience. Also, in case one of the regions is facing network problems, requests can be routed to an alternate low-latency region, achieving high availability at the overall application level. Though this architecture has benefits, it comes with various complexities depending upon the use case and technical stack used; we will uncover some of them below.
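As a minimal sketch of how such latency record sets could be created, the snippet below uses the Python boto3 SDK to register latency-based alias records pointing at ELBs in two regions. The hosted zone ID, domain name and ELB DNS names/zone IDs are hypothetical placeholders, not values from this article.

```python
# Sketch: create Route 53 latency-based alias records for ELBs in two regions.
# Hosted zone ID, domain and ELB details below are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1EXAMPLE"           # hypothetical Route 53 hosted zone
DOMAIN = "www.example-cruise.com."     # hypothetical domain name

# (region, ELB DNS name, ELB canonical hosted zone id) -- placeholders
ELBS = [
    ("eu-west-1", "cruise-eu.eu-west-1.elb.amazonaws.com", "Z32O12XQLNTSW2"),
    ("sa-east-1", "cruise-sa.sa-east-1.elb.amazonaws.com", "Z2P70J7HTTTPLU"),
]

changes = []
for region, elb_dns, elb_zone_id in ELBS:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": DOMAIN,
            "Type": "A",
            "SetIdentifier": region,           # one record set per region
            "Region": region,                  # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,  # skip the ELB if it is unhealthy
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Latency-based routing to regional ELBs", "Changes": changes},
)
```

With these record sets in place, Route 53 answers each DNS query with the alias target in whichever region has the lowest measured latency to the resolver.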
The following diagram illustrates a simple multi-tiered technical stack on AWS used by the online cruise company.
- Route 53 is used as the DNS Layer
- Amazon CloudFront is used for delivering static assets
- Latency based routing is configured on Route 53
- Elastic Load Balancers are created in the US-West, Europe and Sao Paulo regions
- Latency record sets are created in Route 53 and pointed to these Elastic Load Balancers in different regions
- Web/App EC2 instances are launched with Amazon Auto Scaling across multiple Availability Zones. The infrastructure in every region is individually elastic and auto scaled; it can seamlessly expand to handle the traffic from a single geography, and during regional outages it can also handle traffic directed from other regions.
- RDS MySQL can be used in Multi-AZ mode for storing data inside a single region. In case multi-region synchronization of data is required, it is suggested to run MySQL on raw EC2 instances in master-master (M-M) or master-slave (M-S) replication mode.
- All the HTTP and AJAX calls are resolved through Route 53 and directed to an ELB. Amazon ELB balances the requests across the Web/App EC2 instances based on the configured algorithm. The Web/App EC2 instances access the database, process the result and send the response. Dynamic content is delivered from the Web/App EC2 instances and static assets from the CloudFront CDN. Data that requires synchronization is replicated between AWS regions using SQS plus custom programs, web services or MySQL asynchronous replication (a sketch of the SQS-based approach follows this list).
- Centralized or shared services (booking, payments etc.) can be accessed by the Web/App EC2 instances over the Internet using HTTP/S web services. Shared services can be implemented in one of the AWS regions or in an external data center as well.
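To illustrate the SQS-based synchronization mentioned above, here is a minimal sketch of one possible custom replication program: a producer in the source region publishes change events to a queue in the target region, and a consumer in that region polls the queue and applies the changes locally. The queue name and the apply_change function are hypothetical.

```python
# Sketch: cross-region data sync via SQS + a custom program.
# Queue name and apply_change() are hypothetical placeholders.
import json
import boto3

QUEUE_NAME = "cruise-replication-events"   # hypothetical queue in the target region

def publish_change(event, target_region="sa-east-1"):
    """Producer (source region): push a change event to the target region's queue."""
    sqs = boto3.resource("sqs", region_name=target_region)
    queue = sqs.get_queue_by_name(QueueName=QUEUE_NAME)
    queue.send_message(MessageBody=json.dumps(event))

def apply_change(event):
    """Hypothetical: apply the replicated change to the local database."""
    print("applying", event)

def consume_changes(local_region="sa-east-1"):
    """Consumer (target region): poll the queue and apply changes locally."""
    sqs = boto3.resource("sqs", region_name=local_region)
    queue = sqs.get_queue_by_name(QueueName=QUEUE_NAME)
    while True:
        for message in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=10):
            apply_change(json.loads(message.body))
            message.delete()   # remove only after the change has been applied

if __name__ == "__main__":
    publish_change({"table": "bookings", "op": "insert", "id": 42})
```

The queue acts as a durable buffer between regions, so a temporary outage on either side delays replication rather than losing updates.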
Round Trip Latency measurements after Geo Distribution and Route 53 LBR
Note: The above measurements are not constant and may keep varying every few seconds.
From the above table we can observe that, with the geo distributed architecture, HTTP/S and AJAX calls are served from the AWS region with the lowest latency (usually the nearest region); round trip latency has dropped significantly and performance has improved for the users.
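A rough way to reproduce such round trip measurements yourself is to time an HTTP request against each regional endpoint, as in the small sketch below using the Python standard library. The endpoint URLs are hypothetical; substitute your own ELB DNS names.

```python
# Sketch: measure round trip latency to hypothetical regional ELB endpoints.
import time
import urllib.request

# Hypothetical per-region endpoints; replace with your own ELB DNS names.
ENDPOINTS = {
    "us-west-1": "http://cruise-us.us-west-1.elb.amazonaws.com/health",
    "eu-west-1": "http://cruise-eu.eu-west-1.elb.amazonaws.com/health",
    "sa-east-1": "http://cruise-sa.sa-east-1.elb.amazonaws.com/health",
}

for region, url in ENDPOINTS.items():
    start = time.time()
    try:
        urllib.request.urlopen(url, timeout=5).read()
        print(f"{region}: {(time.time() - start) * 1000:.0f} ms")
    except OSError as exc:
        print(f"{region}: unreachable ({exc})")
```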
How much does it cost to use Route 53 and LBR?
Hosted Zones
$0.50 per hosted zone / month for the first 25 hosted zones
$0.10 per hosted zone / month for additional hosted zones
Standard Queries
$0.500 per million queries – first 1 Billion queries / month
$0.250 per million queries – over 1 Billion queries / month
Latency Based Routing Queries
$0.750 per million queries – first 1 Billion queries / month
$0.375 per million queries – over 1 Billion queries / month
Alias queries for ELBs are free of charge
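As a worked example of these prices, the small calculation below estimates a monthly Route 53 bill for a hypothetical workload of one hosted zone and 200 million latency based routing queries.

```python
# Worked example: estimate a monthly Route 53 bill for a hypothetical workload
# of 1 hosted zone and 200 million latency based routing queries.

hosted_zones = 1
lbr_queries_millions = 200          # 200 million LBR queries in the month

zone_cost = hosted_zones * 0.50     # first 25 hosted zones: $0.50 each / month

# First 1 billion LBR queries are billed at $0.750 per million,
# anything beyond that at $0.375 per million.
lbr_cost = min(lbr_queries_millions, 1000) * 0.750 \
    + max(lbr_queries_millions - 1000, 0) * 0.375

total = zone_cost + lbr_cost
print(f"Hosted zones: ${zone_cost:.2f}, LBR queries: ${lbr_cost:.2f}, total: ${total:.2f}")
# -> Hosted zones: $0.50, LBR queries: $150.00, total: $150.50
```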
What are the key benefits of using Route 53 Latency Based Routing?
- Better performance than running in single AWS Region
- Improved reliability relative to running in a single region
- Easier implementation and much lower prices than traditional DNS solutions
What are the negatives of using Route 53 + Geo Distributed Architecture?
- Route 53 LBR itself does not give much to crib about, but on the whole I would be happy if the AWS Route 53 team brings directional traffic routing and the other algorithms available in products like UltraDNS to its portfolio in the coming months. This will give architects better control in designing High Availability, Disaster Recovery and Geo Distributed solutions for more use cases.
- The sample tech stack architecture I have illustrated may not be applicable for some use cases. In scenarios where there are more complex systems like search servers, caches, NoSQL stores, queues, ESB servers etc. in your stack, things might get very complicated while designing Geo Distributed solutions.
- Since most AWS services operate at regional scope, this may pose problems while designing a Geo Distributed architecture. Refer to http://harish11g.blogspot.in/2012/06/aws-multi-region-high-availability.html where I have detailed some points we faced earlier while designing multi-region AWS deployments.
About the authors:
Harish Ganesan is the Chief Technology Officer (CTO) and Co-Founder of 8KMiles and is responsible for the overall technology direction of its products and services. He holds a management degree from the Indian Institute of Management, Bangalore and a Master of Computer Applications from Bharathidasan University, India.
Santhosh is a Senior Software Engineer at 8KMiles with 4 years of experience in application development on Amazon Web Services. He was part of the 8KMiles team involved in building the Virtual Computing Environment. He is the go-to engineer for cloud consulting, cloud migration and performance engineering at 8KMiles. Santhosh holds a Master's degree in Information Technology from IIIT-Bangalore.