Understanding Latency Based Routing of Amazon Route 53 - Part 1

17th Aug 2012

DNS is a globally distributed service that translates human-readable names like www.xyz.com into numeric IP addresses like 192.0.2.2. DNS servers translate requests for names into IP addresses, controlling which server an end user will connect to when they type a domain name into their web browser. In the AWS world, this function is provided by Amazon Route 53.
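This name-to-address translation is exactly what a resolver call performs. The sketch below uses Python's standard-library resolver; "localhost" is used only so it runs without network access, and a real hostname like www.xyz.com would trigger an actual DNS lookup.

```python
import socket

def resolve(hostname):
    """Translate a hostname into its IP addresses, as a DNS client would."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry's sockaddr tuple starts with the IP address string.
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally; swap in a real domain for a live DNS query.
addresses = resolve("localhost")
print(addresses)
```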

Route 53 is a scalable and authoritative Domain Name System (DNS) web service. Route 53 responds to DNS queries using a global network of authoritative DNS servers, which reduces latency. It also provides secure and reliable routing to our infrastructure that uses Amazon Web Services (AWS) products, such as Amazon Elastic Compute Cloud (Amazon EC2) and Elastic Load Balancing.

AWS recently enhanced Route 53 with the ability to do latency-based routing (LBR), which serves user requests from the AWS region with the lowest network latency.

If our application is hosted on Amazon EC2 instances in multiple AWS regions, we can reduce latency for our end users by serving their requests from the EC2 region with the lowest network latency. Route 53 LBR lets us use DNS to route end-user requests to the AWS region that will give our application users the fastest response. This helps improve our application's performance for a global audience.
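In Route 53 terms, LBR means creating several record sets with the same name, each tagged with a region and a set identifier. The following is a minimal sketch that builds such a change batch; the hostname, IP addresses, and hosted-zone ID are illustrative placeholders, and the commented boto3 call shows how the batch would be submitted.

```python
def latency_change_batch(name, endpoints):
    """Build a Route 53 ChangeBatch with one latency record per region.

    endpoints: mapping of AWS region -> (set identifier, IP address).
    """
    changes = []
    for region, (set_id, ip) in endpoints.items():
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": set_id,  # distinguishes records sharing a name
                "Region": region,         # Route 53 answers with the lowest-latency region
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        })
    return {"Comment": "Latency-based routing", "Changes": changes}

# Placeholder domain and IPs for the cruise-line example.
batch = latency_change_batch("www.xyz-cruise.example.", {
    "us-west-1": ("us-west", "192.0.2.10"),
    "eu-west-1": ("eu-west", "198.51.100.20"),
})
# With boto3 this batch would be submitted as:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z...", ChangeBatch=batch)
print(len(batch["Changes"]))  # 2
```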

Let us explore this feature with a use case:

Imagine XYZ Cruise Lines, a company from Florida, USA, which operates 50+ destinations spanning 10+ countries in the region and is rapidly expanding every year. The majority of their bookings and business happens through online and mobile channels. Their website is visited by users from Europe, the USA, Canada, South America, and the Middle East throughout the year. During promotions, holiday seasons, and similar events, they receive visitors from even more locations around the world. Given this heavy dependence on the online channel, their site needs to be highly available and scalable, with good performance. The online cruise company has hosted their application on AWS.

Like most companies, the online cruise company has followed a simple and common infrastructure architecture pattern for their website: a centralized architecture.


[Figure: Centralized architecture]

The entire web/app infrastructure is provisioned inside a single AWS region (for example, the US West region). User requests originating from all around the world are directed to this centralized infrastructure in a single region. This architecture may not be optimal when the user base is distributed across multiple geographies: users accessing the site from different geographies will see different response times because of network latency, and there is a single point of failure if the network link to that Amazon EC2 region is broken (though the latter is a very rare occurrence). For example, users from the USA will get faster response times from AWS US West servers than users from the Europe and MEA regions, who might feel the latency creeping up.

Generally, in an online travel application (like xyz-cruise.com) some functions are far more heavily used than others. Functions like ticket search, user/product details, and discounts and offers are accessed much more often than the rest. These modules need to be highly scalable and available so that visitors can consume information at any time. Since they are also the customer-facing pages/catalogues, these functions are usually designed with heavy content and graphics; performance becomes critical with such a content-heavy design, which adds to latency.

Also, millions of hits land on the search and product-view functions, but only a small percentage (a few thousand) of them convert into tickets, so it is critical for the online travel company to architect these functions and customer-facing pages for a proper user experience and fast performance. For every extra second these pages take to load, thousands of customers lose patience and leave the site, and as a result millions of dollars' worth of business is lost in a year. The following diagram illustrates a sample list of functions provided by the online cruise company.


[Figure: Sample functions of the online cruise company, grouped by geographic distribution]

On the other hand, functions like booking and payments are used by only a few thousand users a day (compared to the millions of hits to landing and search pages). They are not graphics heavy, but are usually secured using SSL/HTTPS. Users can afford to wait a few seconds on these pages, because security, availability, and data integrity take precedence for these functions. Depending upon the use case, these functions can be exposed over the internet as HTTP/S pages or as web services (REST, SOAP) for consumption by other services. Also, since these functions need not be regionally distributed (depending on the use case), they can be shared and served from a common region/location.

Since the online cruise company follows a simple centralized architecture with its infrastructure in US West, while its visitors are distributed across the globe, let us look at the round-trip latency measurements to access their website in this centralized setup:

Online cruise website hosted in the US West region (centralized architecture):

  • From Japan: 113 ms
  • From Germany: 178 ms
  • From New York: 76 ms
  • From Rio: 101 ms

Note: The above measurements are not constant and may keep varying every few seconds.
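Measurements like the ones above can be approximated by timing a TCP connection to each regional endpoint. Below is a minimal sketch of such a probe; the demo connects to a local listener so it runs self-contained, and the commented hostname is a hypothetical regional endpoint.

```python
import socket
import time

def measure_rtt_ms(host, port, timeout=2.0):
    """Return the TCP connect round-trip time to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the handshake completing is the measurement
    return (time.perf_counter() - start) * 1000.0

# Demo against a local listener so the sketch needs no network access.
# In practice you would probe each regional endpoint, e.g.:
#   measure_rtt_ms("us-west.xyz-cruise.example", 443)  # hypothetical host
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
rtt_ms = measure_rtt_ms("127.0.0.1", port)
listener.close()
print(f"round trip: {rtt_ms:.2f} ms")
```

Note that a TCP connect measures one network round trip plus connection setup, so repeated probes averaged over time give a steadier picture than a single sample.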

If a single round trip takes 178 ms from Europe, imagine content-heavy pages loading images, JavaScript, HTTP/S requests, multiple AJAX calls, and so on. The static assets can be delivered using a CDN such as Akamai or CloudFront, but the HTTP or AJAX calls serving the dynamic pages will still face the latency problem. What if the company cannot afford a CDN? How do we improve and solve this latency problem? Solution: a geo-distributed architecture with Route 53 LBR.

About the authors:

Harish Ganesan is the Chief Technology Officer (CTO) and Co-Founder of 8KMiles and is responsible for the overall technology direction of its products and services. He holds a management degree from the Indian Institute of Management, Bangalore and a Master of Computer Applications from Bharathidasan University, India.

Santhosh is a Senior Software Engineer at 8KMiles with 4 years of experience in application development using Amazon Web Services. He was part of the 8KMiles team involved in building the Virtual Computing Environment. He is the go-to engineer for cloud consulting, cloud migration, and performance engineering at 8KMiles. Santhosh holds a Masters in Information Technology from IIIT-Bangalore.
