This article is the second in a series outlining how to set up a highly available, scalable, serverless single-page application (SPA) using AWS S3.
In the previous article I went over the creation of our site hosted in S3.
In this article I will cover:
- Creating a public hosted zone in Amazon Route53 to store DNS records
- Linking this hosted zone to my domain registrar – in my case GoDaddy
- Testing DNS with dig and nslookup
- Creating an alias record set for my website hosted in S3
Route53 for DNS
Using Amazon Route53 to manage your DNS gives a number of advantages over other DNS providers:
- High availability and scalability
- Integrates seamlessly with AWS services such as S3 website hosting and Elastic Load Balancing (ELB)
- Provides features such as latency-based and geolocation routing
- Offers health checks, monitoring and DNS failover, even for applications not hosted in AWS
Cost of Route53
Route53 is a fairly cost-effective service. You are charged per hosted zone and per DNS query served. This means that in some cases you can reduce costs by increasing the TTL of your record sets. However, in cases such as Elastic Load Balancing you may not have the option to set your own TTLs.
Creating the Public Hosted Zone
From the AWS management console go to Route53. From there you can select “Create Hosted Zone”. Ensure that the Type is set to “Public Hosted Zone”.
Once the hosted zone has been created you will be presented with the hosted zone view, containing a default NS record set and a default SOA record set.
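The same hosted zone can also be created from the command line; a minimal sketch, assuming the AWS CLI is installed and configured, and using my domain lula.cloud:

```shell
# Create a public hosted zone (equivalent to the console steps above).
# The caller reference is just a unique string that makes the request idempotent.
aws route53 create-hosted-zone \
  --name lula.cloud \
  --caller-reference "lula-cloud-$(date +%s)"
```

The response includes the four assigned nameservers, which you will need for the registrar step below.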
Pointing GoDaddy Domain to AWS DNS
Now that you’ve created your public hosted zone you need to tell your domain registrar to use the Amazon provided name servers for DNS. From GoDaddy, navigate to your domain DNS settings. In the “Nameservers” section select “Using custom nameservers” and then provide the 4 NS domains listed in your AWS public hosted zone. Once you save, it can take up to 24 hours for the nameserver transition to complete. This duration is based on the SOA record set by GoDaddy.
Testing DNS records with nslookup and dig
Now that the DNS nameservers have changed, how can we check that the propagation has completed? This is where tools such as nslookup and dig come in.
nslookup is a command line tool that comes pre-installed on most Windows and Unix-based systems.
$ # Query the nameserver records for lula.cloud
$ nslookup -query=NS lula.cloud -debug
$ # Query the start of authority (SOA) record for lula.cloud
$ nslookup -query=SOA lula.cloud -debug
$ # Query the naked domain DNS for lula.cloud
$ nslookup lula.cloud -debug
From the outputs you can tell whether your DNS is pointing to GoDaddy or Amazon Route53.
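dig performs the same queries with more compact output; a sketch of the equivalent commands, using the `+noall +answer` flags to print only the answer section:

```shell
# Query the nameserver (NS) records for lula.cloud
dig NS lula.cloud +noall +answer

# Query the start of authority (SOA) record for lula.cloud
dig SOA lula.cloud +noall +answer

# Query the naked domain for lula.cloud
dig lula.cloud +noall +answer
```

If the NS answers list the awsdns nameservers rather than GoDaddy's, the transition has reached the resolver you queried.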
TTL and Negative TTL
From the results of nslookup you can see the time to live (TTL) of the records. If the DNS is still pointing to GoDaddy (or your registrar) you will need to wait for the TTL (in seconds) before the DNS servers update the records.
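In dig's answer section the TTL is the second whitespace-separated field of each line. A small sketch of pulling it out, using a hypothetical answer line rather than a live query:

```shell
# A hypothetical dig answer line (fields: name, TTL, class, type, data)
answer="lula.cloud.  300  IN  A  52.218.0.1"

# The TTL is the second field, in seconds
ttl=$(echo "$answer" | awk '{print $2}')
echo "TTL: ${ttl}s"
```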
Negative TTL can be seen as the minimum value when querying the SOA record. If you request a DNS record that is not set (i.e. before creating the record set), DNS servers will cache this negative result for up to this minimum number of seconds. This can be a nuisance when you create the record set and then don’t see the propagation until long after.
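That minimum is the last field of the SOA record data. A sketch of extracting it, using hypothetical SOA data modelled on Route53's defaults:

```shell
# Hypothetical SOA record data: the fields after the serial (1) are
# refresh, retry, expire, and minimum (the negative TTL, in seconds)
soa="ns-1.awsdns-00.com. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400"

# Extract the final field: the negative TTL
negative_ttl=$(echo "$soa" | awk '{print $NF}')
echo "Negative TTL: ${negative_ttl}s"
```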
Creating the Alias Record to the S3 Website
Once the DNS has propagated you can create a record to point to the S3 website. From the Hosted Zone view create a new Record Set.
Leave the name field blank (we want to set the naked domain). Choose Type as A record. Select Alias, and then from the Alias Target dropdown select the S3 bucket where your site is hosted. Leave the rest as defaults and save the Record Set.
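The same alias record can be created from the AWS CLI with a change batch; a sketch, where ZONE_ID is a placeholder for your hosted zone's ID, and the alias target assumes an S3 website endpoint in us-east-1 (the HostedZoneId shown is the fixed ID Amazon publishes for that region's S3 website endpoints):

```shell
# UPSERT an A-record alias for the naked domain pointing at the S3 website endpoint.
# ZONE_ID is a placeholder: replace it with your own hosted zone's ID.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "lula.cloud",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```

Note that alias records point at the regional S3 website endpoint, not at the bucket name itself; the bucket is selected because its name matches the record name.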
Accessing the Site
Once the new DNS record has propagated you can navigate to the new domain name in your browser.
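A quick browser-free check is to fetch just the response headers; S3 website endpoints serve plain HTTP, so the http:// scheme is used here:

```shell
# Confirm the domain resolves and the request is answered by S3
curl -sI http://lula.cloud
```

A 200 status along with a `Server: AmazonS3` header indicates the alias record is resolving to the bucket.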
In future articles I will cover:
- Securing the site with an HTTPS certificate provided by Let's Encrypt
- Improving loading times by setting up a CDN with AWS CloudFront
- Implementing a deployment pipeline to automatically deploy changes from GitLab