Previously, in Intro and First Steps, we configured an S3 bucket as a web host, but it’s still missing many features. In this article, we’ll replace the scary-looking endpoint URL, http://bucket-name.s3-website-Region.amazonaws.com, with a readable, memorable domain like http://example.com.
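To see how that endpoint URL is put together, here’s a minimal sketch. The helper function is hypothetical, and the bucket name and region are examples, not real resources:

```python
# Hypothetical helper showing how the S3 website endpoint is assembled.
# Note: most regions use the dash form shown in the docs
# (bucket-name.s3-website-Region.amazonaws.com); some newer regions
# use a dot before the region instead.
def s3_website_endpoint(bucket: str, region: str) -> str:
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

print(s3_website_endpoint("example.com", "us-east-1"))
```

Swapping that whole mouthful for example.com is what the rest of this article is about.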
There are many ways to set up a domain with Route 53. Of course, the documentation will always be the most up-to-date resource for setting up Route 53. For this series, we’ll configure Route 53 as a DNS service.
First, we’ll have to acquire our domain from a registrar, and then we can configure it to use the Route 53 nameservers. Finally, we can finish up by creating records for our S3 bucket in Route 53.
So why not use a more trendy or poignant domain as an example? As it turns out, the Internet Assigned Numbers Authority (IANA) reserves domains like example.com, along with several top-level domains, for documentation purposes.
Reserved domains are great for internal documentation to prevent accidentally using a valid address when configuring something like a workspace or VPN.
That’s not all - the Internet Engineering Task Force (IETF) did the same by defining reserved IP address blocks.
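For instance, RFC 5737 sets aside 192.0.2.0/24 (TEST-NET-1) for documentation. A quick sketch with Python’s standard ipaddress module shows how to check whether an address falls inside one of these reserved blocks; the specific address tested is just an example:

```python
import ipaddress

# RFC 5737 reserves 192.0.2.0/24 (TEST-NET-1) for documentation,
# so addresses in it will never route to a real host.
test_net_1 = ipaddress.ip_network("192.0.2.0/24")

print(ipaddress.ip_address("192.0.2.42") in test_net_1)  # inside the block
print(ipaddress.ip_address("8.8.8.8") in test_net_1)     # a real, routable address
```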
What’s in a Domain Name?
So, what is a domain name? The details are out of this series’ scope, so I defer to Wikipedia for a detailed explanation about domain names. For this series, a domain name is a convenient way to identify an IP address of a server or Internet resource. Since domain names are unique, it represents our part of the Internet containing anything we want accessible online.
As one might suspect, choosing a domain name is a crucial part of establishing a brand. However, there are some finer points when it comes to selecting a brand-like domain. For example, should we use a long versus a short address, and should it be hyphenated? Christopher Heng covers these details in this article about choosing a good domain name.
Now that we know what domain we want, we have to secure it for our brand.
Don’t want to use a DNS? The article CentOS 7 AMI Web and Mail Server with DDNS covers what you’d need to set up a DDNS with DuckDNS. However, recall that in Intro and First Steps, we mentioned S3 doesn’t run server-side scripts by itself. So, to use DDNS with S3, we’d need an EC2 instance or a Lambda function to let DuckDNS know what our IP address is when it changes. I’m afraid that’s another Project/Techbit for another day.
Select a Registrar and Purchase a Domain
Once we know which domain we’re looking for, we have to purchase it at a registrar and set it to point to Route 53. As for domain registrars, of course, I recommend NameSilo. We could also register a domain name with Route 53 for a more seamless experience, but be prepared to pay for the convenience, with prices around 30% higher* than NameSilo’s.
That said, we can check for domain availability and configure any of the free features we want, like WHOIS Privacy and Domain Defender from NameSilo. This article has more information about the features NameSilo offers and includes a coupon code for a discount on the next order.
Now that we have a domain, we have to configure Route 53 to use it.
Configure Route 53 as a DNS for S3
Before we can tell our registrar what DNS servers to use, we first have to configure Route 53 as a DNS. Of course, the documentation for configuring Route 53 as a DNS will be the most up-to-date, but we’ll go over some of the highlights.
The documentation describes two options for making the switch. The first option has more steps but provides a smoother transition for domains actively receiving traffic by lowering the Time To Live (TTL) before switching. Lowering the TTL keeps DNS resolvers from caching the old result for too long, which would prevent visitors from routing to the new web host.
The second option doesn’t change the TTL at all, so the DNS change will slowly propagate across DNS resolvers as their caches expire.
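To make the TTL trade-off concrete, here’s a toy resolver cache. The class and names are illustrative only, but the logic mirrors what real resolvers do: while a cached answer’s TTL hasn’t expired, visitors keep getting the old host, no matter what the authoritative server now says.

```python
# Toy resolver cache illustrating why a long TTL delays a DNS migration.
class ResolverCache:
    def __init__(self):
        self._cache = {}  # name -> (answer, expires_at)

    def resolve(self, name, lookup, ttl, now):
        cached = self._cache.get(name)
        if cached and now < cached[1]:
            return cached[0]          # TTL not expired: serve the cached answer
        answer = lookup(name)         # otherwise, ask the authoritative server
        self._cache[name] = (answer, now + ttl)
        return answer

cache = ResolverCache()
answers = {"example.com": "old-host"}
print(cache.resolve("example.com", answers.get, ttl=86400, now=0))      # old-host
answers["example.com"] = "new-host"  # we switch web hosts...
print(cache.resolve("example.com", answers.get, ttl=86400, now=3600))   # still old-host
print(cache.resolve("example.com", answers.get, ttl=86400, now=90000))  # new-host
```

With a day-long TTL, visitors can see the old host for up to a day after the switch; lowering the TTL beforehand shrinks that window.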
If this is a new domain, we can go with the second option. If we’re changing web hosts or DNS service providers, then we should choose the first option. Regardless, Route 53 generates an NS record with the name servers when we create a hosted zone. We must add these name servers to our domain via the domain registrar. For example, NameSilo has a NameServer Manager† that can bulk update domains.
Next, we have to create DNS records in Route 53 to define where the main domain (example.com) and any subdomains (www.example.com) should route traffic.
Afterward, we can configure Route 53 to route traffic to an S3 bucket. AWS also makes this process seamless because Route 53 recognizes internal resources and makes them available via a drop-down menu.
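Under the hood, the console’s drop-down creates an alias A record. For those curious what that looks like programmatically, here’s a sketch of the change batch that boto3’s route53 change_resource_record_sets() call expects. The domain and region are examples; the S3 website hosted zone ID is the fixed, per-region value AWS publishes (the one below is for us-east-1 at the time of writing, so check the S3 endpoints documentation for your region):

```python
# Assumed per-region constant from AWS's S3 website endpoint table
# (us-east-1 shown; other regions have different IDs).
S3_WEBSITE_ZONE_ID = "Z3AQBSTGFYJSTF"

def alias_change_batch(domain: str, region: str) -> dict:
    """Build the ChangeBatch for an alias A record pointing at an
    S3 website endpoint, matching what the console creates."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": S3_WEBSITE_ZONE_ID,
                    "DNSName": f"s3-website-{region}.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = alias_change_batch("example.com", "us-east-1")
print(batch["Changes"][0]["ResourceRecordSet"]["AliasTarget"]["DNSName"])
```

A real run would pass this batch to boto3.client("route53").change_resource_record_sets() along with your hosted zone ID; the console just does all of that for us.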
Unlike the documentation, we shouldn’t point the www subdomain at the same content as the primary domain. If we do, search engines will treat them as two separate websites, and splitting our traffic splits our search engine rank between our “two” websites.
The better option is to redirect our www subdomain to our primary domain so that search engines always navigate to the primary address. However, an S3 bucket acting as a host can’t also redirect to another domain. Now what?
Subdomain Redirect with an S3 Bucket
That’s right - we can handle the redirect with a second S3 bucket for the subdomain. We didn’t cover this option in Intro and First Steps while configuring an S3 bucket as a web host, but a bucket can also redirect endpoint requests to another host. As you might’ve guessed, the documentation will be more up-to-date, but we’ll summarize the steps.
Similar to Intro and First Steps, we’ll create another S3 bucket with a www. prefix, like www.example.com. This time, in the Hosting type section, we’ll select Redirect requests for an object. Next, we’ll enter example.com as the Host name and set Protocol to none.
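For reference, the console’s Redirect requests for an object setting corresponds to this website configuration in boto3’s s3 put_bucket_website() call. The bucket and host names are examples, and Protocol is omitted to match choosing none in the console:

```python
# Sketch of the website configuration behind the console's
# "Redirect requests for an object" option. Omitting "Protocol"
# matches setting it to none in the console.
def redirect_config(target_host: str) -> dict:
    return {"RedirectAllRequestsTo": {"HostName": target_host}}

config = redirect_config("example.com")
# A real call (requires boto3 and credentials) would look like:
# boto3.client("s3").put_bucket_website(
#     Bucket="www.example.com", WebsiteConfiguration=config)
print(config)
```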
But that’s not all! Since we created another bucket, we have to update the A record for the subdomain in Route 53 to point to the new bucket.
Well, that’s it: our static website now has a memorable domain name. Not only that, the www. prefix now redirects to our prefixless domain (or vice versa) to improve SEO.
However, it’s still missing some features we’d expect a website to have. Despite all our effort to date, it’s still only accessible using the HTTP protocol instead of the HTTPS protocol.
While we technically have two buckets now, there are still latency and DDoS concerns because they’re only in one region. Not to keep adding to the problems, but if we move or rename an article, visitors will hit a 404 error page.
As the saying goes, “There’s no rest for the weary,” but not to worry - we’ll keep on keeping on! Next up, we’ll cover SSL certificates and caching with AWS Certificate Manager and CloudFront. Finally, we’ll cover adding redirects and security headers with Lambda@Edge. So keep on staying tuned!
*: Based on current prices at the time of this writing.
†: This site participates in affiliate programs. In other words, by clicking on this link and placing an order, I get a small credit to my account. That helps offset what it takes to maintain this site and is much appreciated.