This is part of my series trying out alternative hosting options. I left Amazon until last as they’re the big gorilla in this market. Amazon Web Services (AWS) have been around since 2006 and hold significant market share. As such they’ve been used for a great many things, including hosting static sites.
Simple Storage Service (or S3 to use its more common name) is designed to store files (and there are options about how often they’ll be accessed), but can also be used to host static sites.
Obviously you need an account, and Amazon generously offers a free tier that includes access to all of its products for 12 months. You get the option to select whether it’s a company or personal account. Despite the free tier you still have to enter a credit card. You also have to enter a phone number, which it will call with a PIN to verify.
Then you get the option to pick the support plan you want, from Basic (free) to Developer ($49/month) all the way up to Enterprise ($15,000/month!). That’s it, all done.
Amazon have an article that runs you through hosting a static site. So I started by creating two buckets for my domain (one for the naked domain, one for www). I selected my closest Region (data center) to host it.
Next came uploading my files, which is where most services fall down. The default upload dialog doesn’t support folders, but there is the option to enable the Enhanced Uploader (Beta), which uses Java. It takes (up to) a couple of minutes to enable, but I thought I’d give it a try. It never actually got installed, probably because I don’t have Java enabled in my browser (there was no error, it just sat spinning).
(Note: I missed the glaringly obvious message saying I could drop folders on the upload box.)
There are a number of other options to upload files, depending on your OS. Amazon provides an API and there are a number of command line tools that make use of it, as well as GUI applications. I opted to install s3cmd.
I installed from the repo using the instructions on the website. It took me a while to wrap my head around the usage, but first off you need to run:

s3cmd --configure
where you’ll be prompted for various keys, which you can get by going to Security Credentials in the Amazon Console (click your name, top-right). Once done, use:

s3cmd ls

to list all of your buckets. Then navigate to your site’s folder and use something like:
s3cmd sync * s3://your-bucket-name
and let it do its thing. Note that you only need to upload to the naked domain’s bucket.
In the Console you’ll now need to change the bucket’s permissions by adding a policy (example in Amazon’s article linked above). Then enable website hosting on the bucket, both of which are on the bucket’s properties (right-click the name in the console or click the little document with magnifier icon).
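For reference, the public-read policy in Amazon’s article looks roughly like this (the bucket name is a placeholder you’d swap for your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}
```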
The latter also lets you set the index and error documents to use. I had both an index.html file and a 404.html file to use. You use the same option but set your other bucket (the www one) to redirect all requests to another host name (your naked domain).
Next up is setting up your domain in Route 53, which is necessary if you’re using a naked (root) domain. You’ll need to create a new Hosted Zone and then shift your Nameservers over to Route 53.
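The records you end up with look something like this (domain and region are placeholders here; the naked domain has to be a Route 53 Alias record, since DNS doesn’t allow a CNAME at the zone apex):

```
example.com.      A      Alias to example.com.s3-website-eu-west-1.amazonaws.com
www.example.com.  CNAME  www.example.com.s3-website-eu-west-1.amazonaws.com
```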
And that’s it, you’re up and running.
After testing for speed, I enabled a CloudFront distribution. CloudFront is Amazon’s CDN: it copies your files to edge locations around the world and caches them.
I again ran my standard series of tests once uploaded.
|                   | Original host | Test host | with CloudFront |
|-------------------|---------------|-----------|-----------------|
| PS Rank (Mobile)  | 88/100        | 81/100    | 81/100          |
| PS Rank (Desktop) | 93/100        | 87/100    | 87/100          |
| WPT First Byte    | 0.156s        | 0.578s    | 0.256s          |
| WPT Fully Loaded  | 2.892s        | 2.575s    | 1.801s          |
| WPT Bytes (KB)    | 1,093         | 344       | 345             |
| GTMet Load Time   | 2.5s          | 2.8s      | 2.7s            |
| GTMet Size        | 1.05 MB       | 327 KB    | 327 KB          |
Page Speed and YSlow scores took a small hit when compared to a standard LAMP host.
First Byte also took a small hit, but the total load time improved and, as you can see, the page size dropped dramatically.
Some of the speed differences between using Cloudfront and not may have been down to the time of day I tested.
Amazon doesn’t automatically gzip any of the files it serves, which would help; you need to compress files manually before uploading. You also have to add Cache-Control headers manually.
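A sketch of the workaround, assuming a configured s3cmd and a placeholder bucket name: compress a copy locally, then upload it under the original name with the headers set by hand.

```shell
# Make a stand-in page so the example is self-contained.
printf '<html>hello</html>' > index.html
# S3 serves bytes verbatim, so compress before uploading.
gzip -9 -c index.html > index.html.gz
# Hypothetical upload (needs a configured s3cmd; bucket is a placeholder):
# s3cmd put index.html.gz s3://example.com/index.html \
#   --mime-type='text/html' \
#   --add-header='Content-Encoding: gzip' \
#   --add-header='Cache-Control: max-age=86400'
```

The `Content-Encoding: gzip` header is what tells browsers to decompress the file; without it they’d receive raw gzip bytes.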
| Verdict           | Notes |
|-------------------|-------|
| Set-up difficulty | It took an hour or so and, while reasonably straightforward, was fairly onerous; there seemed to be a lot of steps. I couldn’t have done it without a guide. Adding CloudFront was another big leap. |
| Speed             | Not the fastest host out there, nor does it claim to be. |
| Cost              | A free year is obviously nice, but it would have cost me about $0.70 for a month. |
The set-up process was quite convoluted and harder than it needed to be, with a lot of steps. I thought I had to install a third-party command-line utility to upload files and folders, which seems like a step too far for most users. As it happens, I could simply have uploaded via the web interface.
In terms of pricing, I took advantage of the free tier, so it should have cost me nothing, although I appear to have been billed for Route 53 (a whopping $0.61 inc VAT). My site is so tiny that the cost to serve it would be almost nothing anyway.
Performance-wise it did pretty well, offering reasonable speed, comparable to shared hosting, but not lightning by any stretch of the imagination.
You obviously get the benefit of access to the entire AWS platform, although that probably just adds complexity if all you want to do is host a site. There is a lot you can do with the platform, though. My static site is generated using Hugo, and I found some instructions that would allow me to host everything from the build process to the site itself on AWS without a server (which would otherwise sit idle 99% of the time).
AWS certainly isn’t a bad way to host a static site, and it’s very cost-effective. If you’re tech-savvy, the command line may even be seen as a benefit, as you can automate deploys. It’s not an ideal platform for most people, though.