Review: Amazon, the mother of all clouds
- 13 March, 2013 14:53
Ah, Amazon -- did Jeff Bezos choose that name to symbolize the largest bookstore in the world or did he realize that he would one day create an enterprise cloud service that was as large and complex as the river basin? After spending some time with his enterprise infrastructure service, I think he saw this coming.
Selling servers by the hour was a bold idea when Amazon's cloud business launched a few years ago, but it seems quaint compared to all the options for sale today. There are currently 21 products available on Amazon Web Services, and only one of them is the classic EC2 machine -- the abbreviation stands for Elastic Compute Cloud. The original S3 (Simple Storage Service) now has cousins like the Simple Workflow Service and SimpleDB, a nonrelational data store. Then there are odder innovations like Amazon Glacier, a very cheap storage service that takes hours to retrieve your data. Yes, hours. Not milliseconds, not seconds, not minutes -- but hours.
It's impossible to summarize it all in a paragraph or even an article. Amazon Web Services would require a book, but that tome would be out of date by the time it was printed because the service changes quickly. The best news is that Amazon is constantly looking at costs and generally lowering prices as it finds ways to deliver the product for less. A few prices have occasionally gone up over the years, an effort to bring them in line with actual costs.
Amazon has also found plenty of supporters. A number of big companies such as Netflix are proud of using Amazon's servers, and plenty of startups are glad they didn't need to set up their own data centers to reach for the gold ring of IPO riches. Some customers brag about spending $1 million or more a month, an amount that would be more than enough for most companies to justify setting up an in-house facility and team. Clearly, Amazon is delivering a whole lot of value.
A smorgasbord of possibilities
The vast array of options is probably what keeps people coming back. When I started setting up a few test machines, it was clear Amazon had expanded the options until they no longer seem like commodities. There are at least 16 different sizes of machines. The instances generally bundle more RAM with more CPU cores and more disk space, but you can also choose lopsided versions that are heavier on the RAM, the CPU, or the I/O.
The size is just the first feature you can choose. There's back-end storage that can be mounted, and you can fiddle with the amount of disk space. If you like, you can add EBS (Elastic Block Store), network-attached disk space that lives in the same data center as your instance. This can be faster or slower and backed by more or less RAID protection.
There are so many options that spinning up an Amazon machine is almost as complicated and as flexible as buying a custom server. It's a bit like a toy store because you have to resist the temptation to play with cutting-edge technology -- such as one of the machines jammed full of Nvidia Tesla GPUs ready to run highly parallel algorithms written to Nvidia's CUDA platform. The mind often boggles.
Decoding the pricing table will take some collaboration between the CFO and the CIO. Not only are there 16 different-sized machines, but you can pay to reserve them in advance. If you pay a portion up front, Amazon will cut the hourly price along the way. It's sort of like one of those warehouse clubs where a membership buys you a discount. If you're judicious it's probably worth it, but it will take you some time to predict how much you'll use the machines.
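The warehouse-club analogy can be put in concrete terms with a little arithmetic. This sketch computes the break-even point between on-demand and reserved pricing; the dollar figures in it are hypothetical, chosen only for illustration, so check Amazon's current pricing table before making a real decision.

```python
# Toy break-even calculation for Amazon's reserved pricing model.
# All prices here are hypothetical, for illustration only.

def break_even_hours(on_demand_rate, upfront_fee, reserved_rate):
    """Hours of use at which the reserved plan becomes cheaper."""
    savings_per_hour = on_demand_rate - reserved_rate
    if savings_per_hour <= 0:
        return None  # the reservation never pays for itself
    return upfront_fee / savings_per_hour

# Example: pay $60 up front to drop an (assumed) 8-cent hourly
# rate to 5 cents per hour.
hours = break_even_hours(on_demand_rate=0.08, upfront_fee=60.0,
                         reserved_rate=0.05)
print(round(hours))  # 2000 hours of runtime to recoup the fee
```

If your machines will run more hours than the break-even figure over the reservation's term, the upfront fee is worth it; if not, stick with on-demand.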
The options aren't just in the size or configuration of the machine. The startup process offers a number of sophisticated options for customizing the distro from the beginning. You can, for instance, set up a "security profile" that controls which ports are open or shut immediately. This saves you the trouble of logging in after creating the machine and configuring the ports manually, a feature that's essential if you're going to start and stop dozens, hundreds, or thousands of machines.
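The concept behind a security profile is simple: a named set of rules, decided before boot, that says which ports accept traffic. The toy model below is purely illustrative -- it is not Amazon's actual API, which you would drive through the Web console or its service interfaces -- but it captures the idea of attaching one reusable rule set to many machines.

```python
# Toy model of a "security profile": a named set of rules stating
# which ports are open at boot. Illustrative only -- not the real
# Amazon API.

class SecurityProfile:
    def __init__(self, name, open_ports):
        self.name = name
        self.open_ports = set(open_ports)

    def allows(self, port):
        """True if inbound traffic on this port would be accepted."""
        return port in self.open_ports

# One profile for a typical Web server: SSH plus HTTP/HTTPS.
# Every instance launched with it gets the same ports, no manual
# post-boot configuration required.
web = SecurityProfile("web-server", open_ports=[22, 80, 443])
print(web.allows(80))    # True
print(web.allows(3306))  # False -- the database port stays shut
```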
Benchmarking the cloud
I spent some time running benchmarks on the micro machine, Amazon's low-end model that's supposed to be able to handle bursts of extreme computation. It's intended for people who are either just testing some ideas or building a low-traffic machine. It costs only 2 cents per hour and comes with 613MB of RAM, an odd number that's probably an even fraction of some power of two minus a little overhead.
It was surprisingly hard to find a way to log into the machines. I couldn't get the public/private keys generated by Amazon to work with either PuTTY or the built-in Java-based SSH client. Yet it worked in seconds from my Mac's terminal. I wonder what kind of laptops are popular up at Amazon?
Little issues like this appeared fairly often during my time poking around the cloud. Amazon's Web portal is one of the more sophisticated tools available, offering more extensive diagnostics and hand-holding than the dashboards of competitors, but it is not always foolproof.
For instance, it offers a nice dialog box that helps you connect to your instance immediately over SSH by formatting the command line for you. It worked some of the time for me, but it failed when it tried to get me to log into one of my Ubuntu instances as root, a problem that took five seconds to fix once I remembered that I was supposed to log in as "ubuntu." Any Unix user should be able to work around these tiny glitches. In fact, they're only noticeable because Amazon sets such a high bar with the quality of its portal.
The speed I saw with the machines wasn't very exciting. I tried the DaCapo Java benchmarks, a test suite that includes several computationally intensive tasks, including running a Tomcat server. The results were generally three to five times slower than the low-end machines on Microsoft's Windows Azure and often six to nine times slower than the low-end machines on Joyent's cloud. However, these numbers weren't perfectly consistent. On the Avrora simulation of a sensor network, the EC2 micro machine was faster than Joyent's, and it took only about 45 percent more time to finish than the low-end Azure machine.
The Joyent machines are priced at about 3 cents an hour, a small premium considering the gap in performance. The Azure machines have an introductory price of 1.3 cents per hour -- cheaper than Amazon's micros, though they're dramatically faster.
Bigger, faster, more
For comparison, I also booted up what Amazon calls a high-CPU machine, which offers two virtual cores, each delivering 2.5 ECUs, or Elastic Compute Units, in Amazon parlance. That's five ECUs altogether. The micro machine is supposed to offer two ECUs in bursts, while the high-CPU machine offers five ECUs all of the time. The price is dramatically higher -- 16.5 cents per hour -- but that includes 1.7GB of RAM. Again, what happened to our old friends, the powers of two?
The high-CPU machine was usually six to eight times faster than the micro machine, suggesting that the ECUs are just a rough measurement. The results were close in speed to the Joyent machine and often a bit faster, but at more than five times the price. It's worth noting for algorithm nerds that the DaCapo benchmarks used two threads when possible on the Amazon machine but were limited to one thread on the Joyent and Azure boxes.
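The raw numbers above can be folded into a single cost-per-work figure. This sketch uses the hourly rates from the tests -- 2 cents for the micro, 16.5 cents for the high-CPU machine -- but the 7x speedup is an assumption I've picked from the middle of the measured six-to-eight-times range.

```python
# Rough cost-per-work comparison. The hourly rates come from the
# tests in the article; the 7x relative speed is an assumed midpoint
# of the measured six-to-eight-times range.

def cost_per_work_unit(hourly_rate, relative_speed):
    """Dollars spent per unit of computation completed."""
    return hourly_rate / relative_speed

micro = cost_per_work_unit(0.02, 1.0)      # 2 cents/hour, baseline speed
high_cpu = cost_per_work_unit(0.165, 7.0)  # 16.5 cents/hour, ~7x faster

print(f"micro:    ${micro:.4f} per work unit")
print(f"high-CPU: ${high_cpu:.4f} per work unit")
# The high-CPU machine finishes sooner, but the micro squeezes
# slightly more work out of each penny -- the classic tradeoff.
```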
Once again, this suggests that the algorithm designer, the build master, and the CFO are going to need to sit down and decide whether to buy bigger, faster machines for more money or live with a larger number of slower, cheaper machines.
More fun comes when you start exploring the other corners of the Amazon toy store. The pay-as-you-go Hadoop cloud, called Elastic MapReduce, lets you upload a JAR file, push a button, and start the computational wheels turning. You stick the data in Amazon's storage cloud, S3, and the results show up there when everything is done.
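The map-shuffle-reduce cycle behind a job like this is easy to sketch. The classic example is a word count; the version below simulates the whole cycle locally so you can see the data flow, whereas on a real cluster the mapper and reducer would run as distributed tasks over data pulled from S3.

```python
# A minimal word-count MapReduce pair, simulated locally to show the
# map -> shuffle/sort -> reduce data flow. Illustrative sketch only;
# a real Elastic MapReduce job runs these stages across a cluster.

from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map step: emit a (word, 1) pair for every word seen."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    """Reduce step: sum the counts for each word after the sort."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

corpus = ["the quick brown fox", "the lazy dog"]
counts = dict(reducer(mapper(corpus)))
print(counts["the"])  # 2
```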
There's a separate cloud of machines devoted to Hadoop processing. At least it looks separate, because you buy the compute cycles through a different Web page, but it could all be running in the same floating network of machines. That's the point.
If you want your Hadoop job to begin as soon as a machine is available, you pay the list prices. If you want to gamble a bit and wait for empty machines, the spot market lets you put in a lower bid and wait until spare machines are available for that price. Amazon is experimenting with constantly running an auction for compute power. This is yet another wrinkle for the engineers and the accountants to spend time discussing.
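The bid-versus-wait gamble can be modeled crudely. In the toy below, a job runs only during hours when the market price sits at or below your bid; the price series is entirely hypothetical, and Amazon's real auction is more involved than this, but the sketch shows how a lower bid trades hours of compute for a lower total bill.

```python
# Toy model of the spot-market gamble. Hypothetical prices; the
# real Amazon auction has more moving parts than this.

def hours_won(spot_prices, bid):
    """Return (hours obtained, total cost) for a given bid."""
    won = [p for p in spot_prices if p <= bid]
    return len(won), sum(won)

# Hypothetical hourly spot prices over six hours, in dollars.
prices = [0.03, 0.05, 0.02, 0.08, 0.04, 0.03]

hours, cost = hours_won(prices, bid=0.04)
print(hours)           # 4 -- hours at or under the 4-cent bid
print(round(cost, 2))  # 0.12 -- total spend for those hours
```

Raise the bid and you win more hours sooner; lower it and you wait, but pay less per hour won.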
My favorite, relatively new feature is Amazon Glacier, a backup system that takes hours to recover the data. Amazon designed S3, its first cloud storage solution, to meet the needs of servers that must access data relatively quickly, and many people found it too expensive for backups and other data that isn't accessed very often. One-size-fits-all solutions are one of the limitations of the cloud, and Glacier fills that gap.
As I mentioned before, there's no easy way to cover all of Amazon Web Services in one article like this. The only solution is to wade in, start booting up machines, and begin testing your application. Amazon offers some very basic services for free to help new customers, but for the most part it costs only a few cents to try out the different sizes. Then you can sit down with your accountant and start pricing out the services.
My impression is that Amazon's cloud has evolved into the high-end Cadillac of the breed. It provides extensive documentation, more hand-holding, and more sophisticated features than its rivals, all at a price that is higher than the competition's. Perhaps the competition's rates are only temporary and unsustainable, or perhaps Amazon's rates are simply the price you pay for all of the extra features. Amazon's cloud is loaded with them.