Sunday, March 04, 2007

An Amazon EC2 cluster for BLAST searching ?

I've just been reading about the new Amazon Elastic Compute Cloud (EC2), which is essentially a pay-as-you-go cluster, based on Xen virtual machine images. You can create and upload your own image using their tools, or use one of the pre-rolled GNU/Linux distro images already shared by other users of the EC2 system.

While it seems aimed at web service 'startups' that want a competitively priced hosting option which can quickly scale, I thought I'd attempt to figure out the economics of using something like this for some scientific computing. Would it be a cheap / easy / reliable alternative to the home-rolled Beowulf cluster ?

The advertised specs per node are: 1.7 GHz x86 (Xeon) processor, 1.75 GB of RAM, 160 GB of local disk, and 250 Mb/s of network bandwidth. Nodes cost US$0.10 per instance-hour. Bandwidth between nodes within EC2 is free, as is bandwidth from EC2 nodes to the S3 storage service; however, Internet traffic costs US$0.20/GB.

First, let's think about BLAST, the bread-and-butter sequence search tool for many bioinformaticians. As far as I understand (and from my own experience), NCBI BLAST works best when the entire database can be cached in RAM ... otherwise lots of disk thrashing ensues and the search time is bounded by disk I/O. The NCBI 'nr' (non-redundant) protein sequence database is currently about 3.3 GB (and growing), so it won't fit in RAM on one of these EC2 nodes. While I don't mind paying to thrash Amazon's disks a little, it will slow the search down.

However, if we use mpiBLAST the database gets split into chunks distributed evenly across the nodes, so if we were to use 3 nodes, the 'nr' database would be split into 1.1 GB chunks, which should fit in the RAM of each node (leaving ~600 MB of RAM for the OS and other overhead). Now the speed of the network interconnects between nodes matters, since we are no longer computing on a single node ... but from what I've read, 250 Mb/s should be enough for mpiBLAST to run without being bounded by the internode communication speed. (Actually, EC2 instances have shared gigabit interconnects, but since several instances might share the same ethernet card, gigabit performance 'per node' can't be expected. I guess the 250 Mb/s figure means there are probably four EC2 instances per physical server/ethernet card ??)

So with 3 nodes, this would cost US$0.30 per hour to run a (scalable) BLAST service, and the performance should scale better than linearly with the number of nodes added. If you need the job done faster, just re-split the database and create more EC2 VM instances (mpiBLAST should work with the Portable Batch System to do this transparently, but I guess this would require some code to interface PBS with the control of your EC2 'elastic cluster'). It would only cost US$0.66 to upload the database to EC2 in the first place, and about US$0.50 per month to store it in S3. This seems well within reach of many academic departments, and would really suit 'sporadic' users with occasional big jobs ....
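
Just to keep the back-of-the-envelope arithmetic in one place, here's a tiny Python sketch of the figures above. The instance and bandwidth prices are the advertised 2007 rates; the ~US$0.15/GB/month S3 storage price and the 600 MB OS overhead are my own assumptions, so treat the output as a rough guide only:

    # Rough cost sketch for a small mpiBLAST 'cluster' on EC2.
    # Instance and bandwidth prices are the advertised 2007 rates quoted above;
    # the S3 storage price and the OS overhead are my own guesses.

    NODE_HOUR_USD = 0.10            # per EC2 instance-hour
    TRANSFER_IN_USD_PER_GB = 0.20   # Internet -> EC2 bandwidth
    S3_USD_PER_GB_MONTH = 0.15      # assumed S3 storage price

    NR_SIZE_GB = 3.3                # current size of the NCBI 'nr' database
    NODE_RAM_GB = 1.75              # advertised RAM per instance
    OS_OVERHEAD_GB = 0.6            # guessed RAM left aside for the OS etc.

    def nodes_needed():
        """Smallest node count where each database fragment fits in RAM."""
        usable = NODE_RAM_GB - OS_OVERHEAD_GB
        n = 1
        while NR_SIZE_GB / n > usable:
            n += 1
        return n

    if __name__ == "__main__":
        n = nodes_needed()
        print("nodes needed:          %d" % n)
        print("cluster cost per hour: US$%.2f" % (n * NODE_HOUR_USD))
        print("one-off upload of nr:  US$%.2f" % (NR_SIZE_GB * TRANSFER_IN_USD_PER_GB))
        print("S3 storage per month:  US$%.2f" % (NR_SIZE_GB * S3_USD_PER_GB_MONTH))

With those assumptions it spits out the 3 nodes, US$0.30/hour, US$0.66 upload and ~US$0.50/month storage figures used above.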

Now for applications like molecular dynamics (MD) simulations (e.g. GROMACS, NAMD, CHARMM, etc.), a lot more internode communication bandwidth is required. Looking at these benchmarks for GROMACS, it looks like things should scale nicely to two or four EC2 nodes, but after that the scaling would probably drop off, due to the less-than-gigabit ethernet. That doesn't mean you won't get more speed from more nodes, just that at some point adding more nodes will give greatly diminishing returns. While I'm speculating here, I'd say it's probably better to leave this type of number crunching to the 'real' supercomputers or home-rolled, purpose-built clusters; EC2 may not be worth the cost/effort for big, long-running calculations. Others are using MPI applications on EC2 already though, and I'd love to be proved wrong.
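
To put a number on that 'diminishing returns' hand-waving, here's a toy model where each MD step pays a fixed communication overhead on top of the compute time. All the constants are invented purely for illustration; they're not GROMACS measurements:

    # Toy model of MD scaling on a slowish interconnect: the compute work is
    # shared across nodes, but each step also pays a communication cost that
    # doesn't shrink as nodes are added. Numbers are made up for illustration.

    def step_time(n_nodes, compute_s=1.0, comm_s=0.15):
        """Seconds per MD step on n_nodes (a single node pays no comm cost)."""
        comm = comm_s if n_nodes > 1 else 0.0
        return compute_s / n_nodes + comm

    if __name__ == "__main__":
        for n in (1, 2, 4, 8, 16, 32):
            print("%2d nodes -> %.1fx speedup" % (n, step_time(1) / step_time(n)))

On this made-up model the speedup flattens out well short of the node count, which is the sort of behaviour I'd expect on anything slower than dedicated gigabit.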

One of the current difficulties with running database-driven web applications on EC2 is that the virtual machine instances do not have persistent storage ... either a connection to a database running somewhere else needs to be used, or the precious data needs to be moved off each EC2 instance before shutting the server down. If it crashes before shifting data off ... goodbye database. I'm sure Amazon will come up with a solution to this, since it seems often requested on their forums.

Having non-persistent data wouldn't be such a big deal for mpiBLAST ... the servers should rarely crash, the results could be stored in Amazon S3 or sent to a remote machine as they arrive, and the sequence database can also be stored in S3 (for about US$0.50 per month ... dirt cheap). There are already a few FUSE S3 filesystem implementations floating about (like s3fs-fuse) ... I haven't tried them yet, but essentially they should allow S3 storage to be mapped transparently onto the Linux filesystem. My guess is it would be a bad idea to host a large MySQL database file on S3 using s3fs-fuse (there is a 5 GB file size limit for starters) ... but for lots of little-ish files, as are often generated by bioinformatics software, s3fs-fuse might just do the trick.
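
For the 'send the results to S3 as they arrive' idea, here's a minimal sketch of what that might look like using the boto library's S3 interface. I haven't actually tried this on EC2 yet, and the bucket name, result paths and credentials are made-up placeholders:

    # Minimal sketch: copy finished BLAST result files off the (non-persistent)
    # EC2 instance into S3. Assumes the boto library is installed; the bucket
    # name, result paths and credentials below are made-up placeholders.
    import glob
    import os

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    conn = S3Connection('MY_ACCESS_KEY_ID', 'MY_SECRET_ACCESS_KEY')
    bucket = conn.create_bucket('my-blast-results')  # returns the bucket if it already exists

    for path in glob.glob('/data/results/*.blast'):
        key = Key(bucket)
        key.key = os.path.basename(path)
        key.set_contents_from_filename(path)
        print("uploaded %s" % key.key)

Something like this run from cron (or at the end of each mpiBLAST job) would mean losing an instance only costs you the jobs in flight, not the accumulated results.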

Whew ! .. Now I'm really itching to spend some spare change and a few hours to see if running mpiBLAST on EC2 is as good an idea as it sounds.

Doh ! Just tried to set up an account and the Amazon EC2 limited beta is currently full ... I'll have to wait .. :(.


A few additional links I was also looking at while writing this post .. wow ... someone has some 'issues' with the NCBI BLAST implementation: http://blast.wustl.edu/blast/Memory.html
and http://blast.wustl.edu/blast/cparms.html


10 comments:

David Bullock said...

(Actually, EC2 instances have shared gigabit interconnects, but since several instances might share the same ethernet card, gigabit performance 'per node' can't be expected. I guess the 250 Mb/s figure means there are probably four EC2 instances per physical server/ethernet card ??)

Each physical system is dual processor, with two instances per system.

- Dave

Unknown said...
This comment has been removed by the author.
Unknown said...

I'm having difficulty with the comments on the blogger system, as they are all in Chinese with no option to get back to English...

I don't have trackbacks running on nodalpoint so I'll do it manually:

http://www.nodalpoint.org/2007/03/05/virtual_bioinformatics_clusters_with_ec2

Hopefully, Amazon's EC2 people will come across this post and open up an account for you. I would be genuinely interested in seeing how something like this would work as an intermediate step: having a virtual cluster before putting down big money for your own hardware.

Especially as a lot of institutions are averse to spending money. At least the powers that be at UNSW were.

Evan said...

I looked into this same possibility when EC2 was first announced, and decided that it wasn't worth the hassle, mainly because of the persistence and memory issues you mention. Plus, at the time, the infrastructure for deploying EC2 instances wasn't good. It has probably improved by now.

I do have an EC2 beta account that I never used. I assume it is still valid. Maybe I could transfer it to you somehow?

Unknown said...

I'm guessing I'll probably just have to sit tight and wait for Amazon to open up some more EC2 'beta' slots. In the meantime, I've tried to start playing with S3 in preparation ... it's been 48 hours and my account is still not 'authorized' .. not that I'm in any hurry, but my credit card appears valid for buying books and junk from Amazon, so I don't see why authorization should take this long. Anyhow, no choice but to just wait for them to do what they've gotta do ....

----

If I have ever made any valuable discoveries, it has been owing more to patient attention, than to any other talent -- (Sir) Isaac Newton

I am extraordinarily patient, provided I get my own way in the end -- Margaret Thatcher

Pete Skomoroch said...
This comment has been removed by the author.
Pete Skomoroch said...

I've started posting a tutorial describing a "roll-your-own" style MPI cluster on EC2. By the end, I'll upload a public AMI and some benchmark results. I'm hoping Amazon can find a way to support faster interconnects for at least a subset of the machines.

On-Demand MPI Cluster with Python and EC2 (part 1 of 3)

Andrew Mitry said...

Did Amazon give you an EC2 beta slot yet? I just tried to sign up for EC2 and got the same message - just curious how long it'll be before they open more slots.

Unknown said...

Yes, I was given a beta account last week ... so it took just under a month. Shame I don't have as much time to spend playing with it now, but I'll still poke away at it slowly over the next few months. Looks like Peter has done most of the heavy lifting for MPI with his On-Demand MPI Cluster with Python and EC2 tutorial. Pretty sweet !

cariaso said...

I've got mpiBLAST working well inside EC2. If you'd like to try it out, you will find this document helpful.
http://mpiblast.pbwiki.com/AmazonEC2

Also see the discussion at nodalpoint

I will be at BioIT World at the end of April, and am happy to discuss this topic with others.


Mike Cariaso * Bioinformatics Software * http://www.cariaso.com/