
We recently rolled out a private Beta for our new Search solution.  Not only is it a good functional test for us, but we're also experimenting with Amazon EC2 and S3 for hosting and data storage.  And from what I've seen so far, I'm impressed with Amazon and the approach they've taken to cloud computing. 


Amazon has made it dead easy to provision new servers.  They've created a collection of web services for starting and stopping instances.  It seemed odd to me, at first, that I'd have to manage my infrastructure through SOAP calls.  But client-side tools like Elasticfox and now Amazon's own AWS Console make it easy to manage.  There's also a good selection of Windows and various Linux flavors to choose from when setting up servers (we're running mostly Ubuntu 8.10 in Labs).
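Alongside SOAP, EC2 also exposes an HTTP Query API, which is what most client tools speak under the hood.  As a rough sketch of what one of those calls looks like, here's how a signed `RunInstances` request gets built (Signature Version 2 style); the credentials and AMI ID below are placeholders, not real values:

```python
import base64
import hashlib
import hmac
import urllib.parse


def sign_ec2_request(params, secret_key, host="ec2.amazonaws.com"):
    """Build a signed EC2 Query API GET request (Signature Version 2 style).

    params: the API action and its arguments, plus AWSAccessKeyId/Timestamp.
    secret_key: the AWS secret key used to HMAC-sign the request.
    """
    params = dict(params, SignatureMethod="HmacSHA256", SignatureVersion="2")

    # Canonical query string: keys sorted, everything percent-encoded.
    canonical = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )

    # The string to sign covers the method, host, path, and query string.
    to_sign = "\n".join(["GET", host, "/", canonical])
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()

    return f"https://{host}/?{canonical}&Signature={urllib.parse.quote(signature, safe='')}"
```

Fetching that URL (with real credentials) starts an instance; tools like Elasticfox just wrap calls like this in a friendlier UI.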


We wanted to make our Labs infrastructure extensible so we can roll out new pilot applications quickly, regardless of platform.  To do this we're setting up an Apache server on an EC2 Ubuntu host that routes to the appropriate app.  Requests to the base URL will be sent to a CMS/wiki that describes the various things we're working on.  Search, which currently resides at the root, will soon move to its own URL, and any future pilot we roll out will then be available at its own {pilot-name} URL.
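The routing Apache described above might look something like the fragment below; the internal host names and the /search path are placeholders for illustration, not our actual Labs URLs:

```apache
# Hypothetical front-end routing on the EC2 Apache host.
# Backend host names and paths are placeholders.
<VirtualHost *:80>
    ProxyPreserveHost On

    # Each pilot gets its own path prefix, proxied to that app's entry point.
    # More specific ProxyPass rules must come before the catch-all.
    ProxyPass        /search http://search-lb.internal:80/
    ProxyPassReverse /search http://search-lb.internal:80/

    # Everything else (the base URL) falls through to the CMS/wiki.
    ProxyPass        /       http://wiki.internal:80/
    ProxyPassReverse /       http://wiki.internal:80/
</VirtualHost>
```

Adding a new pilot is then just another ProxyPass pair above the catch-all, which is what makes the setup easy to extend.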


We're also setting up "global" memcached and MySQL servers, so that any application deployed to Labs might benefit from these services (we love memcached, by the way). 
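The typical way an app uses that pair of services is a read-through cache: check memcached first, and only fall back to MySQL on a miss.  Here's a minimal sketch of the pattern; `FakeMemcache` is a stand-in whose get/set interface mirrors a real memcached client, and `db_lookup` stands in for the actual database query:

```python
class FakeMemcache:
    """In-process stand-in for a memcached client, for illustration only.

    A real client (e.g. one pointed at the shared memcached server) exposes
    the same get/set-with-expiry interface.
    """

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, time=0):
        # A real memcached honors `time` as an expiry; this stand-in ignores it.
        self._store[key] = value


def cached_query(cache, db_lookup, key, ttl=300):
    """Return a value for `key`, hitting the database only on a cache miss."""
    value = cache.get(key)
    if value is None:
        value = db_lookup(key)        # fall through to the shared MySQL server
        cache.set(key, value, time=ttl)
    return value
```

Because the memcached and MySQL servers are shared, any new pilot gets this caching layer essentially for free.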


Each pilot application will be responsible for load balancing its requests.  The Search application uses HAProxy running on a dedicated Ubuntu server to route requests to a collection of Apache instances running mod_rails, distributed across Ubuntu "workers" (we can scale horizontally here, depending on load, by adding more workers).
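An HAProxy setup like the one described boils down to a short config; the fragment below is illustrative, and the worker addresses are placeholders:

```
# Illustrative haproxy.cfg fragment for the Search balancer.
# Worker IPs are placeholders; scaling out means adding server lines.
listen search 0.0.0.0:80
    mode http
    balance roundrobin
    server worker1 10.0.0.11:80 check
    server worker2 10.0.0.12:80 check
```

The `check` keyword gives basic health checking, so a dead worker is pulled out of rotation automatically, and horizontal scaling is just another `server` line plus a reload.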


So far (and it's been about 1.5 months) we haven't had any Amazon-caused downtime.  Personally, I think this is a game changer.

Tags: cloud_computing, amazon, ec2, infrastructure, labs