Posts Tagged ‘Amazon’

How Lieberman Got Amazon To Drop Wikileaks | TPMMuckraker

December 3, 2010

This story suggests that a senator was able to coerce Amazon Web Services into dropping a paying customer (Wikileaks), without any legal due process.

Amazon disputes this, claiming that the decision was their own, because the customer violated the AWS terms of service.

From Amazon’s message, it seems like Wikileaks is in violation of the TOS:

It’s clear that WikiLeaks doesn’t own or otherwise control all the rights to this classified content. Further, it is not credible that the extraordinary volume of 250,000 classified documents that WikiLeaks is publishing could have been carefully redacted in such a way as to ensure that they weren’t putting innocent people in jeopardy. Human rights organizations have in fact written to WikiLeaks asking them to exercise caution and not release the names or identities of human rights defenders who might be persecuted by their governments.

The trouble is, Wikileaks has not actually been charged with or convicted of a crime, and the long-term effects of the leak are debatable, so applying these terms of service is fairly subjective.  The whole idea of Wikileaks is, of course, controversial.  That is what makes Amazon's move feel like activism – as if the Wikileaks service was suspended simply because Amazon doesn't like them.

For those of us building a business based on the AWS Infrastructure as a Service model, this is very scary.  The one thing that we can count on with our own servers is that they don’t judge our content.  We don’t have to convince our disk drives that our cause is just, and our moral compass is intact.  Whether AWS responded to pressure from Joe Lieberman or acted on their own doesn’t really matter.  The message is that if AWS thinks you are up to something fishy, their policy is to drop your service first, and ask questions later.

When I am looking at a business plan, I worry about technology risk, market risk, and execution risk.  Normally, I would think that building a business on top of a cloud platform like AWS would dramatically reduce the execution risk, but now I have to worry about the risk that Amazon won’t like what I am doing.  Amazon has been leading the cloud computing charge, and has been saying every step of the way “don’t worry, you can trust us”.  I think banning Wikileaks was a giant step backwards in their credibility as a utility partner.


Ten Cloud Computing Opportunities

August 23, 2010

Based on my recent work with Double-Take Software’s Cloud business, I guess I am now officially a cloud computing entrepreneur.  I am looking for my next project, and just decided to go open source with this.  Here are ten ideas off the top of my head – they definitely need some refinement, and some of them will probably not pan out, but feel free to steal them, offer improvements, or how about this – contact me to collaborate on one.

  1. Modify an existing open source cloud platform into a drop-in solution for hosting companies, with a global ecommerce site and management interface.  Hosting companies can get into cloud easily; customers get broad geographic coverage with a single interface.  (I know at least one company is already claiming this, but it is a big, big market).
  2. Take one (or more) existing open source web applications or frameworks (like MediaWiki, Django, Sugar CRM, etc.), optimize the deployment for a highly scalable distributed system (i.e. a load-balanced front-end web farm, a distributed / scalable database back-end, memcached, etc.), and make it available at a very low cost for entry-level users, with the price scaling up with usage.
  3. Acquire an enterprise-class (or academic usage) simulation / analysis solution, and modify it to use map-reduce.  Deliver the results of massive calculations in a few minutes (or seconds), and only bill for the usage.  Commoditize massive calculations at a price smaller users can afford.
  4. Build a management platform to transparently migrate virtualized workloads from a private cloud up to a larger public cloud provider, and back down.  (This can be expanded to cross-cloud, or cross-region migrations.)
  5. Create an encrypting, deduplicating network transport protocol and file system that minimizes the bandwidth and storage required to keep workloads synchronized between private and public clouds.  (Useful for #4; see the sketch after this list.)
  6. Use one of the open-source cloud platforms to build an Amazon compatible cloud in places where Amazon doesn’t have a data center.  Amazon is expanding rapidly, but it is a big world, and most countries have regulations restricting businesses from hosting data outside the country.  Europe is an especially fertile market for this.
  7. Build a GUI macro editor with building blocks that include Amazon (or another, or all) cloud resources, ecommerce, and maybe some social and / or mobile features.  Let customers build new cloud-based web applications by drag & drop.
  8. Create a web app that lets a product manager or a sales person enter the typical problems their customers face, the features of their product that solve those problems, and the ultimate benefits.  Let their customer walk through a wizard, checking boxes to describe their needs, and automatically generate a beautifully designed, customized proposal based on those requirements.  The whole thing is built from standardized templates, but it feels totally custom to both the vendor and the customer.
  9. Build a Linux-based file server appliance for SMB, with HSM and version archiving to the cloud.  It basically has storage that never ends, and the most current / relevant files are always local.
  10. Add virtual machine recovery and remote access capabilities to an existing laptop backup solution.  When your laptop blows up, you are happy to know that all your files are backed up on-line.  Wouldn’t you be even more excited if you could boot up a virtual machine of your laptop on-line, and finish the task you were already late on?
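
To make #5 a little more concrete, here is a minimal sketch in Python of the core idea: split data into chunks, address each chunk by its content hash, and only ship the chunks the other side has not already seen.  Everything here (the ChunkStore class, fixed-size chunking, the stubbed-out encryption) is hypothetical and simplified; a real implementation would use content-defined chunking and authenticated encryption before upload.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # fixed-size chunks; a real system would use content-defined chunking


def chunk_ids(data: bytes):
    """Split data into chunks, addressing each by its SHA-256 content hash."""
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk


class ChunkStore:
    """Toy stand-in for the remote (public cloud) side of the protocol."""

    def __init__(self):
        self.chunks = {}  # chunk id -> stored chunk bytes

    def missing(self, ids):
        """Given the sender's chunk ids, report which ones must be uploaded."""
        return [i for i in ids if i not in self.chunks]

    def put(self, cid, blob):
        self.chunks[cid] = blob


def sync(data: bytes, remote: ChunkStore) -> int:
    """Send only unseen chunks to the remote store; return bytes transferred."""
    manifest = list(chunk_ids(data))
    need = set(remote.missing([cid for cid, _ in manifest]))
    sent = 0
    for cid, chunk in manifest:
        if cid in need:
            remote.put(cid, chunk)  # a real system would encrypt before upload
            sent += len(chunk)
            need.discard(cid)  # duplicates within one payload are sent once
    return sent
```

Sync the same workload image twice and the second pass transfers zero bytes; change a few blocks and you only pay the bandwidth for the chunks that actually differ.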

Peak vs. Average

April 8, 2010

Maybe it is because of the business I am in, but I believe that cloud computing and disaster recovery are made for each other.  There are lots of reasons for this, but the main one comes down to peak vs. average utilization.

There are a couple of ways to build a disaster recovery (DR) solution, but to avoid long outages, you need another data center off-site, stocked with enough equipment to get your critical systems back on-line.  This is where the peak vs. average problem shows up.  You have to provision that data center for the peak load during recovery – that is, you need servers, storage, and networks big enough to run your production workload.  But you only get the benefit of average utilization – and for the DR center, average is mostly idle time.

Most of the time, your DR data center is sitting idle, but it has to be ready to carry the production burden for all your critical systems at any moment.  You have to pay for peak utilization, but you only get the benefit of average.

Cloud computing lets us break that model – you can have huge peak capacity standing by, but only pay for what you consume.  In a DR setting, you can pay for small amounts of utilization (e.g. just the data storage costs) most of the time, and then bring more resources on-line when you need them for recovery.
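
To put rough numbers on it, here is a back-of-the-envelope comparison.  All the figures below (server cost per hour, storage cost, fleet size, recovery hours) are hypothetical placeholders, not real quotes – the point is only the shape of the math:

```python
HOURS_PER_YEAR = 24 * 365

# Hypothetical figures for a 20-server critical workload.
server_hour = 0.50      # $/hour per recovery-capable server
storage_month = 500.0   # $/month to keep replicated data in cloud storage
servers = 20
recovery_hours = 100    # hours/year the full fleet actually runs (tests + drills)

# Traditional DR site: provisioned for peak, paid for around the clock.
traditional = servers * server_hour * HOURS_PER_YEAR

# Cloud DR: pay for storage all year, compute only while recovering.
cloud = storage_month * 12 + servers * server_hour * recovery_hours

print(f"traditional DR site: ${traditional:,.0f} per year")
print(f"cloud-based DR:      ${cloud:,.0f} per year")
```

With these made-up numbers the standby site costs $87,600 a year while the cloud approach costs $7,000 – the gap is exactly the difference between paying for peak and paying for average.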

I will be doing a joint webinar with Double-Take Software and Amazon Web Services on April 21.  It is free, and you can register online.  We will talk about how to build a good DR solution using the cloud that provides exactly these kinds of benefits.

Big Scale

March 15, 2010

Amazon’s CTO, Werner Vogels (@werner) just tweeted that @rightscale has launched over 1,000,000 EC2 instances.  At first, I thought he meant they had 1,000,000 instances running simultaneously.  That would be very impressive, but not surprising, given Amazon’s attitude about scale.

On second thought, I decided that this probably just means that RightScale has started that many instances in total, with no telling how long each one ran.  Many (most?) of them were probably started and stopped several times in the normal course of development, or in response to scaling needs, and so were counted multiple times.

Even if the truth is more of the latter and less of the former, I think this is still a nice bit of credibility for Amazon Web Services.  Lots of folks think AWS is nothing special – after all, we can all run virtual machines in our own data centers, probably with more features than AWS offers.  But AWS is all about operating virtual machines (and the associated storage and networking infrastructure) very efficiently, very reliably, and at very large scale.  And this milestone (for RightScale) points directly to that large-scale operation: how many of us have ever provisioned 1,000,000 virtual machines in our data centers – for any amount of time?

I know that this is only one small metric, and not the only decision criterion, but when you need a dozen virtual machines, do you feel more confident starting them in a rack where you have run a dozen other VMs, or on an infrastructure where VMs have been started over 1M times by their partners?

Basic Math

February 23, 2010

Amazon today announced the availability of Reserved Instance pricing for Windows.  For users who will run server instances for more than about 37% of the year, this amounts to a significant price reduction.  By paying a fee up-front, you get a big discount on the per-hour price.  The break-even point is about 37% of the year's hours if you start and stop intermittently, or about 19.3 weeks if you just run continuously.

For example, if you are running on our new cloud recovery service, you will want to run an EC2 Small Windows Instance, 24 hours per day, 7 days per week, for as long as you use the service.  Here is how the numbers work out for you:

  • On-demand pricing: $0.13 per hour, or $1,139 per year.
  • Reserved instance pricing: $227.50 up-front fee (for the one-year deal), plus $0.06 per hour, totaling $753 per year.

This is a nice savings of $386 per year, or about 34%.  (Amazon also offers a 3-year deal with even better savings; it breaks even at about 30 weeks of full-time usage.)  Of course, one of the key benefits of cloud computing is the ability to pay as you go and stop whenever you need to.  In this case, though, I suggest making the small up-front investment.  As long as you think you will keep running for 19 weeks, you cannot lose, and every hour you run after that earns you a nice discount.
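
Here is the same arithmetic as a quick Python check, using the prices quoted above ($0.13 per hour on-demand, $227.50 up-front plus $0.06 per hour reserved):

```python
HOURS_PER_YEAR = 24 * 365          # 8,760 hours

on_demand = 0.13                   # $/hour, EC2 Small Windows instance (2010)
reserved_fee = 227.50              # one-year up-front fee
reserved_rate = 0.06               # $/hour with a reserved instance

# Break-even: hours at which the up-front fee is repaid by the cheaper rate.
break_even_hours = reserved_fee / (on_demand - reserved_rate)
print(f"break-even: {break_even_hours:,.0f} hours "
      f"({break_even_hours / HOURS_PER_YEAR:.0%} of the year, "
      f"{break_even_hours / (24 * 7):.1f} weeks if running 24x7)")

# Full-year cost, running 24x7.
yearly_on_demand = on_demand * HOURS_PER_YEAR
yearly_reserved = reserved_fee + reserved_rate * HOURS_PER_YEAR
savings = yearly_on_demand - yearly_reserved
print(f"on-demand: ${yearly_on_demand:,.0f}  reserved: ${yearly_reserved:,.0f}  "
      f"savings: ${savings:,.0f} ({savings / yearly_on_demand:.0%})")
```

Running it reproduces the numbers above: a 3,250-hour (19.3-week) break-even, $1,139 vs. $753 for a full year, and $386 (34%) in savings.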

Amazon Price Reduction

February 3, 2010

On Monday, Amazon Web Services (AWS) announced a price reduction.  Specifically, they reduced the cost of outbound bandwidth by two cents per GB across all pricing tiers.  At the lowest-volume tier (the first 10 TB per month), this is about a 12% discount.  At the highest-volume tier (over 150 TB per month), the discount is nearly 20%.  Depending on how you are using AWS, this might not be a big deal.  But if, say, you are running a web application that consumes a lot of outbound bandwidth for page views and downloads, it could be.  What if half your AWS bill comes from outbound bandwidth?  You would suddenly be saving roughly 6% to 10% of your total costs.
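
For the curious, here is the arithmetic behind those percentages.  The pre-reduction per-GB prices below are the values implied by the numbers in this post (a $0.02 cut is about 12% of $0.17 and about 20% of $0.10), not an official price list:

```python
cut = 0.02  # $/GB reduction in outbound bandwidth price

# Pre-reduction $/GB rates implied by the discounts quoted above.
tiers = {"lowest-volume (first 10 TB/mo)": 0.17,
         "highest-volume (over 150 TB/mo)": 0.10}

for name, old_price in tiers.items():
    discount = cut / old_price
    print(f"{name}: {discount:.0%} off bandwidth")
    # If half your bill is outbound bandwidth, the total drops by half that.
    print(f"  total-bill impact at 50% bandwidth share: {discount / 2:.0%}")
```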

Amazon has done this a couple of times in the time I have been watching this space.  They have lowered the cost of compute time, and also of inbound bandwidth.  I can only assume that any IaaS vendor competing with Amazon will have to make similar price reductions.  I think this behavior could be one of the greatest features of cloud computing.  Businesses lower their prices all the time, of course.  In the computer world, we know that memory and disks will be cheaper next year than they were this year.  But knowing that the next guy will get a better price on a hard disk than you got is not that interesting.  On the other hand, knowing that the on-going operating cost of your cloud-based infrastructure will go down significantly, without you even having to change vendors or renew a long-term contract – well, that is pretty interesting.