Archive for the ‘Articles’ Category

I’ll Take 26,280 Please

August 21, 2010

Being able to buy an entire IT infrastructure by the hour is a huge improvement in the decision cycle for IT operations.  It is hard to see this initially, but if you reverse the situation, it becomes obvious.  Imagine that you always bought a resource one unit at a time.  Now someone is proposing that you commit to a three-year supply all at once.  A three-year supply of hours is 26,280 (3 years × 365 days × 24 hours).  What else would you commit to purchasing more than 26 thousand of, in advance, with no return policy?

Being able to buy smaller chunks provides several key opportunities:

  • Faster decision cycles: You need far fewer meetings and far less planning to commit to just one hour of a data center.
  • Better responsiveness to changing needs: Your upgrade (and downgrade) window opens every hour.
  • ROI is much easier to capture: It is easy to keep 100% occupancy in a building where you can change the number of rooms every hour.
  • Access to smaller opportunities: Sometimes big opportunities only require small IT resources, but those IT resources may only come in large bundles, making the opportunity non-cost-effective.  (Think about using 1,000 computers to do a complex calculation in under an hour, when you only need them for that one hour.)

Buying 26,280 units in advance seems nuts, but this is exactly what we are doing when we buy a piece of IT equipment with a three-year useful life, rather than renting that same capability from a cloud provider.
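To make that concrete, here is a back-of-the-envelope sketch.  The prices are hypothetical round numbers I picked for illustration, not any vendor's actual rates:

    # Buying up-front vs. renting by the hour (illustrative prices only)
    HOURS_PER_YEAR = 365 * 24                  # 8,760
    commitment_hours = 3 * HOURS_PER_YEAR      # 26,280, the number in the title

    server_purchase = 3000.00                  # hypothetical up-front cost, 3-year life
    hourly_rate = 0.12                         # hypothetical cloud price per hour

    # The up-front purchase only wins if you actually use most of those hours.
    for utilization in (1.00, 0.50, 0.25):
        rental = hourly_rate * commitment_hours * utilization
        print(f"{utilization:.0%} utilization: rent ${rental:,.0f} vs. buy ${server_purchase:,.0f}")

At full utilization the purchase comes out slightly ahead, but at 50% or 25% utilization (common for real servers), renting by the hour wins easily.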

DR Doesn’t Matter, Except When it Does

April 19, 2010

Information Technology has had a great impact on business productivity during my lifetime.  My father ran his business with a manual typewriter.  This was during the time when the IBM Selectric was popular, and personal computers were available, but not very good.  Having a manual typewriter at that time was quirky, but it wasn’t that out-of-place in a small business that just needed something simple, cheap, and functional.

My children (his granddaughters) do their elementary school homework using desktop publishing software and a color printer.  They can definitely do a lot more than my dad could on his trusty old Smith-Corona.

During the first wave of personal computers in the workforce, lots of business people wondered if they would actually help the business be more productive, or if they would just be a distraction.  After all, my dad seemed just fine with his old typewriter.  Today, that question has been pretty well answered.  We all know that modern businesses depend on their computers, and that if we took them away, we would need twice as many people, and maybe even then couldn’t get the work done with the same speed or level of quality.  (Imagine modern phone bills being hand-typed!)

So we now think of IT as a potential for competitive advantage.  Banks compete as much on their on-line banking system’s features as on the hours that their branches are open.  Better IT systems can cut cost, improve speed and quality of service, and make your business more attractive to customers.  But this isn’t true of all IT operations.  Some of them are necessary evils, like backup and disaster recovery (DR).

If you design your backup solution perfectly, or you build an elaborate DR plan, it does nothing.  It doesn’t get you any more customers, it doesn’t cut your costs, it doesn’t let you raise your prices.  From a business perspective, an investment in DR doesn’t matter.  Until it does.  If you have a significant failure, and need to recover systems and data, DR matters.  In fact, the general consensus is that large businesses that have a significant outage lose customers and reputation (i.e. future customers), and small businesses that are down for several days may never come back.

So the right way to think about DR is not by how much you will gain by doing it right, but by how much you will lose if you screw it up.  That’s right – if you get it right, you get nothing, but if you get it wrong, you potentially lose everything.  The question is – how much time, energy, and money do we want to invest into getting good at this subset of information technology?  Wouldn’t we rather put more focus on the areas that actually can create value for the business?  For areas where we cannot make a difference, we should think about outsourcing.

This is where cloud computing earns a lot of its value.  Cloud is not better because it can offer servers at a lower cost; cloud is better because cloud vendors have an incentive to be better, and cheaper, and more reliable, and more secure than their competitors.  They earn their living getting it right (at least the best ones do).  Non-IT businesses have to look at things like DR, where doing better provides no business improvement, but doing worse hurts the business, and ask if they can compete with companies that specialize in the IT basics – power, cooling, network, and racked / stacked CPU and storage.

If you don’t think you can do better than the cloud vendor, and you suspect that you could (on a bad day) do worse, shouldn’t you just find a vendor you can trust, and pay them to do the stuff that doesn’t matter?

Peak vs. Average

April 8, 2010

Maybe it is because of the business I am in, but I believe that cloud computing and disaster recovery are made for each other.  There are lots of reasons for this, but the main one comes down to peak vs. average utilization.

There are a couple of ways to build a disaster recovery (DR) solution, but to avoid long outages, you need another data center off-site, stocked with enough equipment to get your critical systems back on-line.  This is where the peak vs. average problem shows up.  You have to provision that data center for the peak load during recovery – that is, you need servers, storage, and networks big enough to run your production workload.  But you only get the benefit of average utilization – and for the DR center, average is mostly idle time.

As the graphic illustrates, most of the time, your DR data center is sitting idle, but it has to be ready to carry the production burden for all your critical systems at any time.  You have to pay for peak utilization, but you only get the benefit of average.

Cloud computing lets us break that model – you can have huge peak capacity standing by, but only pay for what you consume.  In a DR setting, you can pay for small amounts of utilization (e.g. just the data storage costs) most of the time, and then bring more resources on-line when you need them for recovery.
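A minimal sketch of that cost model, with made-up hourly rates (none of these numbers come from a real price list):

    # Peak vs. average: the idle DR site problem (illustrative prices only)
    HOURS_PER_YEAR = 365 * 24

    # Traditional DR: a second site provisioned for peak, paid for year-round.
    peak_hourly_cost = 10.00                   # hypothetical fully-loaded cost
    traditional = peak_hourly_cost * HOURS_PER_YEAR

    # Cloud DR: pay for storage and replication all year, and for full
    # compute capacity only during recovery (plus the occasional test).
    storage_hourly_cost = 0.50                 # hypothetical
    recovery_hours = 72                        # e.g. one three-day event per year
    cloud = storage_hourly_cost * HOURS_PER_YEAR + peak_hourly_cost * recovery_hours

    print(f"traditional DR site: ${traditional:,.0f}/year")   # $87,600
    print(f"cloud-based DR:      ${cloud:,.0f}/year")         # $5,100

The exact ratio depends entirely on the prices you plug in, but the shape of the result doesn't change: paying peak rates only for the hours you actually need them is dramatically cheaper than paying them around the clock.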

I will be doing a joint webinar with Double-Take Software and Amazon Web Services on April 21.  It is free, and you can register online.  We will talk about how to build a good DR solution using the cloud that provides exactly these kinds of benefits.

Big Scale

March 15, 2010

Amazon’s CTO, Werner Vogels (@werner), just tweeted that @rightscale has launched over 1,000,000 EC2 instances.  At first, I thought he meant they had 1,000,000 instances running simultaneously.  That would be very impressive, but not surprising, given Amazon’s attitude about scale.

On second thought, I decided that this probably just means that RightScale had started that many instances, and who knows how long they ran.  Probably many (most?) of them were started and stopped several times in the normal course of development, or to react to scaling needs, and so were counted multiple times.

Even if the truth is more of the latter, and less of the former, I think this is still a nice bit of credibility for Amazon Web Services.  Lots of folks think AWS is nothing special – after all, we can all run virtual machines in our own data centers, and probably with more features than AWS offers.  I think that AWS is all about operating virtual machines (and the associated storage and networking infrastructure) very efficiently, very reliably, at very large scale.  And this milestone (for RightScale) points directly to that large-scale operation: how many of us have ever provisioned 1,000,000 virtual machines in our data centers – for any amount of time?

I know that this is only one small metric, and not the only decision criterion, but when you need a dozen virtual machines, do you feel more confident starting them in a rack where you have run a dozen other VMs, or on an infrastructure where VMs have been started over 1M times by their partners?

Basic Math

February 23, 2010

Amazon today announced the availability of Reserved Instance pricing for Windows.  For users who will run server instances for more than about 37% of the year, it is effectively a price reduction.  By paying a fee up-front, you get a huge discount on the per-hour price.  The break-even point is about 37% of a year if you start and stop all the time, or 19.3 weeks if you just run continuously.

For example, if you are running on our new cloud recovery service, you will want to run an EC2 Small Windows Instance, 24 hours per day, 7 days per week, for as long as you use the service.  Here is how the numbers work out for you:

  • On-demand pricing: $0.13 per hour, or $1,139 per year.
  • Reserved instance pricing: $227.50 up-front fee (for the one-year deal), plus $0.06 per hour, totaling $753 per year.

This is a nice savings of $386 per year, or about 34%.  (Amazon also offers a three-year deal, with even better savings; it breaks even at about 30 weeks of full-time usage.)  Of course, one of the key benefits of cloud computing is the ability to pay as you go, and stop whenever you need to.  In this case, though, I suggest making a small investment.  As long as you think you will keep running for 19 weeks, you cannot lose, and every hour you run after that gives you a nice discount.
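For anyone who wants to check the arithmetic, here is the whole calculation, using the one-year prices listed above:

    # Reserved vs. on-demand break-even for an EC2 Small Windows instance
    HOURS_PER_YEAR = 365 * 24                  # 8,760

    on_demand_rate = 0.13                      # $/hour
    reserved_fee = 227.50                      # one-year up-front fee
    reserved_rate = 0.06                       # $/hour with the reservation

    # Break-even: hours at which the up-front fee pays for itself
    break_even = reserved_fee / (on_demand_rate - reserved_rate)
    print(f"break-even: {break_even:,.0f} hours = "
          f"{break_even / HOURS_PER_YEAR:.0%} of the year = "
          f"{break_even / (24 * 7):.1f} weeks of 24x7 use")
    # prints: 3,250 hours = 37% of the year = 19.3 weeks

    # Annual cost of running 24x7 under each plan
    on_demand_annual = on_demand_rate * HOURS_PER_YEAR
    reserved_annual = reserved_fee + reserved_rate * HOURS_PER_YEAR
    savings = on_demand_annual - reserved_annual
    print(f"on-demand ${on_demand_annual:,.0f} vs. reserved ${reserved_annual:,.0f}: "
          f"save ${savings:,.0f} ({savings / on_demand_annual:.0%})")
    # prints: on-demand $1,139 vs. reserved $753: save $386 (34%)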

Will Electricity Someday Become a Commodity?

February 10, 2010

via Twitter, @brennels asks: “Is Cloud Computing a Commodity? or will it be?”

I think it is a good question, because the commodity idea is at the heart of the cloud computing idea.  In “The Big Switch”, Nicholas Carr draws a parallel between the history of the electric utility industry and that of the computer industry.

Electricity from a utility is, perhaps, the ultimate commodity.  Each unit of consumption is completely undifferentiated, to the point that few people can even identify where or even how the electricity in their home or office is generated.  This wasn’t always true.  One hundred years ago, 95% of electricity was generated by custom-built systems owned, managed, and maintained by technicians employed by the factory itself.  The industrial age was defined by the power of machines in factories, and at the start of that age every factory generated its own energy.

Today, we are in the information age, and it is defined by the power of our information systems.  We are used to thinking of these systems as the humming, blinking boxes, racked and stacked in air-conditioned rooms.  But that is like thinking of your house as the land it is built on.  Processors and memory and disks and cabling are just the substrate, the surface upon which information systems are built.  The real systems are software applications.  When a CFO wants a new accounting system, the vendor typically says “start with a server with X capacity, and Y operating system”, and then makes configuration changes to the accounting software to accommodate the business.

In a factory, that is like a machine demanding a certain type and amount of electricity.  Today, a factory owner doesn’t have his PT (power technology) department build up a properly sized generator.  He just has an electrician put the proper type of outlet in the wall, and expects to see his monthly electricity bill go up.

Cloud computing has this same promise, that we can just buy as much undifferentiated computing power as we need, without building it ourselves.  Along the progression from dedicated on-site generators to ubiquitous electric utility, though, were a few stages.  Thomas Edison invented the idea of the electric utility, and launched the first one.  But his idea was a bunch of small providers, serving the local community, and his companies supplied the equipment to those providers.  Large companies bought his equipment and served their own office campuses.

Today, we have an integrated grid of power generators and consumers.  Adding more capacity to the grid is like pouring more water into a barrel – it all looks the same to the consumer.  Amazon is starting to make cloud computing look like a commodity, because I can just consume more storage and more CPU cycles, and they all look about the same.  But someday, I won’t care if my CPU cycles come from Microsoft, or Amazon, or Google, or Rackspace, because they will all look alike.  Maybe some trader will buy up cheap storage-hours in bulk from a local vendor, and sell them on the grid at market price.

Is Cloud Computing a Commodity?  It is getting close – I can order capacity on demand, and not worry about how it got built, or even too much about where it lives.  Someday, when I don’t care whose name is on the building where my capacity lives, then it will be a real commodity.

Amazon Price Reduction

February 3, 2010

On Monday, Amazon Web Services (AWS) announced a price reduction.  Specifically, they reduced the cost of outbound bandwidth by two cents per GB for all pricing tiers.  At the lowest-volume pricing tier (the first 10 TB per month), this is about a 12% discount.  At the highest-volume tier (over 150 TB per month), the discount is nearly 20%.  Depending on how you are using AWS, this might not be that big of a deal.  But if, say, you are running a web application that consumes a lot of outbound bandwidth for page views and downloads, this could be a big deal.  What if half your AWS bill comes from outbound bandwidth?  You would suddenly be saving between 5% and 10% of your costs.
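Working backwards from those percentages, here is the math.  The two-cent cut is from the announcement; the pre-reduction tier prices below are my assumption, inferred from the discount percentages:

    # Outbound bandwidth price cut (old tier prices are assumed, not quoted)
    cut = 0.02                                 # $/GB reduction, per the announcement
    old_prices = {
        "first 10 TB/month": 0.17,             # assumed pre-reduction price
        "over 150 TB/month": 0.10,             # assumed pre-reduction price
    }

    for tier, old in old_prices.items():
        print(f"{tier}: ${old:.2f} -> ${old - cut:.2f}/GB ({cut / old:.0%} off)")
    # prints: about 12% off at the low-volume tier, 20% off at the high end

    # If half your bill is outbound bandwidth, your total bill drops by
    # half of the bandwidth discount: roughly the 5-10% range noted above.
    for old in old_prices.values():
        print(f"overall bill reduction: {0.5 * (cut / old):.0%}")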

Amazon has done this a couple of times in the time I have been watching this space.  They have lowered the cost of compute time, and also inbound bandwidth.  I can only assume that any IaaS vendor competing with Amazon will have to make similar price reductions.  I think this behavior could be one of the greatest features of cloud computing.  Now, businesses lower their prices all the time.  In the computer world, we know that memory and disks will be cheaper next year than they were this year.  But knowing that the next guy is going to get a better price on a hard disk than you got is not that interesting.  On the other hand, knowing that your on-going operating cost for your cloud-based infrastructure is going to go down significantly, without you even having to change vendors or renew a long-term contract – well, that is pretty interesting.

Private Clouds are an Oxymoron

August 26, 2009

I have been to several cloud conferences over the last few months, and I hear a lot about private clouds.  The general idea seems to follow this basic logic:

  1. Cloud computing offers a lot of promise in efficiency, scalability, and flexibility.
  2. As an enterprise-sized business, we are very risk averse, and public cloud offerings scare us, so...
  3. Let’s do the same thing, but make it private!

The trouble with this is that almost all of the flexibility, scalability, and efficiency in cloud offerings come from sharing the load with other customers.  For example, any business has to build for peak capacity, but can only easily monetize the average utilization.  A public cloud vendor can share the cost of that excess capacity across all of its customers, lowering the cost of the inefficiency for everyone.  A private cloud must, by definition, build (and pay for) peak capacity, and bear the cost of that inefficiency alone.
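A toy simulation makes the pooling effect visible.  The demand numbers here are invented, but the punch line holds for any bursty workload:

    # Sum-of-peaks (everyone builds alone) vs. peak-of-sum (shared cloud)
    import random

    random.seed(42)
    customers, hours = 20, 8760
    # Each customer idles at 10 units and bursts to 100 about 5% of the time.
    demand = [[100 if random.random() < 0.05 else 10 for _ in range(hours)]
              for _ in range(customers)]

    sum_of_peaks = sum(max(d) for d in demand)          # 20 private builds
    peak_of_sum = max(sum(d[h] for d in demand)         # one shared pool
                      for h in range(hours))

    print(f"capacity if each customer builds for its own peak: {sum_of_peaks}")
    print(f"capacity if the load is pooled:                    {peak_of_sum}")

Because the customers rarely burst at the same moment, the pooled peak comes out at a fraction of the sum of the individual peaks; that gap is the efficiency a private cloud gives up.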

I suppose that the tools and technologies that enable cloud computing can be applied to private data centers to improve operational efficiency, and make the IT staff more responsive to business unit requests, but it seems too much to call this a private cloud.

In fact, this is the exact opposite of how Amazon Web Services came into being.  The story is that they built standardized platforms to support their internal requirements, then realized that others could benefit from these standardized capabilities.  I am all for making your internal operations be more flexible, but using these technologies internally and calling it a private cloud is like drinking alone and calling it a private party.