Archive for the ‘Articles’ Category

Naming the parts is not enough

March 19, 2017

Some people can name all the parts of a car, and have strong opinions about each of them.  That doesn’t mean that they know how to build a car that will work, or one that other people will want to drive.  And it definitely doesn’t say anything about their ability to build cars at large scale with a price and features that will attract a large customer base and generate a profit for the company.

The same is true for building software.  We know that many successful software companies today are built from web pages, databases, and maybe some machine learning.  Knowing this doesn’t help us know how to build a product people will like, or how to build a successful software company.

First we fear it, then we are mad when it isn’t perfect

September 2, 2015

Self-driving cars seem like they are just around the corner, and one of the benefits they promise is greater safety. Depending on who you ask, this might be a huge boon to the world, or an impossible dream, or the beginning of the AI apocalypse. There is almost always fear about new technology, but there is also a predictable path for successful innovations: First we fear it, then we overestimate the benefits, then rich people get it, then everyone gets it, and finally we take it for granted and start complaining that it isn’t better.

Today, autonomous vehicles are in the “fear and overestimate” stage. Another life-saving automobile technology, airbags, was a new thing when I was young, but is now in the “take it for granted” stage. An article in the WSJ (2015-09-02, print edition: “Fewer Air-Bag Replacements Needed”, section B2) had a picture with a caption that caught my eye: “Faulty Takata air-bag inflaters have been linked to eight deaths world-wide.” The article is not completely clear, but it seems like those deaths are from the air-bag failing to inflate when required.

We have some work to do to get there, but soon enough, today’s fears about allowing AI into our cars will be replaced with outrage that some manufacturer’s AI didn’t avoid enough accidents.

Our Competitive Advantage is Ease-of-Use

March 7, 2015

When developing a product, often a desired feature is “easy”. Perhaps this is even the planned differentiation: being easier than the competition. But “easy” doesn’t actually exist. It is like “cold” – physics has no concept of cold, only its opposite, heat. To make something cold, you must find a way to remove the heat. Likewise, if we want to make something easy, we have to find its opposite: hard.

We will find it nearly impossible to specify (or later market) “easy” in our product. Instead, we can work to understand what is hard about existing tasks, and remove those hard things. Now we have something concrete to specify in our product development, and something specific to market to our customers.

Cloud Platform Wanted

April 28, 2011

Cloud computing is about buying just the amount of data center resources that you need, and having the ability to change your mind about that quickly.  Any IaaS cloud provider worthy of the name will let you spin up a new virtual machine any time you want to add capacity to your data center.  The hard part is getting your application to take advantage of that extra capacity.  (Despite what I wrote here, this applies to private clouds, too.)  Most off-the-shelf applications don’t go twice as fast if you have twice as many servers – in fact, most off-the-shelf applications are designed to run on just one server (or virtual machine).  Even the typical n-tier application is constructed from a handful of special-purpose systems – e.g. web servers, database servers, application servers, file servers.  Although the web layer in this architecture can often benefit from just adding more servers, the database layer typically doesn’t.  If you are building something more specialized than the typical “web server that displays stuff from a database”, you probably have to invent a way to distribute your work across many machines, and then you have to build admin or automation tools to support adding and removing resources.  Now you are hiring developers who are good at building distributed applications, and spending lots of time (and perhaps a limited supply of start-up capital or project budget) building out a robust platform that your real project will run on.
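To make the “distribute your work across many machines” point concrete, here is a minimal sketch using only the Python standard library. It is not a cloud platform – the task function, the WORKERS knob, and the local process pool are stand-ins – but it shows the shape such a platform has to encourage: work expressed as independent tasks, with capacity adjusted by adding or removing workers (a networked queue would replace the local pool in a real distributed system).

```python
# Illustrative only: an application structured as independent tasks, so the
# same code runs on one small machine or, conceptually, on many workers.

from multiprocessing import Pool
import os

def handle_request(task_id: int) -> int:
    """Stand-in for one unit of application work (a request, a report, etc.)."""
    return task_id * task_id  # pretend this is an expensive computation

if __name__ == "__main__":
    tasks = range(10_000)

    # WORKERS is the knob an administrator would turn: 2 on a laptop,
    # dozens of machines behind a real queue in a distributed deployment.
    workers = int(os.environ.get("WORKERS", "2"))

    with Pool(processes=workers) as pool:
        results = pool.map(handle_request, tasks)

    print(f"processed {len(results)} tasks with {workers} workers")
```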

In Rework, the founders of 37signals suggest that you should not worry about the scalability of your application, because once you start making money, you can always buy a more powerful machine.  I believe that their point was that instead of worrying about a hypothetical scaling problem, you should get some customers and generate some revenue, after which you will have some money to throw at the problem.  I agree wholeheartedly with that point.  But more and more of us are in environments where we know that if we cannot support a large number of users, or a large data set, or provide fast response times, the project will fail.  And we sometimes realize that even if we buy the biggest server that Dell or HP makes, it isn’t going to be enough.

So what we need is a good cloud platform for developing distributed applications that can go faster when we add more hardware, but runs fine with a small amount of hardware.  It should be something that doesn’t require super-human distributed computing development skills, that lets the developer focus on his application rather than the plumbing, and that an administrator can configure for whatever scale the occasion demands (e.g. seasonal load spikes).

Got a solution like this?  I’d love to hear about it.  If not, I might have to go build it – feature requests welcome 😉

How Lieberman Got Amazon To Drop Wikileaks | TPMMuckraker

December 3, 2010

This story suggests that a senator was able to coerce Amazon Web Services into dropping a paying customer (Wikileaks), without any legal due process.

How Lieberman Got Amazon To Drop Wikileaks | TPMMuckraker.

Amazon disputes this, and claims that the decision was its own, because the customer violated the AWS terms of service.

From Amazon’s message, it seems like Wikileaks is in violation of the TOS:

It’s clear that WikiLeaks doesn’t own or otherwise control all the rights to this classified content. Further, it is not credible that the extraordinary volume of 250,000 classified documents that WikiLeaks is publishing could have been carefully redacted in such a way as to ensure that they weren’t putting innocent people in jeopardy. Human rights organizations have in fact written to WikiLeaks asking them to exercise caution and not release the names or identities of human rights defenders who might be persecuted by their governments.

The trouble is, Wikileaks has not actually been charged with or convicted of a crime, and the potential long-term effects of the leak are debatable, so these terms of service are fairly subjective.  The whole idea of Wikileaks is, of course, controversial.  The problem is that Amazon’s move feels like activism – like the Wikileaks service was suspended because Amazon doesn’t like them.

For those of us building a business based on the AWS Infrastructure as a Service model, this is very scary.  The one thing that we can count on with our own servers is that they don’t judge our content.  We don’t have to convince our disk drives that our cause is just, and our moral compass is intact.  Whether AWS responded to pressure from Joe Lieberman or acted on their own doesn’t really matter.  The message is that if AWS thinks you are up to something fishy, their policy is to drop your service first, and ask questions later.

When I am looking at a business plan, I worry about technology risk, market risk, and execution risk.  Normally, I would think that building a business on top of a cloud platform like AWS would dramatically reduce the execution risk, but now I have to worry about the risk that Amazon won’t like what I am doing.  Amazon has been leading the cloud computing charge, and has been saying every step of the way “don’t worry, you can trust us”.  I think banning Wikileaks was a giant step backwards in their credibility as a utility partner.


Network Effects

August 24, 2010

One of the easily overlooked features of cloud-based solutions is the potential for network effects.  Because a cloud (especially SaaS) vendor has some metadata about their customers, there is the potential to learn something interesting by analyzing that data.  Now, every superpower can be used for good or for evil, so let’s be clear about what I am proposing.  Evil usage is to try to extract personal information about your clients, and use it against them, or sell it to a random third-party without their permission.  Good usage is to use that data to make your service better for your customers.  The key to network effects is that the more users you have, the better the solution becomes for the users.  These benefits are potentially very valuable, and any solution that ignores them will probably be surpassed by a competitor that uses them well.

Here are some quick ideas on how to create network effects in your cloud-based solution.

  1. Best Practices: compare one user’s usage patterns to the average of all users, or of users in a similar class.  Financial ratios are a great example of this kind of thing, but your users probably have some industry / application specific metrics.  Everyone likes to know how they are doing compared to everyone else.
  2. Common Interests: based on usage or user selection, you can recommend people or things that a user might like, based on what other, similar users like.  Think about the “you might also like” feature in the Amazon book store.
  3. Busy / Free / Location: Which other users are available or nearby right now?  Obviously you want to use this in an appropriate way – not every solution should incorporate a “stalker” feature.  But if users want to communicate or meet with other users, you may be able to help with this.
  4. Viral Features: If users might want to pass on certain information, e.g. quotes, screen shots, photos, links, high-scores, etc., make it easy for them to do this.  You definitely want to make it easy for existing users to invite new users, and make it easy for the new users to get basic functionality.  (Remember how limited email was when half your friends didn’t have it?)
  5. Crowd sourcing: Let users tag or rate things to determine relevance (e.g. once 10% of your users tag something as spam, you can automatically block it for everyone else), or answer questions to build a knowledge base.  (A small sketch of this idea follows the list.)
  6. Social: Maybe your customers will be more interested in certain features or content if they know that other people are using them too – these could be their friends, or they could be influencers in the industry.  You can also help people discover hidden influencers, or hidden relationships.  LinkedIn does this for your second-level network.
  7. Feedback: this is only sort-of a network effect, but it gets ignored by almost every business.  Every customer assumes that if you have lots of other customers, you should know a lot about what those customers want, and how they use your product. Most companies have no idea – they are only slightly better informed than any individual user.  If you make it easy for a customer to provide feedback, then 1000 customers can easily translate into 1000 real data points about preferences.
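As a concrete example of item 5, here is a rough sketch of the “10% tag it as spam” rule. The threshold, the in-memory dictionary, and the function name are illustrative choices, not a recommendation – a real service would persist votes and tune the threshold – but the network effect is visible: each user’s tag makes the service a little better for everyone else.

```python
# Illustrative sketch: crowd-sourced spam blocking with a simple threshold.

from collections import defaultdict

SPAM_THRESHOLD = 0.10          # fraction of active users who must agree

spam_votes = defaultdict(set)  # item_id -> set of user_ids who tagged it

def tag_as_spam(item_id, user_id, active_users):
    """Record one user's spam tag; return True once the item should be blocked."""
    spam_votes[item_id].add(user_id)
    return len(spam_votes[item_id]) / active_users >= SPAM_THRESHOLD

# With 50 active users, the 5th distinct tag crosses the 10% line.
for user in (f"user{i}" for i in range(5)):
    blocked = tag_as_spam("item-42", user, active_users=50)
print("blocked:", blocked)     # True
```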

This is a place where first mover advantage can actually be defensible – if you have a bigger database of relevant recommendations than the new start-up competitor, they will have a hard time catching up.

Ten Cloud Computing Opportunities

August 23, 2010

Based on my recent work with Double-Take Software’s Cloud business, I guess I am now officially a cloud computing entrepreneur.  I am looking for my next project, and just decided to go open source with this.  Here are ten ideas off the top of my head – they definitely need some refinement, and some of them will probably not pan out, but feel free to steal them, offer improvements, or how about this – contact me to collaborate on one.

  1. Modify an existing open source cloud platform into a drop-in solution for hosting companies, with a global ecommerce site and management interface.  Hosting companies can get into cloud easily; customers get broad geographic coverage with a single interface.  (I know at least one company is already claiming this, but it is a big, big market).
  2. Take one (or more) existing open source web applications or frameworks (like MediaWiki, Django, Sugar CRM, etc.), optimize the deployment for a highly scalable distributed system (i.e. load-balanced front-end web farm, distributed / scalable db back-end, memcached, etc.), and make it available at a very low cost for entry-level users, with the price scaling up with usage.
  3. Acquire an enterprise-class (or academic usage) simulation / analysis solution, and modify it to use map-reduce.  Deliver the results of massive calculations in a few minutes (or seconds), and only bill for the usage.  Commoditize massive calculations at a price smaller users can afford.
  4. Build a management platform to transparently migrate virtualized workloads from a private cloud up to a larger public cloud provider, and back down.  (This can be expanded to cross-cloud, or cross-region migrations.)
  5. Create an encrypting, deduplicating network transport protocol and file system that minimizes the bandwidth and storage required to keep workloads synchronized between private and public clouds. (Useful for #4; a rough sketch of the deduplication piece follows the list.)
  6. Use one of the open-source cloud platforms to build an Amazon compatible cloud in places where Amazon doesn’t have a data center.  Amazon is expanding rapidly, but it is a big world, and most countries have regulations restricting businesses from hosting data outside the country.  Europe is an especially fertile market for this.
  7. Build a GUI macro editor with building blocks that include Amazon (or another, or all) cloud resources, ecommerce, and maybe some social and / or mobile features.  Let customers build new cloud-based web applications by drag & drop.
  8. Create a web app that lets a product manager or a sales person enter the typical problems their customers face, and the features of their product that solve those problems, and the ultimate benefit.  Let their customer walk through a wizard, checking the boxes to describe their needs, and automatically generate a beautifully designed, customized proposal, based on their requirements.  The whole thing is based on standardized templates, but it feels totally custom for both the vendor and the customers.
  9. Build a Linux-based file server appliance for SMB, with HSM and version archiving to the cloud.  It basically has storage that never ends, and the most current / relevant files are always local.
  10. Add virtual machine recovery and remote access capabilities to an existing laptop backup solution.  When your laptop blows up, you are happy to know that all your files are backed-up on-line.  Wouldn’t you be even more excited if you could boot up a virtual machine of your laptop on-line, and finish the task you were already late on?
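For idea #5, the core trick is well known: chunk the data, hash the chunks, and only move chunks the other side has not already seen. Here is a rough, illustrative sketch of just that deduplication step (fixed-size chunks, SHA-256, and an in-memory store are placeholder choices; encryption and a real wire protocol are left out).

```python
# Illustrative sketch: chunk-level deduplication to minimize bytes transferred.

import hashlib
import os

CHUNK_SIZE = 4096
store = {}   # chunks the target already has, keyed by content hash

def sync(data: bytes) -> int:
    """Return how many bytes actually had to be sent to the target."""
    sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # only previously unseen chunks cross the wire
            store[digest] = chunk
            sent += len(chunk)
    return sent

payload = os.urandom(20_000)
print(sync(payload))                 # first sync: every chunk is new
print(sync(payload + b"tail edit"))  # second sync: only the changed last chunk
```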

I’ll Take 26,280 Please

August 21, 2010

Being able to buy an entire IT infrastructure by the hour is a huge improvement in the decision cycle for IT operations.  It is hard to see this initially, but if you reverse the situation, it becomes obvious.  Imagine that you always bought a resource one unit at a time.  Now someone is proposing that you commit to a three-year supply all at once.  A three-year supply of hours is 26,280 (3 years × 365 days × 24 hours).  What else would you commit to purchasing more than 26 thousand of, in advance, with no return policy?

Being able to buy smaller chunks provides a couple of key opportunities:

  • Faster decision cycles: You need far fewer meetings, and far less planning to commit to just one hour of a data center.
  • Better responsiveness to changing needs: Your upgrade (and downgrade) window opens every hour.
  • ROI is much easier to capture: It is easy to keep 100% occupancy in a building where you can change the number of rooms every hour.
  • Access to smaller opportunities: Sometimes big opportunities only require small IT resources, but those IT resources may only come in large bundles, making the opportunity non-cost-effective. (Think about using 1000 computers to do a complex calculation in under an hour, but you only need them for that one hour – a quick back-of-the-envelope comparison follows this list.)
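Here is the back-of-the-envelope comparison promised in the last bullet. The prices are made-up placeholders, not quotes from any provider; the point is that 1000 machines for one hour costs the same 1000 machine-hours as one machine for 1000 hours, while buying the machines outright for a one-hour job is out of the question.

```python
# Illustrative arithmetic only; all prices are hypothetical.

rate_per_machine_hour = 0.10        # hypothetical $/machine-hour

burst  = 1000 * 1 * rate_per_machine_hour     # 1000 machines for 1 hour
serial = 1 * 1000 * rate_per_machine_hour     # 1 machine for 1000 hours
print(f"burst:  ${burst:.2f}")      # $100.00 -- answer today
print(f"serial: ${serial:.2f}")     # $100.00 -- answer in roughly six weeks

# Buying 1000 servers outright just to use them for one hour:
purchase_price_per_server = 2000    # hypothetical capital cost per server
print(f"owned:  ${1000 * purchase_price_per_server:,}")   # $2,000,000
```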

Buying 26,280 units in advance seems nuts, but this is exactly what we are doing when we buy a piece of IT equipment with a three-year useful life, rather than renting that same capability from a cloud provider.

DR Doesn’t Matter, Except When it Does

April 19, 2010

Information Technology has had a great impact on business productivity during my lifetime.  My father ran his business with a manual typewriter.  This was during the time when the IBM Selectric was popular, and personal computers were available, but not very good.  Having a manual typewriter at that time was quirky, but it wasn’t that out-of-place in a small business where they just needed something simple, cheap, and functional.

My children (his granddaughters) do their elementary school homework using desktop publishing software and a color printer.  They can definitely do a lot more than my dad could on his trusty old Smith-Corona.

During the first wave of personal computers in the workforce, lots of business people wondered if they would actually help the business be more productive, or if they would just be a distraction.  After all, my dad seemed just fine with his old typewriter.  Today, that question has been pretty well answered.  We all know that modern businesses depend on their computers, and that if we took them away, we would need twice as many people, and maybe even then couldn’t get the work done with the same speed or level of quality.  (Imagine modern phone bills being hand-typed!)

So we now think of IT as a potential source of competitive advantage.  Banks compete as much on their on-line banking system’s features as on the hours that their branches are open.  Better IT systems can cut cost, improve speed and quality of service, and make your business more attractive to customers.  But this isn’t true of all IT operations.  Some of them are necessary evils, like backup and disaster recovery (DR).

If you design your backup solution perfectly, or you build an elaborate DR plan, it does nothing.  It doesn’t get you any more customers, it doesn’t cut your cost, it doesn’t let you raise your prices.  From a business perspective, an investment in DR doesn’t matter.  Until it does.  If you have a significant failure, and need to recover systems and data, DR matters.  In fact, the general consensus is that large businesses that have a significant outage lose customers and reputation (i.e. future customers), and small businesses that are down for several days may never come back.

So the right way to think about DR is not by how much you will gain by doing it right, but by how much you will lose if you screw it up.  That’s right – if you get it right, you get nothing, but if you get it wrong, you potentially lose everything.  The question is – how much time, energy, and money do we want to invest in getting good at this subset of information technology?  Wouldn’t we rather put more focus on the areas that actually can create value for the business?  For areas where we cannot make a difference, we should think about outsourcing.

This is where cloud computing earns a lot of its value.  Cloud is not better because it can offer servers at a lower cost; cloud is better because cloud vendors have an incentive to be better, and cheaper, and more reliable, and more secure than their competitors.  They earn their living getting it right (at least the best ones do).  Non-IT businesses have to look at things like DR, where doing better provides no business improvement, but doing worse hurts the business, and ask if they can compete with companies that specialize in the IT basics – power, cooling, network, and racked / stacked CPU and storage.

If you don’t think you can do better than the cloud vendor, and you suspect that you could (on a bad day) do worse, shouldn’t you just find a vendor you can trust, and pay them to do the stuff that doesn’t matter?

Peak vs. Average

April 8, 2010

Maybe it is because of the business I am in, but I believe that cloud computing and disaster recovery are made for each other.  There are lots of reasons for this, but the main one comes down to peak vs. average utilization.

There are a couple of ways to build a disaster recovery (DR) solution, but to avoid long outages, you need another data center off-site, stocked with enough equipment to get your critical systems back on-line.  This is where the peak vs. average problem shows up.  You have to provision that data center for the peak load during recovery – that is, you need servers, storage, and networks big enough to run your production workload.  But you only get the benefit of average utilization – and for the DR center, average is mostly idle time.

As the graphic illustrates, most of the time, your DR data center is sitting idle, but it has to be ready to carry the production burden for all your critical systems at any time.  You have to pay for peak utilization, but you only get the benefit of average.

Cloud computing lets us break that model – you can have huge peak capacity standing by, but only pay for what you consume.  In a DR setting, you can pay for small amounts of utilization (e.g. just the data storage costs) most of the time, and then bring more resources on-line when you need them for recovery.
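To put rough numbers on the peak-vs-average argument, here is a small worked example. Every figure is a made-up placeholder – the point is only that a standby site is billed at peak capacity for all 8,760 hours in a year, while a pay-as-you-go DR setup pays peak rates only for the handful of hours it is actually recovering or testing.

```python
# Illustrative arithmetic only; all costs and hours are hypothetical.

hours_per_year = 365 * 24                     # 8,760

# Owning a standby data center: you pay for peak capacity all year.
standby_cost_per_hour = 50.0                  # hypothetical fully loaded $/hour
owned_dr = standby_cost_per_hour * hours_per_year

# Cloud DR: a small storage/replication fee all year, full compute only
# while you are actually testing or recovering.
storage_cost_per_hour = 2.0                   # hypothetical
recovery_compute_per_hour = 60.0              # hypothetical, a bit above standby
hours_in_recovery = 48                        # say, one test plus one real event
cloud_dr = (storage_cost_per_hour * hours_per_year
            + recovery_compute_per_hour * hours_in_recovery)

print(f"owned standby site per year: ${owned_dr:,.0f}")   # $438,000
print(f"pay-as-you-go DR per year:   ${cloud_dr:,.0f}")   # $20,400
```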

I will be doing a joint webinar with Double-Take Software and Amazon Web Services on April 21.  It is free, and you can register online.  We will talk about how to build a good DR solution using the cloud that provides exactly these kinds of benefits.