I've been spending a lot of time over the last few months thinking about the fact that good software hides complexity from its users until they are ready for it. I've talked about this in various contexts, but I've been missing something. Then it dawned on me the other day - what I'm describing is simply a flat learning curve.

Mostly I think about this from a developer's perspective. As a consumer of other libraries/toolkits/frameworks/server products, how usable is a particular thing? The answer is that if it's open source, it's probably terrible. Take a look at the graphic below.

The majority of open source software targeted at developers follows a kind of inverted-S learning curve. The four labelled stages can be described as:

  • 1: Struggling to understand the many concepts presented all at once when learning a new toolkit. Having been thrown into the thick of things without good documentation, you waste countless hours or days until suddenly you see how all the pieces fit together.
  • 2: The plateau of basic proficiency. Basic tasks can be accomplished without terrible productivity loss. Confidence increases.
  • 3: Attempts to use advanced features or tackle corner cases highlight inconsistencies, and lots of tiresome trial-and-error testing might eventually lead to a solution. This phase is often repeated many times!
  • 4: All the major quirks, deficiencies and workarounds have been learned. The practitioner is now reasonably productive.

This is, I believe, how the more experienced developers in the community live through the adoption of the average OSS library or framework (and some non-OSS software too). More junior developers may never make it out of stage one or two, instead looking to more experienced teammates for solutions.

I'm not going to cite examples; I think most of you are able to think of plenty without my help. In some cases it's the complexity of the domain that leads to a steep learning curve (e.g. O/R mapping), and in these cases even the most talented developers cannot make things as simple as we'd like. But in the majority of cases I believe it comes down to developers who don't understand usability (usability of their software for other developers, that is; most developers don't understand end-user interface usability either, but that's another story).

All sorts of things can lead to poor usability and a troubling learning curve. A few examples:

  • Multiple or overly verbose configuration files
  • Forcing the developer to learn additional languages (expression, query, template languages) just to get started
  • Having to implement workarounds for common use cases
  • Poorly designed and poorly-generalized APIs
  • Scant, bad, out-of-date or factually incorrect documentation
  • High dependency count

"Good" software uses a number of techniques so that both a seasoned veteran and a junior developer can be happy and productive. In many cases these amount to the inversion of the above examples. But by far and away the biggest gain comes from an incremental introduction to complexity. By constructing well designed interfaces, using sensible defaults and providing good quality documentation it is possible for developers new to a tool to become productive quickly, and continue to get more productive as they use the software. The end result is a flat learning curve (like the blue line in the graphic).

In conclusion: well-designed software can offer more features in a package that appears simpler to the consumer, and make said consumer many times more productive.