80-20 rule applied to testing

Everybody has heard of the 80-20 rule, which says that 80% of the results come from 20% of the causes.

The rule can be applied to almost any field:

- 80% of a company's revenue comes from 20% of its clients

- 80% of a charity's donations come from 20% of its donors

- 80% of a bookstore's books are purchased by 20% of its customers
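
To see what those numbers mean in practice, here is a tiny sketch in Python; the revenue figures are made up purely for illustration:

```python
# Hypothetical revenue per client, shaped to follow the 80-20 rule.
revenues = [50000, 30000, 8000, 4000, 2500, 2000, 1500, 1000, 600, 400]

total = sum(revenues)
running, clients = 0, 0

# Count how many of the biggest clients it takes to reach 80% of revenue.
for r in sorted(revenues, reverse=True):
    running += r
    clients += 1
    if running >= 0.8 * total:
        break

print(f"{clients} of {len(revenues)} clients ({clients / len(revenues):.0%}) "
      f"bring in 80% of the revenue")
```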

For software, this could mean that:

- 80% of the clients use 20% of the functionality

- 80% of the bugs are caused by 20% of the functionality

I used to think this is simply how things are, as the rule is very attractive in its common sense and simplicity.

The problem is that once you investigate a little, things become more complicated.

Joel Spolsky has the following opinion on the topic in this article:

> A lot of software developers are seduced by the old "80/20" rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.
>
> Unfortunately, it's never the same 20%. Everybody uses a different set of features. In the last 10 years I have probably heard of dozens of companies who, determined not to learn from each other, tried to release "lite" word processors that only implement 20% of the features.
>
> This story is as old as the PC. Most of the time, what happens is that they give their program to a journalist to review, and the journalist reviews it by writing their review using the new word processor, and then the journalist tries to find the "word count" feature which they need because most journalists have precise word count requirements, and it's not there, because it's in the "80% that nobody uses," and the journalist ends up writing a story that attempts to claim simultaneously that lite programs are good, bloat is bad, and I can't use this damn thing 'cause it won't count my words. If I had a dollar for every time this has happened I would be very happy.
>
> When you start marketing your "lite" product, and you tell people, "hey, it's lite, only 1MB," they tend to be very happy, then they ask you if it has their crucial feature, and it doesn't, so they don't buy your product.
>
> Bottom line: if your strategy is "80/20", you're going to have trouble selling software. That's just reality. This strategy is as old as the software industry itself and it just doesn't pay; what's surprising is how many executives at fast companies think that it's going to work.

How does this apply to testing?

Well, the project release date is fixed, so you cannot test everything thoroughly.

So, test only 20% of the application, since that is what the majority of users will use.

Select the 20% of the application's functionalities that have the highest risk and test them well.

Test the remaining 80% of the functionalities by covering only the happy paths.
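
As a rough sketch of what that risk-based split might look like, here is one way to rank features by risk and draw the 20% line; the feature names and risk scores below are hypothetical:

```python
# Hypothetical features with risk scores (e.g. business impact x failure likelihood).
features = {
    "checkout": 9,
    "login": 8,
    "search": 7,
    "reporting": 4,
    "user_profile": 3,
    "export": 2,
    "notifications": 2,
    "help_pages": 1,
    "themes": 1,
    "about_page": 1,
}

# Rank features from highest to lowest risk.
ranked = sorted(features, key=features.get, reverse=True)

# Deep-test the riskiest 20%; cover the rest with happy paths only.
cutoff = max(1, len(ranked) // 5)
print("Test thoroughly:", ranked[:cutoff])
print("Happy paths only:", ranked[cutoff:])
```

This is, of course, exactly the plan that falls apart in the scenario below.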

You think you did a good job, and the project manager is happy with the results.

And after the release, the support team receives a flood of client issues in the 80% of the application that was not tested well.

What's more, the company's senior management starts noticing problems all over the application too.

The solution is, of course, an endless stream of patches with bug fixes for the issues discovered by customers, frustrating the customers as much as possible and wasting as much time as possible for both the development and testing teams.

How familiar is this scenario?
