Spaghetti Sauce and Software Requirements


Malcolm Gladwell, author of The Tipping Point, Blink, and Outliers, speaking at a 2004 TED conference, offered a story about one of his favorite Americans. Howard Moskowitz is an American market researcher and psychophysicist, which according to Wikipedia is “the scientific study of the relation between stimulus and sensation.” Moskowitz has worked for large conglomerates such as PepsiCo, McDonald’s, Allstate, and Kraft and is principally known for creating the insane number of product variations you see at your grocery store.

Moskowitz determined that people often don’t know the range of options they would like to buy until someone presents them with the choice.
Moskowitz was retained by Campbell Soup Company to help grow their spaghetti sauce brand, Prego. With a team of researchers in tow, he tested public reaction to forty-five different varieties of sauce. When analyzing the data, the researchers didn’t just look for the combination with the highest ratings. They believed people craved variety even if they didn’t know it, so they grouped the data to look for clusters, patterns. When reporting back to the executives, he found people like three kinds of sauce: plain, spicy, and extra chunky.
For years, the approach to gathering product requirements and consumer tastes was direct: researchers would go into focus groups and say, “What do you want in a spaghetti sauce? Please tell us.” Not once did anyone say “extra chunky,” even though in their hearts they craved it. Moskowitz revealed there was no “single sauce for the market. There were sauces.”
How does a product owner manage in that environment? Reading the story, it’s easy for us to see the solution: “Of course we want choices. The product development team should incorporate that into their development process.” Admittedly, this is now rather commonplace. At the time, however, Campbell’s executives were astonished to hear that over a third of all Americans craved chunky spaghetti sauce and no one was providing it. When that market need was met, over $600 million in revenue was added to the Campbell’s line.

So what does this have to do with software requirements? It is a story that illustrates the folly of thinking we can always know what our client will want, specify it, and then build and deliver it. Even in an industry as mature as consumer goods and spaghetti sauce, we see that when your product is attempting to change the category, you need experimentation in your development cycle.
MIT professor and author of Predictably Irrational, Dan Ariely, reminds us we need something to compare against to settle our preferences.

humans rarely choose things in absolute terms. We don’t have an internal value meter that tells us how much things are worth. Rather, we focus on the relative advantage of one thing over another, and estimate value accordingly. (For instance, we don’t know how much a six-cylinder car is worth, but we can assume it’s more expensive than the four-cylinder model.)

Until end-users and stakeholders have something to compare against, they can be indecisive, slowing down projects. This explains the challenge with just getting people in a room and asking them to describe what they want. They probably don’t really know, even when they tell you.
Typically, when we are unsure what we are building, we stress more documentation and more reviews. We’ve been taught that correcting a requirements error early in the life cycle saves costs down the line. We work to reduce our discomfort with the uncertainty by forcing someone else to make a decision so we can narrow in on a solution. We insist on a requirements freeze and sometimes limit participation because “too many cooks” spoil the broth, or sauce as it were. Unfortunately, we are ignoring the sometimes less-than-rational way people make decisions. All the process waving and feet stomping in the world won’t change the reality of the market; meanwhile, your competitor is giving your customer chunky spaghetti sauce.

Attempting to be too deterministic in specifying our project requirements usually stifles creativity. Roger Sessions of ObjectWatch, Inc. uses the terms directed and non-directed methodologies. He equates directed methodologies to the childhood game “Hot-or-Cold,” where there is truly only one possible solution, whereas poker is a non-directed methodology with multiple ways to win. Just because your client specifies the need for a straight flush doesn’t mean he won’t be happy with two pair if he wins the pot.

The practices within the application life cycle that best provide some allowance for this behavior include:

Simulation – Using rough-cut, working-software simulation tools such as iRise to give clients multiple usage scenarios to evaluate.
Integrated product design – Nothing works as well as having the entire product development team deeply involved from product concept to launch, collaborating in real time and seeing the implications of their decisions and indecision.
Iterative development cycles – Knowing the limits of prediction, rapid feedback from working software allows for course correction in architecture and product direction before too much is sunk.
Component architectures – Understanding where the range of diversity in our users’ tastes lies, then building that flexibility in.
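As a rough sketch of the component idea, consider each taste cluster (plain, spicy, extra chunky) as a pluggable variant behind one stable interface. All class and function names here are hypothetical illustrations, not any particular product's API; the point is that new variants can be added without touching the code that consumes them.

```python
# Minimal sketch of a component architecture: each "taste cluster"
# is a variant class behind one stable interface, registered so that
# new variants ship without changes to client code.

class SauceVariant:
    """Stable interface the rest of the product codes against."""
    name = "base"

    def describe(self) -> str:
        raise NotImplementedError


class Plain(SauceVariant):
    name = "plain"
    def describe(self) -> str:
        return "smooth, traditional"


class Spicy(SauceVariant):
    name = "spicy"
    def describe(self) -> str:
        return "smooth, with heat"


class ExtraChunky(SauceVariant):
    name = "extra-chunky"
    def describe(self) -> str:
        return "visible vegetable pieces"


# The registry is where the flexibility is designed in: adding a
# fourth sauce means adding one class and one registry entry.
REGISTRY = {cls.name: cls for cls in (Plain, Spicy, ExtraChunky)}

def make_variant(name: str) -> SauceVariant:
    return REGISTRY[name]()

print(make_variant("extra-chunky").describe())  # visible vegetable pieces
```

The same shape applies whether the "variants" are sauces, pricing tiers, or UI themes: callers depend only on the interface, so discovering a new cluster in your user base becomes an addition rather than a rewrite.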

Gladwell’s story also reminds us that solutions are a continuum. A variety of different end products and combinations will make our clients happy. Boiling everything down to the lowest common denominator ignores individual and organizational diversity. It may seem logical to get the market to standardize for the greatest common use, but if that’s how we made decisions we’d all be driving Model Ts. Thank you, Mr. Moskowitz, for recognizing the need for spicy sauce. Now go expand your customers’ range of options. Go find your secret sauces.


5 comments on “Spaghetti Sauce and Software Requirements”

Hi Clay. This is a very interesting article. I agree that users often don’t know what they really want until they are given a set of solutions to choose from. The ability to draw out the true needs (vs. wants), by presenting them with realistic solution alternatives, is a talent of business analysts that is often ignored in mainstream agile methods such as Scrum. Many business analysts are rightfully concerned that these methods do not explicitly recognize the valuable role that a good analyst fulfills, and the elicitation and negotiation skills that they bring to a project. In DAD, we remind organizations that the requirement for these skills doesn’t go away on agile projects. In our upcoming book, Scott and I devote a lot of material to discussing the roles and cross-functional skills required on most agile projects, as part of a “people-first” mentality.


Hi Scott,
While I like the overall message of this post a lot, there is at least one flaw in it, and it also triggers a thought that is new to me.
We try to catch requirements errors with requirements reviews, or inspections. Errors are things like a contradiction, or a missing description of a special case. I believe we should carry on doing that. Reviews are meant to judge the quality of the requirements, not to make decisions about whether they address any market needs.
But far more interesting, IMHO, is the following: if the market likes different sauces, isn’t it then a good idea to provide a software product which has many options? A dozen ways to do a task? Extensive menus to choose from, various assistants, a command-line interface, and a web service, all within the same software? This is somehow counterintuitive, as we like to keep things simple, since simplicity promises less effort needed to maintain.
I’m wondering!
Thanks for the post and the website. I love reading it and had many aha moments. Please keep up the excellent work.
Cheers, .r


Reviews are one way to identify requirements problems, but there are better ways such as active stakeholder participation in requirements modelling and holding regular demos of working software.

I prefer a usage driven approach to requirements. Work through how end users actually want to work with your solution. If that results in several ways to do the same sort of thing then so be it (assuming that it makes sense to invest in implementing the various ways).


Hi Scott,
we’re on the same page, I guess. I sense a different understanding of the purpose of “requirements review” as a process/method. I like stakeholder participation, demos, and usage-driven approaches. I should have been clearer in my first response. Reviews, like inspections, in my world are ways to measure the quality of something, like “is it clear enough to test,” or “can stakeholders agree on one meaning of this.” I wasn’t thinking of Scrum’s Sprint Reviews in particular. But this is just semantics.

From what I see, giving the user various options to achieve the same goal can be done nicely when someone plans for that kind of flexibility up front. It can turn into a nightmare if it “just happens” over time, as that scenario adds undue amounts of complexity. However, this is true for almost all requirements 🙂
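One way to read “planning for that kind of flexibility” is to keep a single core operation and expose each user-facing option (CLI, web service, menu) as a thin adapter over it, so options multiply at the edges while the core stays simple. A minimal hypothetical sketch, with invented names standing in for a real product:

```python
# Hypothetical sketch: one core operation, several thin front ends.
# The flexibility lives in the adapters; the core stays single and
# simple, which keeps the maintenance cost contained.

def resize_image(width: int, height: int) -> str:
    """The single core operation everything else wraps."""
    return f"resized to {width}x{height}"

# Adapter 1: command-line style arguments, e.g. ["--width", "640", ...]
def cli_adapter(argv: list[str]) -> str:
    opts = dict(zip(argv[::2], argv[1::2]))
    return resize_image(int(opts["--width"]), int(opts["--height"]))

# Adapter 2: web-service style payload (a dict standing in for JSON)
def web_adapter(payload: dict) -> str:
    return resize_image(payload["width"], payload["height"])

# Both routes reach the same core: two ways to do one task,
# without two implementations of the task.
assert cli_adapter(["--width", "640", "--height", "480"]) == web_adapter(
    {"width": 640, "height": 480}
)
```

The “nightmare” scenario is the inverse shape: each front end grows its own copy of the logic, and the dozen ways to do a task become a dozen things to maintain.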


Can’t believe it’s been nearly 5 years since this post… The intent at the time was to combat the notion that you could achieve a perfect requirements set before you demonstrated some functionality through simulation, prototyping, or, these days, even an MVP. I don’t see it attempted as often, but I spend less time in that space these days.

I also agree that a takeaway from this idea is that configurable software is likely our best “solution” to this behavior. Microservice architectures will hopefully continue to give the consumer “after market” choices that fit their unique workflow.

The world is changing too quickly and the answer isn’t to slow down change, but find ways to embrace it.

