We hear a lot about building products which are “good enough” or “just barely good enough.” How do we know what “good enough” means for our customers? No one really tells us.

Different Perspectives of Good Enough
There are several important ways to think about a product being good enough – for this article, we will limit the context to “good enough to ship to customers” or “good enough to stop making it better (for now).” Determining good enough informs the decision to ship or not; otherwise this is all academic.
There are several perspectives on good enough which are important – but which don’t help product managers enough. The body of work focuses on aspects of goodness which don’t help product managers make the prioritization decisions that inform their roadmaps. These perspectives are necessary but not sufficient for product managers. Here are some pointers to some great stuff, before I dive into what I feel is a missing piece.
- Good enough doesn’t mean you can just do 80% of the coding work you know you need to do, and ship the product – allowing technical debt to pile up. Robert Lippert has an excellent article about this. Technical debt piles up in your code like donuts pile up on your waistline. This is important, although it only eventually affects product management as the code base becomes unwieldy and limits what the team can deliver – and increases the cost and time of delivery.
- Be pragmatic about perfectionism when delivering your product. Steve Ropa has an excellent article about this. As a fellow woodworker, his metaphors resonate with me. The key idea is, as a craftsman, to recognize when you’re adding cost and effort to improve the quality of your deliverable in ways your customer will never notice. This is important, and can affect product managers because increasing the cost of deliverables affects the bang-for-the-buck calculations, and therefore prioritization decisions.
With the current mind share enjoyed by Lean Startup and minimally viable products (MVP), there is far too much shallow analysis from people jumping on the bandwagon of good ideas without fully understanding them. Products fail because people misunderstand the phrase minimum viable product.
- Many people mis-define product in MVP to mean experiment. Max Dunn has an excellent article articulating how people conflate “running an experiment” with “shipping product” and has a good commentary on how there isn’t enough guidance on the distinction. This is important for product managers to understand. Learning from your customers is important – but it doesn’t mean you should ship half-baked products to your market in order to validate a hypothesis.
- MVP is an experimentation process, not a product development process. Ramli John makes this bold assertion in an excellent article. Here’s a slap in the face which may just solve the problem, if we can get everyone to read it. MVP / Lean Startup is a learning process fueled with hypothesis testing, following the scientific method. Instead of trying to shoehorn it into a product-creation process, simply don’t. Use the concept to drive learning, not roadmaps.
- “How much can we have right now?” is important to customers. Christina Wodtke has a particularly useful and excellent article on including customers in the development of your roadmap. “Now, next, or later” is an outstanding framework for simultaneously getting prioritization feedback and managing the expectations of customers (and other stakeholders) about delivery. My concern is that in terms of guidance to product managers, this is as good as it gets. Most people manage “what and when” but not “how effectively.”
There are three perspectives on how we approach defining good enough when making decisions about investment in our products. The first two articles by Robert and Steve (linked above) address when the team should stop coding in order to deliver the requested feature. There is also the valid question of whether a particular design – to which the developers are writing code – is good enough. I’ll defer the conversation about knowing when the design of how a particular capability will be delivered (as a set of features, interactions, etc.) is good enough to another time. [I’m 700 words into burying the lead so far.]
For product managers, the most important perspective is intent. What is it we are trying to enable our customers to do? Christina’s article (linked above) expertly addresses half of the problem of managing intent. Note that this isn’t a critique of her focus on “what and when.” We need to address the question “how capable for now?”

How Capable Must it Be for Now?
Finally. I wrote this article because everyone just waves their hand at this mystical concept of getting something out there for now and then making it better later. But no one provides us with any tools for articulating how to define “good enough.” Several years ago I wrote about delivering the not-yet-perfect product and satisficing your customers incrementally – but I didn’t provide any tools to help define good enough from an intent perspective.
Once we identify a particular capability to be included in a release (or iteration), we have to define how capable the capability needs to be. Here’s an example of what I’m trying to describe:
- We’ve decided that enabling our target customer to “reduce inventory levels” is the right investment to make during this release.
- How much of a reduction in inventory levels is the right amount to target?
That’s the question. What is good enough?
Our customer owns the definition of good enough. And Kano analysis gives us a framework for talking about it. When looking at a more-is-better capability, from the perspective of our customers, increases in the capability of the capability (for non-native English speakers, “increasing the effectiveness of the feature” means substantially the same thing) increase the value to them.
We can deliver a product with a level of capability anywhere along this curve. The question is – at what level is it “good enough?”
Once we reach the point of delivering something which is “good enough,” additional investments to improve that particular capability are questionable – at least from the perspective of our customers.

Amplifying the Problem
Switch gears for a second and recall the most recent estimation and negotiation exercise you went through with your development team. For many capabilities, making it “better” or “more” or “faster” also makes it more expensive. “Getting search results in 2 seconds costs X, getting results in 1 second costs 10X.”
As we increase the capability of our product, we simultaneously provide smaller benefit to our customers at increasingly higher cost. This sounds like a problem on a microeconomics final exam. A profit-maximizing point must exist somewhere.

An Example
Savings from driving a more fuel-efficient car are a good example of diminishing returns. Apologies to people using other measures and currencies. The chart below shows the daily operating cost of a vehicle based on some representative values for drivers in the USA.
Each doubling of fuel efficiency sounds like a fantastic improvement in a car. 80 MPG is impressively “better” than 40 MPG from an inside-out perspective. Imagine the engineering which went into improving (or re-inventing) the technology to double the fuel efficiency. All of that investment saves the average driver $1 per day – less than $2,000 over the average length of car ownership in the USA.
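The arithmetic behind that roughly-$1-per-day figure can be sketched with a few representative numbers. The daily mileage, fuel price, and ownership length below are illustrative assumptions chosen to land near the article’s rough figures, not data taken from the chart:

```python
MILES_PER_DAY = 30.0     # assumed average daily driving (USA)
PRICE_PER_GALLON = 2.70  # assumed fuel price, USD
OWNERSHIP_YEARS = 5      # assumed average length of car ownership

def daily_fuel_cost(mpg: float) -> float:
    """Daily fuel cost for a car with the given fuel efficiency."""
    return MILES_PER_DAY / mpg * PRICE_PER_GALLON

# Doubling efficiency from 40 MPG to 80 MPG halves an already-small fuel bill.
savings_per_day = daily_fuel_cost(40) - daily_fuel_cost(80)
ownership_savings = savings_per_day * 365 * OWNERSHIP_YEARS

print(f"Savings per day: ${savings_per_day:.2f}")            # about $1/day
print(f"Savings over ownership: ${ownership_savings:,.0f}")  # under $2,000
```

Notice that the second doubling of efficiency saves only half as much as the first one did – the benefit curve flattens even as the engineering gets harder.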
How much will a consumer pay to save that $2,000? How much should the car maker invest to double fuel efficiency, based on how much they can potentially increase sales and/or prices? An enterprise software rule of thumb would suggest the manufacturer could raise prices between $200 and $300. If the vendor’s development budget were 20% of revenue, they would be able to spend $40 – $60 (per anticipated car sold) to fund the dramatic improvement in capability.

One Step Closer
What good enough means, precisely, for your customer, for a particular capability of a particular product, given your product strategy is unique. There is no one-size-fits-all answer.
There is also no unifying equation which applies for everyone. Even after you build a model which represents the diminishing returns to your customers of incremental improvement, you have to put it in context. What does a given level of improvement cost, for your team, working with your tech stack? How does improvement impact your competitive position – both with respect to this capability and overall?
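As one illustration of what such a model might look like: the sketch below assumes a logarithmic (diminishing-returns) customer-value curve and a super-linearly rising delivery cost, then finds the capability level where net value peaks. Both curves and all constants are hypothetical placeholders – your real curves come from customer development and your team’s estimates:

```python
import math

def customer_value(level: float) -> float:
    """Assumed diminishing-returns value curve (Kano 'more is better')."""
    return 100 * math.log1p(level)

def delivery_cost(level: float) -> float:
    """Assumed cost curve: each increment of capability costs more than the last."""
    return 2 * level ** 1.5

# Net value at each candidate capability level; "good enough" is where it peaks.
net_value = {lvl: customer_value(lvl) - delivery_cost(lvl) for lvl in range(1, 101)}
best_level = max(net_value, key=net_value.get)

print(f"Profit-maximizing capability level: {best_level}")
print(f"Net value at that level: {net_value[best_level]:.1f}")
```

Beyond the peak, every additional unit of capability costs more than the value it returns – which is exactly the “questionable additional investment” described above.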
You have to do the customer-development work and build your understanding of how your markets behave and what they need.
At least with the application of Kano analysis, you have a framework for making informed decisions about how much you ultimately need to do, and how much you need to do right now. As a bonus, you have a clear vehicle for communicating decisions (and gaining consensus) within your organization.
Your product roadmap is a view of what you are building right now, in the near future, and in the more distant future. Or is your roadmap a view of why you are building whatever you’re building right now, in the near future, and in the more distant future?
Your roadmap is both – but one is more important than the other – and product managers need to be able to view the roadmap both ways.
When you view the spinning cat animation(1) above, you will either see it as rotating clockwise or counter-clockwise. Everyone has a default. Because of bi-stable perception, the direction of rotation may reverse for you, but reversing direction is apparently more difficult for some people than others.
Depending on your role in your organization you will be biased towards viewing your roadmap either as a view of what the team will be building into your product, or a view of why the team will be building things into your product. As a product manager, it is imperative that you can view it both ways. I also hope to make the case that one view is more important than the other.

What the Team Will Be Building
Janna Bastow wrote an excellent article about the importance of flexibility in roadmaps. The article provides a really good explanation of how product management is not project management, and the evils of having a roadmap which is nothing more than a “glorified Gantt chart.”
She provides a great visual depiction of rolling-wave planning.
[larger image available in Janna’s article]
I describe rolling-wave planning as being precise about the immediate-term future, having less specificity about the near future, and being nonspecific (but still helpful) about the more distant future. This is because – from the perspective of someone focusing on what they will be doing – there is too much uncertainty about what will happen between now and the future. [BTW, this is where waterfall is wasteful – it incorrectly assumes an ability to predict the future in detail.]
The article provides very good explanations of the need for flexibility, both in terms of time and scope within a roadmap: shifting the delivery of something to an earlier or later time frame, or alternately thinking about a particular time frame as including more or less deliverable. This is, however, only a “good explanation” when you’re thinking from the point of view of what the team will be building.
The key attributes of a product roadmap from this view are that descriptions of what to do are precise right now, less specific in the near term, and very flexible in the future. While I completely agree – given a focus on what is being built – I don’t think about product roadmaps in this way. I believe my thinking has shifted because I’m primarily creating roadmaps, not consuming them.

Intentionality
A concept which may help you shift to thinking about a product roadmap in terms of why the team will be building is intentionality. There is a reason why you’re building something.
Your team is building features because they are trying to build solutions. More precisely, your team is building a product with a set of capabilities, which you hope your customers will choose (and use) to help with solving their problems.

Customers are trying to solve problems; they aren’t trying to use features.
As a product manager, your perspective needs to be rooted in the perspective of the problems your customers are trying to solve – the intent driving your roadmap – not the things your team is building in order to solve the problem.

Why the Team Will Be Building
Where the view of what to build gets fuzzier as you move from the present to the future, the view of why you build gets clearer as you move from the present to the future. This is the exact opposite of rolling-wave planning, and that’s not only fine, it is good.
Choosing a particular group of customers (or market) to serve is a strategic decision. Generally this will not change, and when it does it will not be frequent. This is a long term view – the future for the team will involve building things to support this group of customers. There is great and powerful clarity here – specifically about the future. “We are, and will be, building to support customer group X.”
Given the decision to provide value for a specific group of customers, the logical next question is “how?” To avoid ambiguity, the question is really “which of the problems our target customer faces are we going to help them solve?” The charter for a product team can be described as “Help [a specific set of] customers solve [a specific set of] problems.”
In the near-term, there is flexibility in the choice of which problem to address next. Having that flexibility is imperative, because discovery (from feedback from customers) tells us if we’ve got the right problems, in the right sequence. So we need to be able to re-prioritize, and add or remove from the list of problems as we manage our roadmaps thematically.
In the “right now” the team is testing hypotheses. Given a customer, and a problem the customer is trying to solve, there is a hypothesis about how best to help. That hypothesis is a design. The design may or may not be a good one. The implementation of the design may be great, adequate, failing, or incomplete. In this time horizon, the activities of the team building the product are concrete – build, test, learn.
These activities are much hazier from the perspective of why the team is building something. Technically, there is still clarity about the customer and the problem (they aren’t changing). However, there’s additional explanation required – a description of the hypothesis – to explain why a particular activity is being managed for the current sprint or release. As an example:
“We found that forcing users to make decision (X) when solving problem (Y) caused those users to abandon our product. Based on interviews we’re defaulting decision (X), and we believe this will reduce abandonment by (Z%).”
The need for explanation about the specific “what” is how I interpret the reduced clarity about the near term, when focusing on why versus what.
I selected the spinning cat image because both points of view are valid. From one point of view, all of the clarity occurs in the immediate term, and from the other point of view, all of the clarity manifests in the longer-term big picture. Most people first see the cat spinning in the same direction, and most of the time product managers should see their roadmap “customer first.”
A product manager has to be able to switch between both views – and communicate in either framework – depending on the context of what they are doing in the moment. Building the roadmap is working backwards from the customer to the feature. Helping the team execute, and appreciate the context and relevance of what they are working on, involves going from the current deliverables up to the intent for building them.

Frequency of Change from Learning
Another reason I work “outside in” in my thinking about roadmaps is the frequency of change which comes from what we learn as we engage with our market, study our competitors, and adapt to industry changes.
Significant and sustained* feedback is needed to tell us we have chosen the wrong customers. How we choose customers is a topic for another article. When I hear about companies pivoting, I think of it as picking different customers.
We start out engaging customers with a hypothesis that they will pay us to help them address a particular set of problems. We start this process with a point of view of the relative priority of solving each problem. Feedback from customers – throughout the product creation process – helps us improve this list of problems. All within the stable context of helping a particular customer group solve their problems. Processes like the structured conversation from Discover to Deliver include these activities as part of the overall product creation process.
While working with customers, we find out what it means to satisfice. We also develop and refine our understanding of the nature of the problems using tools like Kano analysis. We also have the opportunity to discover when a particular design does not achieve the goal of helping the customer, or when a poor implementation fails to achieve the vision of the designer. Sometimes, when stories are split to fit within a sprint, the smaller stories don’t actually solve the problem, and additional splits need to be completed before moving forward.
*This assumes we did not make colossally bad choices to begin with – opening us up to the debate between rationalism and empiricism. Bad choices may be discovered immediately.

Communication and Conclusion
Executives (should) want to talk about intent and strategy for investment in the product. Base your conversations on this. If they need status report type updates about the activities of the team, then talk about features. But try and shift the conversation back to corporate goals, strategy, and the role your product is intended to fill.
Other interested stakeholders will almost always ask about features. This is perfectly understandable – they are not product managers, and they are focused on the wrong things. They are focused on problem manifestations, not problems. Help them refocus while you address their concerns.
Product managers should drive roadmaps based on “why” not “what.” We still need to be able to think in the opposite direction, but that approach should be secondary.

Attributions
(1) Thanks Pech-Misfortune for the original animated image.
“Agile” is something most teams do wrong*, without realizing they’re doing it wrong. A good 2×2 matrix acts as a lens, helping to convert information into insight. Let’s apply this lens to agile as applied within a company, and see if it helps people decide to do things differently.

When You Say Agile, What Do You Mean?
There may be as many definitions of agile as there are teams practicing agile development. Generally, people are talking about iterating in what they do. Instead of a long, throw-it-over-the-wall process out of which a deliverable eventually emerges, a team will have a series of shorter iterations where they engage stakeholders and otherwise rethink what they are doing, to course-correct and “get smarter.” The Wikipedia page on agile is pretty comprehensive.
Most teams think about agility in terms of how their development teams manage their process. When “going agile” is the only thing you do, your product does not magically become more successful. Some teams** think about what it means to be agile when determining what the development team should be doing in the first place. My epiphany was in realizing that these are two separate decisions an organization can make.

A 2×2 Matrix of Agile
When an organization can make two discrete decisions about being agile in how they create products, it results in four possible outcomes. A 2×2 matrix can act as a powerful lens for exploring these decisions. Our first step is to define our two axes.
Requirements – how are they treated within the organization / by the team?
- Requirements and expectations are immutable – this is the typical expectation within a large bureaucracy; someone built a business case, got funding, and allocated a team to deliver the product as-defined.
- Requirements continually revisited – this is what we see nimble teams doing – at different levels of granularity, context, and relevance; at a low level, this is A|B testing and at a high level this is a pivot.
Development process cadence – how frequently does the team deliver***?
- Infrequent delivery – there is no one size fits all measure to define infrequent vs. frequent; some companies will have fast-moving competitors, customers with rapidly evolving expectations, and significant influence from evolving technology – others will not (for now).
- Frequent delivery – the precise delineation from infrequent to frequent delivery is contextually dependent.
With these two axes, we can draw a matrix.
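Since the drawn matrix can’t be reproduced here, a minimal sketch of the lens as a lookup table may help – the boolean axes and the function are purely illustrative, and the box names are the working labels used below:

```python
# Two yes/no axis decisions map a team into one of four boxes.
QUADRANTS = {
    # (requirements_revisited, frequent_delivery): box name
    (False, False): "Waterfall as Practiced",
    (False, True): "BUFD & BUFR",
    (True, False): "Req's Churn or Glacial Dev",
    (True, True): "Agile as Intended",
}

def classify_team(requirements_revisited: bool, frequent_delivery: bool) -> str:
    """Place a team in one of the four boxes of the 2x2 lens."""
    return QUADRANTS[(requirements_revisited, frequent_delivery)]

print(classify_team(True, True))   # Agile as Intended
print(classify_team(False, True))  # BUFD & BUFR
```

The point of the lookup is that the two decisions are independent: changing your delivery cadence moves you horizontally, but only changing how you treat requirements moves you vertically.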
A subordinate message that I couldn’t resist putting into the matrix is that it is harder to do your job in an agile way. You could pedantically argue that agile is easier – that it is easier to deliver equivalent results when your process is agile. And that’s true. But the point is to deliver a more successful product, which is harder than delivering a less successful product; an agile approach makes that harder task easier. Another way to think about it: if your team is not capable of delivering a good product, going agile will make that more obvious, faster.

Living in Boxes
Everyone can map their team into one of the four boxes. That’s the power of this sort of abstraction.
Here’s where I can use your help: What are better names for these boxes? I have satisficed with these names, but they could be better. Please comment below with proposed alternatives, because I’ll be incorporating this lens into other aspects of my work, and I want it to be better than it currently is.
Waterfall as Practiced
While there are some teams which consciously choose agile because of the planning benefits or perceived risks to quality, I believe that most waterfall teams are still waterfall either because they haven’t chosen to revisit their process choices, or they tried and failed. Perhaps their instructors weren’t good, perhaps the team was not equipped to make the shift. My guess is that their organizations were unwilling or unable to support any change in the bureaucratic status quo, effectively making it impossible for the teams to succeed.
BUFD & BUFR (Buffed and Buffer)
BUFR is an acronym for big up-front requirements, and BUFD is the equivalent for big up-front design. Both are labels assigned during the decade-old war between interaction design and extreme programming. Conceptually, the battle is between rationalists and empiricists. In a nutshell, the requirements are Defined (with a capital D), then the team applies an agile development methodology (mostly) to incrementally build the product according to those requirements.
This is another area we can explore more – what are requirements, what is design, who owns what? My main point is that the developers, while going through the agile motions – even when getting feedback – are only realizing some of the benefits of agile. Yes, they can improve the effectiveness of their particular design or implementation at solving the intended problem. Yes, they can avoid the death march.
The problem is that the requirements are, metaphorically, set in stone.
At the end of the day, the team is empowered to rapidly iterate on, and change how they choose to solve the target (market) problems. The team is not empowered to rapidly change their minds about which market problems to target.
When agile is being introduced to a larger organization as a grass-roots initiative starting with a development team, this is the corner the team will find themselves in.
Req’s Churn or Glacial Dev
I struggle for the right way to describe the situation where the people responsible for determining the requirements are getting market feedback and changing their requirements, while the people responsible for creating the product are unwilling or unable to accept changes to the initial plan.
From the development team’s point of view, “the product manager can’t make up his mind – we are just churning, without getting anything done!”
From the product manager’s point of view, “the development team is too slow, or intransigent, and can’t seem to keep up.”
There’s only one environment where this approach is somewhat rational – outsourced development with limited trust. When the relationship between the product management / design team, and the product creation / test team is defined by the contract, or the two teams do not trust each other, the only reasonable way to make things work is to establish explicit expectations up front, and then deliver to those specifications. Note that the specifications typically include a change-management process, which facilitates reaching an agreement to change the plan. The right way to make this type of relationship work is to change it, but if you’re stuck with it – this is your box.
Agile as Intended
Ah, the magic fourth box. Where rapid delivery leads to rapid learning which leads to rapid changes in the plan. The success of agile is predicated on the assumption that as we get feedback from the market, we get smarter; as we get smarter, we make better choices about what to do next.
This is what enables a sustainable competitive advantage – you can sustainably differentiate your product from the competition and rapidly adapt to changing customer expectations and market conditions. Effectively, you are empowered to change what you choose to do, as well as how you choose to do it. This is what agile product management is about – enabling business agility.
A winning strategy involves selecting an attractive market, developing a strategy for how you will compete within that market, then developing a product (or portfolio) roadmap which manifests the strategy, while embodying the vision of the company. It is possible to do this in any corner of the matrix (except the upper left, in my opinion). The less willing you are to rely on your ability to predict the future accurately, the more you will want to be in the upper right corner.

Conclusion
There isn’t a particularly strong argument against operating your team in the upper right-hand corner of the matrix, Agile as Intended. The best argument is really just “we aren’t there yet.” From conversations I’ve had with many team leaders, it seems they thought that getting to the lower right corner was the right definition of “done.” They thought they were “doing agile” and that there wasn’t anything left to change, organizationally. And they wondered why their teams weren’t delivering on the promise of agile. It’s because they weren’t there yet.
Hopefully this visual will help drive the conversation forward for some of you out there. Let me know if it helps light bulbs go off.

Attributions and Clarifications
*Agile isn’t really a noun – something you do; agile is an adverb, describing how you do something. English is a funny language, and “doing agile” is generally used to mean developing a product in an agile manner. Sometimes it is important to point this out – particularly when you’re trying to help people focus on the product and not the process (like here) – but for this article, I didn’t want to dilute the other messages. As a bonus, the people who would be tweaked by the use of agile as a noun are generally people who “get it,” and I like the idea that they read the whole article just to see this caveat. Thanks for reading this :).
**This is based on anecdata (thanks Prabhakar for the great word), but my impression is that small companies commonly do this – think Lean Start Up – and large companies rarely do this. I suspect this is more about the challenge of managing expectations and otherwise navigating a bureaucracy built to reward execution against a predetermined plan.
***Definitions of “deliver” make for a great devil-is-in-the-details discussion too – do you deliver to end-customers or internal stakeholders? What if your existing customers refuse to update every month? How many versions of your product do you want in the field? Another great topic – but not the focus of this article.