Ponder this scenario…
Suppose you have an Agile team building applications based on data APIs that are managed by a services team. Suppose that you cannot change the structure of the organization. If changes are needed to get a different data field from the API in support of a feature in your app, what do you do?
The answer is a cross-team story. A cross-team story really has two parts: the validation piece of the story stays with the team that identified the need, and the actual development work story goes to the team that completes the work.
If both teams are running in sprints, the shortest timeframe for completing a cross-team story is two sprints: one sprint for the team doing the work, and one sprint for validation by the team that identified the work.
A cross-team story workflow may look something like this:
1. During release planning or sprint planning, a team identifies work that must be delivered by a different team.
2. The team identifying the work creates two stories.
3. The product owners of the respective teams negotiate priority and scope of the story.
4. The team doing the work plans the story into their sprint. If the team doing the work uses a Kanban methodology, sprint planning is skipped and an expected date is communicated to the identifying team.
5. After the work is complete, the identifying team plays their validation story. This story could have been scheduled into a sprint based on the work team's sprint plan, or on the expected date communicated by the work team.
6. When cross-team stories are identified prior to release planning, there is an opportunity to play risk cards to ensure both teams are aware of the priority of the story, and the dependency can be managed by the Program Team or Portfolio Team. The tracking of the dependency can be accomplished in the Scrum of Scrums meetings.
In most organizations cross-team stories are rare. Having many cross-team stories increases the risk of delays in delivery. If you notice that your team always depends on other teams to complete any functionality, this may be an indication that the skill set or architecture is not aligned with business priorities. This misalignment should be escalated to the Portfolio Team for resolution.
Several years ago, I was called to help an organization that was experiencing system outages in their call center. After months of outages and no effective action, they appointed an Operations Analyst to collect data and get to the bottom of the problem.
Once they had data, the managers met monthly to review it. At the beginning of the meeting, the Operations Analyst presented a pie chart showing the “outage minutes” (the number of minutes a system was unavailable) from the previous month. It was clear from the chart which system was the biggest source of outages for the month.
The manager for that system spent the next 40 minutes squirming as the top manager grilled him. At the end of the meeting, the top manager sternly demanded, “Fix it!”
By the time I arrived to help, they had many months of data, but it wasn’t clear whether anything had improved. I dove in.
I looked at trends in the total number of outage minutes each month. I plotted the trends for each application, and created time series for each application to see if there were any temporal patterns. That’s as far as I could get with the existing data. In order to home in on the biggest offenders, I needed to know not just the number of minutes a system was down, but how many employees and customers couldn’t work when a particular system was down. One system had a lot of outage minutes, but only a handful of specialists who supported an uncommon legacy product used it. Another system didn’t fail often, but when it did, eight hundred employees were unable to access holdings for any customers.
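That weighting is simple to compute; here is a sketch with invented system names and numbers that mirror the two cases above:

```python
# Hypothetical outage records: (system, outage minutes, people unable to work).
outages = [
    ("legacy-product-tool", 900, 6),   # fails often, but only a few specialists use it
    ("holdings-viewer", 45, 800),      # fails rarely, but blocks everyone when it does
]

# Impact = outage minutes weighted by the number of people affected.
impact = {name: minutes * people for name, minutes, people in outages}

# Rank systems by person-minutes lost rather than raw outage minutes.
for name in sorted(impact, key=impact.get, reverse=True):
    print(name, impact[name])
```

Ranked this way, the rarely failing but widely used system dwarfs the chronically failing niche one, which is exactly the insight the raw pie chart hid.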
Though they had data before I got there, they weren’t using it effectively. They weren’t looking at trends in total outage minutes: the pie chart showed the proportion of the whole, not whether the total was increasing or decreasing over time. Because they didn’t understand the impact, they wasted time chasing insignificant problems.
When I presented the data in a different way, it led to a different set of questions, and more data gathering. That data eventually helped this group of managers focus their problem-solving (and stop pointing the roving finger of blame).
As a problem-solver, when you don’t have data, all you have to go on is your intuition and experience. If you’re lucky you may come up with a fix that works. But most good problem solvers don’t rely on luck. In some cases, you may have a good hunch what the problem is. Back up your hunches with data. In either case, I’m not talking about a big measurement program. You need good enough and “just enough” data to get started. Often there’s already some useful data, as there was for the call center I helped.
But what kind of data do you need? Not all problems involve factors that are easily counted, like outage minutes, number of stories completed in a sprint, or number of hand-offs to complete a feature.
If you are looking at perceptions and interactions, you’ll probably use qualitative data. Qualitative data focuses on experiences and qualities that we can observe but cannot easily measure. Nothing wrong with that. It’s what we have to go on when the team is discussing teamwork, relationships, and perceptions. Of course, there are ways to measure some qualitative factors, but subjective reports are often sufficient (and less costly). Often, you can gather this sort of data quickly in a group meeting.
If you are using quantitative data, it’s often best to prepare data relevant to the focus prior to the problem-solving meeting. Otherwise, you’ll have to rely on people’s memory and opinion, or spend precious time looking up the information you need to understand the issue.
When I’m thinking about what data would be useful to understand a problem, I start with a general set of questions:
What are the visible symptoms?
What other effects can we observe?
Who cares about this issue?
What is the impact on that person/group?
What is the impact on our organization?
These questions may lead closer to the real problem, or at least confirm the direction. Based on what I find, I may choose where to delve deeper, getting more specific as I explore the details of the situation:
When does the problem occur?
How frequently does it occur?
Is the occurrence regular or irregular?
What factors might contribute to the problem situation?
What other events might influence the context?
Does it always happen, or is it an exception?
Under what circumstances does the problem occur?
What are the circumstances under which it doesn’t occur?
How you present data can make a big difference, and may mean the difference between effective action and inaction, as was the case with the call center I helped.
In a retrospective, which is a special sort of problem-solving meeting, data can make the difference between superficial, ungrounded quick fixes and the deeper understanding that leads to more effective action, whether your data is qualitative or quantitative.
Here are some examples of how I’ve gathered data for retrospectives and other problem-solving meetings.

| Data Type | Method | Examples | Notes |
| --- | --- | --- | --- |
| Qualitative | Spider or Radar Chart | Use of XP practices. Satisfaction with various factors. Adherence to team working agreements. Level of various factors (e.g., training, independence). | Shows both clusters and spreads. Highlights areas of agreement and disagreement. Points towards areas for improvement. |
| Qualitative | Leaf Charts | Satisfaction. Motivation. Severity of issues. Anything for which there is a rating scale. | Use a pre-defined rating scale to show frequency distribution in the group. Similar to bar charts, but typically used for qualitative data. |
| Qualitative | Sailboat (Jean Tabaka) | Favorable factors (wind), risks (rocks), unfavorable factors (anchors). | Metaphors such as this can prompt people to get past habitual thinking. |
| Qualitative | Timelines | Project, release, or iteration events over time. Events may be categorized using various schemes, e.g., technical and non-technical, or levels within the organization (team, product, division, industry). | Shows patterns of events that repeat over time. Reveals pivotal events (with positive or negative effects). Useful for prompting memories and for showing that people experience the same event differently. |
| Qualitative | Tables | Team skills profile (who has which skills, where there are gaps). | Shows relationships between two sets of information. Shows patterns. |
| Qualitative | Trends | Satisfaction. Motivation. Severity of issues. Anything for which there is a rating scale. | Changes over time. |
| Quantitative | Pie Charts | Defects by type, module, source. Severity of issues. | Shows frequency distribution. |
| Quantitative | Bar Charts | Bugs found in testing by module vs. bugs found by customers by module. | Frequency distribution, especially when there is more than one group of things to compare. Similar to histograms, but typically used for quantitative data. |
| Quantitative | Histograms | Distribution of length of outages. | Frequency of continuous data (not categories). |
| Quantitative | Trends | Defects. | Shows movement over time. Often trends are more significant than absolute numbers in spotting problems. Trends may point you to areas for further investigation, which may become a retrospective action. |
| Quantitative | Scatter Plots | Size of project and amount over budget. | Shows the relationship between two variables. |
| Quantitative | Time Series | Outage minutes over a period of time. Throughput. | Shows patterns and trends over time. Use when the temporal order of the data might be important, e.g., to see the effects of events. |
| Quantitative | Frequency Tables | Defects. Stories accepted on first, second, or third demo. | A frequency table may be a preliminary step for other charts, or stand on its own. |
| Quantitative | Data Tables | Impact of not-ready stories. | Shows the same data for a number of instances. |
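One of the simplest of these to produce is a leaf-chart-style frequency distribution for a rating-scale question. A sketch with made-up ratings:

```python
from collections import Counter

# Hypothetical satisfaction ratings from a retrospective, on a 1-5 scale,
# gathered quickly in a group meeting.
ratings = [4, 5, 3, 4, 2, 4, 5, 3, 4]

freq = Counter(ratings)
for score in range(1, 6):
    # One mark per response at each rating level shows the spread at a glance.
    print(f"{score}: {'*' * freq[score]}")
```

A chart like this shows both where the group clusters and where opinions diverge, without any tooling beyond a whiteboard or a few lines of code.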
I am shepherding an experience report for XP 2016. A shepherd is sort of like a technical editor. I help the writer(s) tell their story in the best possible way. I enjoy it and I learn from working with the authors to tell their stories.
The writers for this experience report want to pair-write. They have four co-authors. I offered them suggestions you might find useful:

Tip 1: Use question-driven writing
When you think about the questions you want to answer, you have several approaches to whatever you write. An experience report has this structure: what the initial state was and the pain there; what you did (the story of your work, the experience); and the end state, where you are now. You can play with that a little, but the whole point of an experience report is to document your experience. It is a story.
If you are not writing an experience report, organize your writing into a beginning, a middle, and an end. If it’s a tips piece, each tip has a beginning, middle, and end. It depends on how long the piece is.
When you use question-driven writing, you ask yourself, “What do people need to know in this section?” If you have a section about the software interacting with the hardware, you can ask the “What do people need to know?” and “How can I show the interactions without bogging down in too much detail?” questions. You might have other questions. I find those two questions useful.

Tip 2: Pair-write
I do this in several ways with my coauthors. We often discuss for a few minutes what we want to say in the article. If you have a longer article, maybe you discuss what you want to cover in each section.
One person takes the keyboard (the driver). The other person watches the words form on the page (the navigator). When I pair-write with Google Docs, I offer to fix the other person’s spelling.
I don’t know about you, but my spelling does not work when I know someone is watching my words. It just doesn’t. When I pair, I don’t want the writer to back up. I don’t want to back the cursor up and I don’t want the other person to back up. I want to go. Zoom, zoom, zoom. That means I offer to fix the spelling, so the other person does not have to.
This doesn’t work all the time. I’m okay with the other person declining my offer, as long as they don’t go backwards. I become an evil witch when I have to watch someone use the delete/backspace key. Witch!

Tip 3: Consider mobbing/swarming on the work
If you write with up to four people (I have not written with more than four people), you might consider mobbing. One person has the keyboard, the other three make suggestions. I have done this just a few times and the mobbing made me crazy. We did not have good acceptance criteria, so each person had their own idea of what to do. Not a recipe for success. (That’s why I like question-driven writing.)
On the other hand, I have found that when we make a list of sections—maybe not a total outline—pairs of people can work on their writing at the same time. Each pair takes a section, works on that, and returns to the team with the section ready for review. I have also been in a position where someone did some research and returned to the writing team.
Tip 4: Use a Short Timebox for Writing
When I pair, I write or navigate in no more than 15-minute timeboxes. You might like an even shorter timebox. With most of my coauthors, I don’t turn on a timer. I write one-to-several paragraphs and then we switch. We have a little discussion and then we’re writing again. Most of my timeboxes are 5-7 minutes and then we switch.

Pair Writing Traps
I have seen these traps when pair-writing:
- One person dictates to the other person. That smacks of first-author, all-the-rest-of-you-are-peons approach.
- One or both of you talk without writing. No. If someone isn’t writing in the first 90 seconds, you’re talking, not writing. Write. (This is the same problem as discussing the design without writing code to check your assumptions about the design.)
I didn’t realize I would make this a series. The post about writing by yourself is Four Tips to Writing Better and Faster.
I have a special registration for my writing workshop for pairs. If you are part of a pair, take a look and see if this would work for you.
Various methods exist for helping product owners decide which backlog item to start first. That it pays off to do this (more or less) right has been shown in blogs by Maurits Rijk [Rij2011] and Jeff Sutherland [Sut2011].
These approaches to ordering backlog items all assume that items once picked up by the team are finished according to the motto: 'Stop starting, start finishing'. An example of a well-known algorithm for ordering is Weighted Shortest Job First (WSJF).
For items that may be interrupted, this does not result in the best possible schedule. Items that are typically interrupted by other items include story map slices, (large) epics, themes, Marketable Features, and possibly more.
In this blog I'll show which scheduling is more optimal and how it works.
In WSJF, scheduling of work, i.e. product backlog items, is based on both the effort and the (business) value of the item. The effort may be stated in duration, story points, or hours of work. The business value may be calculated using Cost of Delay or as prescribed by SAFe.
When effort and value are known for the backlog items, each item can be represented by a dot. See the picture to the right.
The proper scheduling is obtained by sweeping the dashed line from the bottom right to the upper left (like a windshield wiper).
In practice both the value and effort are not precisely known but estimated. This means that product owners will treat dots that are 'close' to each other the same. The picture to the left shows this process. All green sectors have the same ROI (business value divided by effort) and have roughly the same value for their WSJF.
Product owners will probably schedule items accordingly: green cells from left to right, then the next 'row' of cells from left to right.
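The windshield-wiper sweep amounts to sorting by value divided by effort. A minimal sketch in Python, with made-up backlog items:

```python
# Hypothetical backlog items: (name, business value, effort).
backlog = [
    ("data export", 3, 3),
    ("login rework", 8, 2),
    ("reporting", 13, 8),
    ("search fix", 5, 1),
]

# WSJF: schedule in descending order of value / effort (ROI).
scheduled = sorted(backlog, key=lambda item: item[1] / item[2], reverse=True)
print([name for name, _value, _effort in scheduled])
# -> ['search fix', 'login rework', 'reporting', 'data export']
```

Note that "reporting" carries the most value but still lands third: a small, cheap item with decent value beats a big, valuable one under WSJF.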
Other Scheduling Rules
It has been known at least since the 1950s (and probably earlier) that WSJF is the optimal scheduling mechanism if both value and size are known. The additional condition is that preemption, i.e. interruption of the work, is not allowed.
If any of these three conditions (known value, known size, no preemption) does not hold, WSJF is not the best mechanism and other scheduling rules do better. Other mechanisms are (for a more comprehensive overview and background, see e.g. Table 3.1, page 146 in [Kle1976]):
No preemption allowed
- no value, no effort: FIFO
- only effort: SJF / SEPT
- only value: on value
- effort & value: WSJF / SEPT/C
- Story map slices: WSJF (no preemption)
FIFO = First in, First out
SEPT = Shortest Expected Processing Time
SJF = Shortest Job First
C = Cost
Examples: (a) user stories on the sprint backlog: WSJF; (b) production incidents: FIFO or SJF; (c) story map slices that represent a minimal marketable feature (or short: Feature). Leaving out a single user story from a Feature creates no business value (that's why it is a minimal marketable feature), and starting such a slice also means completing it before starting anything else. These are scheduled using WSJF. (d) User stories that are part of a Feature: they represent no value by themselves, but all are necessary to complete the Feature they belong to. Schedule these according to SJF.
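Case (d) is the simplest to sketch: stories inside a Feature carry no standalone value, so ordering by effort alone (SJF) is enough. The story names and points below are invented:

```python
# Hypothetical user stories inside one Feature: (name, effort in points).
# None delivers value on its own, so Shortest Job First (effort only) applies.
stories = [("api endpoint", 5), ("input form", 3), ("field validation", 1)]

sjf_order = [name for name, _effort in sorted(stories, key=lambda s: s[1])]
print(sjf_order)
# -> ['field validation', 'input form', 'api endpoint']
```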
Preemption allowed
- no value: SIRPT (SIJF)
- effort & value: SIRPT/C or WSIJF (preemption)
SIRPT = Shortest Imminent Remaining Processing Time
SIRPT/C = Shortest Imminent Remaining Processing Time, weighted by Cost
SIJF = Shortest Imminent Job First
WSIJF = Weighted Shortest Imminent Job First
The 'official' naming for WSIJF is SIRPT/C. In this blog I'll use Weighted Shortest Imminent Job First, or WSIJF.
Examples: (a) story map slices that contain more than one Feature (minimal marketable feature). We call these Feature Sets. These are scheduled using WSIJF. (b) (Large) epics that consist of more than one Feature Set, or epics located at the top right of the windshield-wiper diagram. The latter are usually split into smaller ones containing most of the value for less effort. Use WSIJF.
- User Story (e.g. on sprint backlog and not part of a Feature): WSJF
- User Story (part of a Feature): SJF
- Feature: WSJF
- Feature Set: WSIJF
- Epics, Story Maps: WSIJF
Mathematically, WSIJF is not as simple to calculate as WSJF. Perhaps in another blog I'll explain the formula; here I'll just describe in words what WSIJF does and show how it affects the diagram with colored sections.

WSIJF: Work that is very likely to finish in the next periods has large priority
What does this mean?
Remember that WSIJF only applies to work that is allowed to be preempted in favour of other work. Preemption happens at certain points in time. Familiar examples are Sprints, Releases (Go live events), or Product Increments as used in the SAFe framework.
The priority calculation takes into account:
- the probability (or chance) that the work is completed in the next periods,
- if completed in the next periods, the expected duration, and
- the amount of time already spent.
Example. Consider a Scrum team with a cadence of 2-week sprints and 3 sprints remaining to the next release. For every item on the backlog, determine the chance of completing it in the next sprint and, if completed, divide by the expected duration. Likewise for completing the same item in the next 2 and 3 sprints. For each item you'll get 3 numbers. The item's priority is its value multiplied by the maximum of these numbers.
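That recipe can be sketched in Python. The per-sprint completion probabilities below are invented, and taking the priority as the item's value times its best probability-to-expected-duration ratio across the horizons is my reading of the SIRPT/C rule:

```python
# Hypothetical items: business value plus the probability that the work
# finishes in exactly sprint 1, 2, or 3 of the remaining release window.
items = {
    "small epic": {"value": 8,  "finish_prob": [0.6, 0.25, 0.1]},
    "large epic": {"value": 20, "finish_prob": [0.05, 0.2, 0.3]},
}

def wsijf_priority(value, finish_prob):
    """For each horizon h, compute P(done within h) divided by the expected
    duration given completion within h; priority = value * best such ratio."""
    best = 0.0
    for h in range(1, len(finish_prob) + 1):
        p_within = sum(finish_prob[:h])
        if p_within == 0:
            continue
        # Expected duration, conditional on finishing within h sprints.
        exp_dur = sum(s * finish_prob[s - 1] for s in range(1, h + 1)) / p_within
        best = max(best, p_within / exp_dur)
    return value * best

for name, item in items.items():
    print(name, round(wsijf_priority(item["value"], item["finish_prob"]), 2))
```

With these numbers the small epic outranks the large one despite having far less value, which is exactly the qualitative effect described next: large-effort items lose priority under WSIJF.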
Qualitatively, the effect of WSIJF is that items with large effort get less priority and items with smaller effort get larger priority. This is depicted in the diagram to the right.

Example: Quantifying WSIJF
In the previous paragraph I described the basics of WSIJF and only qualitatively indicated its effect. To make this concrete, let's consider large epics that have been estimated using T-shirt sizes. Since WSIJF affects the sizing part and, to a lesser extent, the value part, I'll not consider the value in this case. In a subtle manner value also plays a role, but for the purpose of this blog I'll not discuss it here.
Teams are free to define T-shirt sizes as they like. In this blog, the following 5 T-shirt sizes are used:
- XS ~ < 1 Sprint
- S ~ 1 – 2 Sprints
- M ~ 3 – 4 Sprints
- L ~ 5 – 8 Sprints
- XL ~ > 8 Sprints
Items of size XL take more than 8 sprints, so typically 4 months or more. These are very large items.
Of course, estimates are just what they are: estimates. Items may take fewer or more sprints to complete. In fact, T-shirt sizes correspond to probability distributions: an 'M'-sized item has some probability to complete in fewer than 3 sprints and may also take longer than 4 sprints. For these distributions I'll take:
- XS ~ < 1 Sprint (85% probability to complete within 1 Sprint)
- S ~ 1 – 2 Sprints (85% probability to complete within 3 Sprints)
- M ~ 3 – 4 Sprints (85% probability to complete within 6 Sprints)
- L ~ 5 – 8 Sprints (85% probability to complete within 11 Sprints)
- XL ~ > 8 Sprints (85% probability to complete within 16 Sprints)
As can be seen from the picture, the larger the size of the item the more uncertainty in completing it in the next period.
Note: for the probability distribution, the Wald or Inverse Gaussian distribution has been used.
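The Wald CDF can be evaluated with just the standard library. The mean and shape parameters below are assumptions (not from the post), picked so that an 'M'-sized item with a mean of about 3.5 sprints completes within 6 sprints roughly 85% of the time:

```python
from math import erfc, exp, sqrt

def norm_cdf(z):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * erfc(-z / sqrt(2.0))

def invgauss_cdf(x, mu, lam):
    """CDF of the inverse Gaussian (Wald) distribution with mean mu
    and shape parameter lam."""
    a = sqrt(lam / x)
    return norm_cdf(a * (x / mu - 1)) + exp(2 * lam / mu) * norm_cdf(-a * (x / mu + 1))

# Assumed parameters for an 'M'-sized item: mean 3.5 sprints, shape 5.
mu, lam = 3.5, 5.0
print(round(invgauss_cdf(6.0, mu, lam), 2))  # P(done within 6 sprints)
```

From a CDF like this you can read off the column-2 numbers of the table below, e.g. the probability that an item completes within the next 4 sprints.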
Based on these distributions, we can calculate the priorities according to WSIJF. These are summarized in the following table:
Column 2 specifies the probability to complete an item in the next period, here the next 4 sprints. In the case of an 'M' this is 50%.
Column 3 shows, if the item is completed, what the expected duration will be. For an 'M'-sized item this is 3.22 sprints.
Column 4 contains the calculated priority as 'value of column 2' divided by 'value of column 3'.
The last column shows the value as calculated using SJF.
The table shows that items of size 'S' have the same priority value in both the SIJF and SJF schemes. Items larger than 'S' actually have a much lower priority as compared to SJF.
Note: there are slight modifications to the table when considering various period lengths and taking into account the time already spent on items. This additional complexity I'll leave for a future blog.
In practice product owners only have the estimated effort and value at hand. When ordering the backlog according to the colored sections shown earlier in this blog, it is easiest to use a modified version of this picture:
Schedule the work items according to the diagram above, using the original value and effort estimates: green cells from left to right, then the next row from left to right.

Conclusion
The most commonly used backlog prioritization mechanisms are based on some variation of ROI (value divided by effort). While this is the optimal scheduling for items for which preemption is not allowed, it is not the best way to schedule items that are allowed to be preempted.
As a guideline:
- Use WSJF (Weighted Shortest Job First) for (smaller) work items where preemption is not allowed, such as individual user stories with (real) business value on the sprint backlog and Features (minimal marketable features, e.g. slices in a story map).
- Use SJF (Shortest Job First) for user stories within a Feature.
- Use WSIJF (Weighted Shortest Imminent Job First) for larger epics and collections of Features (Feature Set), according to the table above, or more qualitatively using the modified sector chart.
[Kle1976] Queueing Systems, Vol. 2: Computer Applications, Version 2, Leonard Kleinrock, 1976
[Rij2011] A simulation to show the importance of backlog prioritisation, Maurits Rijk, June 2011, https://maurits.wordpress.com/2011/06/08/a-simulation-to-show-the-importance-of-backlog-prioritization/
[Sut2011] Why a Good Product Owner Will Increase Revenue at Least 20%, Jeff Sutherland, June 2011, https://www.scruminc.com/why-product-owner-will-increase-revenue/
“Education is a progressive discovery of our own ignorance.”
Will Durant (1885 – 1981)
I’ve thought about getting an MBA from time to time throughout my professional career. It was always a hard thing for me to justify, for a few reasons. First, an MBA is very expensive, and you never know if you will get that money back in terms of earnings afterward. The graduate schools certainly want you to think so, but when I look around, there are a disturbing number of MBAs who aren’t working in management. It seems P.T. Barnum may have been right; there’s one born every minute. Second, getting another degree, MBA or not, requires an enormous expenditure of energy. This is energy that I could be applying to improving my existing career, or perhaps starting my own business, or drinking beer – I just can’t decide. Why go to school to burn energy that might be better spent elsewhere? Finally, an MBA is boring. I mean really boring. I’ve seen the curriculum and thought to myself, “Why would I do that to myself?” Honestly, I’m really not a very good student. I know myself well enough to recognize that if I’m not terribly passionate about something, odds are that I will get bored and then perform poorly. When I look at the average business school curriculum, it’s really hard to give a damn. So getting an MBA has been something I’ve personally found hard to approach.
Of course, these reasons are easily and reasonably countered. I certainly don’t discount that there is value in an MBA, but the question is, “is there value for me?” Everyone has to answer that question for themselves. I can afford the expense, so while the costs are daunting, they aren’t prohibitive. And as far as energy expenditure is concerned, that probably isn’t a problem either. If you can build a boat, you can probably manage an MBA. But what about the boring curriculum? What can I do about that? With existing programs, probably not much. But what if I could make my own curriculum? What if I could customize an MBA to focus on areas that I’m really passionate about? What about an Agile MBA?
It’s probably a silly idea. I’m quite sure that it’s not an original idea either, but I’ve yet to find anything like it. I’ve found this: https://leanmba.wordpress.com and it’s certainly a lot of what I was thinking about (it’s really good), but I think there is more to an MBA. It makes me wonder, what would an Agile MBA be like? What would the curriculum be like? What would the classroom interactions be like? What would the overall experience be like? How could we build this program?
These are all good questions. Let’s start with the curriculum. Let’s take a peek at a few traditional MBA curricula and see what we need to cover (from an agile perspective):
3. Statistics/Data Analysis
4. Technology & Operations Management
7. Problem Solving
…or something like that. That’s what I came up with after a brief survey of a few MBA programs. They all look pretty much the same. Yawn.
So I guess this is a starting point. Looking at the list above, I have to wonder, what would an Agile version of this curriculum look like? What books would I recommend? What courses, classes, or certifications would I require? Looking at this list, I think I just might be able to do that. We’ll take these one at a time over the next few weeks.
I recently finished reading former U.S. Navy Submarine Commander David Marquet’s book “Turn the Ship Around”.
It is a powerful story of learning what leadership means and the struggles Marquet had putting it into place in his role as commander of the Los Angeles-class fast attack submarine USS Santa Fe (SSN 763). Marquet proposes that leadership should be defined as:
“Embedding the capacity for greatness in the people and practices of an organization, and decoupling it from the personality of the leader”.
The paradox is that more traditional leadership creates more unthinking followership; less top-down leadership creates more engaged leadership – at every level of an organization.
Leadership and productivity guru Stephen Covey took a tour of Marquet’s submarine in 2000, a couple of years into Marquet’s command, and reported that it was the most empowered organization he’d ever experienced, of any type, and wrote more about it in his book “The 8th Habit”.
The hyper-quick summary of Marquet’s approach involves three pillars: Control, Competence, and Clarity. These form the basis for what he calls “Leader-Leader” behavior, as opposed to the much more common “Leader-Follower” culture found in most organizations.
Marquet talks about shifting the psychological ownership of problems and solutions using a simple change in language. I’ll attempt to illustrate the evolution of leadership behavior through a series of conversations:
Traditional leader-follower pattern:
Captain: “Submerge the ship”
Subordinate: “Submerge the ship, aye”
To push Control down in the organization, Marquet began using the following speech pattern:
Captain: “What do you think we should do?”
Subordinate: “I think we should submerge the ship, sir”
Captain: “Then tell me you intend to do that”
Subordinate: “Captain, I intend to submerge the ship”
Captain: “Very well”
Giving control without an assurance of competence could lead to disaster on a nuclear submarine, and so over time, the pattern evolved to include an assurance of technical Competence, becoming:
Subordinate: “Captain, I intend to submerge the ship.”
Captain: “What do you think I’m concerned about?”
Subordinate: “You’re probably concerned about whether it’s safe to do so”
Captain: “Then convince me it’s safe”
Subordinate: “Captain, I intend to submerge the ship. All crew are below decks, the hatches are shut, the ship is rigged for dive, and we’ve checked the bottom depth.”
Captain: “Very Well”
The final evolution of the language added the third pillar – Clarity of mission, becoming:
Subordinate: “Captain, I intend to submerge the ship. All crew are below decks, the hatches are shut, the ship is rigged for dive, and we’ve checked the bottom depth.”
Captain: “Is it the right thing to do?”
Subordinate: “Yes sir, our mission requires that we submerge now in order to (classified reason (-: ) ”
Captain: “Very Well”
The book is highly engaging and I found it to be a fascinating model of leadership, extremely well-tuned to the needs of leading complex organizations in the knowledge work era.

What does this have to do with Agile?
Empowerment is a core concept of agility, and specifically of the Scrum framework, but it can be a major challenge to get working well in organizations without decentralized control, assurance of competence, and clarity of mission. Marquet’s approach provides a simple pattern to follow in empowering teams.
Interestingly, empowerment is a term that Marquet dislikes, since it implies that individuals can only be “powerful” once power has been granted by a leader. His claim, and one that I agree with, is that all human beings are naturally powerful; they don’t need to be “empowered”. Rather, leaders simply need to remove the cultural norms and processes that are meant to exert control, which result in people tuning out and becoming disengaged. When the right leadership behaviors are in place, people will naturally bring their whole selves to their jobs. From a lean standpoint, such controls can be viewed as creating waste: people show up and go through the motions, rather than devoting their creativity and energy to their jobs, and the lean leader’s job is to remove waste from the system.

Empowered Product Owners & Teams
Scrum is fundamentally based on the idea that a Product Owner is the single accountable person for setting the priorities of the team(s). Leaders can ensure that Product Owners have this accountability by using the “Intend To” language.
Product Owner: VP, I intend to move this new feature to the top of the Product Backlog and deprioritize this other feature that was in our original plan. Customer validation tests indicate that the new feature would increase retention of existing users by around 4%, more than any other feature we’ve tested, aligning with our highest priority goal for this quarter of increasing existing subscriber retention rates. The team has done some high level scoping and forecast that this feature would be completed within two sprints, a similar size to the feature that we’ll be cutting.
VP: Very well.
The leader gets what they really want: an assurance that the Product Owner is aware of the business concerns and has done their due diligence to address those concerns. The Product Owner gets what they want – mentoring to understand what business leaders are most concerned about (a great career development aspect of this approach), with the autonomy to meet the business need however they see fit.
Agile Leadership is the missing link for many organizations
Agile has had a major impact on some organizations’ capability to balance delighting customers, keeping people engaged at work, and delivering great business results. It has, however, struggled to make an impact in many organizations where Cargo Cult Scrum, “scrum-but” and other half-hearted implementations of agile are the norm. The difference is in the leadership of these organizations. Where agile is seen as the latest trend, something the developers do, or a band-aid to fix some specific annoyance, agile will have a marginal (if sometimes still improved) result. Where agile is viewed as a mindset for both teams and leaders, it can have a profound impact. Marquet’s book provides some simple rules that leaders can apply to start seeing that bigger impact of agile at the organizational level.
You can find his book and other info on his site: http://davidmarquet.com/
Connection to Dan Pink’s Drive
Friend and agile coach Rob Myers (@agilecoach) added,
These align nicely with Dan Pink’s autonomy (decentralized control), mastery (technical competence), and purpose (clarity of the mission). Thanks for the excellent analysis! I’ll be sharing this.
The funny thing is, I’ve probably watched the RSAnimate video of Dan Pink (@DanielPink – check it out below, well worth the time) more than 100 times, since I often show it in training classes, and I’ve read the book (Drive) twice. I’m a big fan of Dan’s podcast (Office Hours) and his other books, and I still didn’t make that link! Thanks to Rob for connecting the dots for me between Turn the Ship Around (experience) and Drive (behavioral economics/neuroscience).
Here’s the Dan Pink Ted talk animated, just in case you haven’t seen it yet!
The post Turn The Ship Around – A View Into Agile Leadership appeared first on Agile For All.
Welcome to a brand-new season of FemgineerTV!
We’ve got some great guests lined up for you this season, and we’ll be tackling some tough topics related to startups, design, engineering, product development, and leadership.
To kick the season off, we’re going to be tackling one of the toughest topics that not a lot of people talk about: The Challenges Immigrant Tech Entrepreneurs Face.
To help us out, I’ve invited Agustina Sartori, the CEO and Co-Founder of GlamST.
Agustina began her career as a software engineer in Uruguay working at a startup before deciding to strike out on her own, joining with her co-founder Carolina Bañales to build GlamST.
If you’re an entrepreneur who is thinking about breaking in the U.S. market, then you’ll enjoy hearing Agustina’s story, and how she overcame a number of challenges to make her dreams come true!
Even if you aren’t thinking of becoming an immigrant tech entrepreneur, the episode is worth watching because Agustina talks about some critical topics that impact those of us who are involved in startups, such as:
- how she and her co-founder landed their first big client—L’Oreal—before building anything!
- alternate sources of funding they pursued because there aren’t a lot of investors in Uruguay.
- how cultivating a level of self-awareness when talking to potential customers and investors is beneficial to attracting them.
- why it’s important to build relationships with organizations who can help you grow your business like accelerators.
Be sure to subscribe to our YouTube channel to see all the new episodes when they come out, and catch up on previous ones!
FemgineerTV is also now on iTunes!
You can listen and subscribe to FemgineerTV episodes on iTunes. Please take a moment to leave us a review. Your positive review will help us get featured in the News & Noteworthy and bring more exposure to the work we’re doing as well as the talented guests we feature!
Note: There are many different variables that make up software teams, but no matter the size or industry, we share a lot of the same challenges. And while Tracker is only a small part of the equation, we’ve begun collecting these stories with the hope that we can all learn from them. This case study is the first in a series that will examine what makes our customers tick—who they are, what problems they’re trying to solve, and what their processes and solutions are.
Humana knows a thing or two about transformation. From its inception in 1961 as a nursing home company, through its period owning and running hospitals in the seventies and early eighties, the company founded by David Jones and Wendell Cherry has made a habit of reinventing itself. Now they are in the midst of yet another reinvention—as a healthcare provider with a laser focus on their customers’ well-being.
And there is one corner of the Louisville company where that reinvention takes a shape that mimics a startup, with a small, Agile team. As Practice Leader for Humana’s Digital Experience Center (DEC), Antonio Melo has a front-row seat for—and is an active participant in—Humana’s newest phase. “The biggest differentiation between where Humana was five years ago and where we are today, and where we see ourselves going,” Melo says, “is that we are in the midst of focusing and orienting ourselves directly on the customers and the consumer. Where we differentiate ourselves today from the rest of the industry is this notion of wellness and well-being. If you look at any of our user experiences, that is a common core.”
That user experience is something Humana has put a lot of effort behind, and the DEC is leading the way. Scrum and Kanban were introduced into the company to facilitate that process about eight years ago, but it wasn’t until two years ago—with an assist from Pivotal Labs—that Humana created the DEC to streamline it even further. “We’re investing in Humana’s culture to level up the core competencies of technology, product, and design so that we can better serve our customers nimbly and rapidly and efficiently,” Melo says of the DEC. “We’re really a learning and teaching organization and we use product development as the vehicle to teach others.”
“You don’t have to change everything all at once.”
As Product Manager Nick Hill explains, the DEC is mainly tasked with helping the rest of the IT organization develop and test software more quickly and efficiently, “by partnering with different lines of business, different owners of products, to help them learn the process and try new things. Instead of teaching by standing up and pointing at diagrams or reading books, we teach by pairing, by working with them on certain projects so that they can then—after the engagement has dissolved—take back what they’ve learned and apply it within their organizations.”
It’s a process that may sound familiar to those who run in Lean and Agile circles, but for newcomers or those teams more set in their ways, it can border on revolutionary—or terrifying. For teams considering a switch to more of a Lean or Agile approach, or companies thinking of a larger scale transformation, Melo says not to worry. “Do not allow the series of unanswerable questions and the face of the unknown or the fear that results from that to become obstacles or justification for you not to get started. Start small, but start today. Simply based on where you begin and what your focus is, you will uncover the relevant questions and quickly answer them by virtue of your work.”
Nick agrees with the baby steps approach. “You don’t have to change everything all at once. We do it in a sort of Lean way—you start small, you do a couple of projects, you see if it works. It’s not this huge investment, not this corporate-wide, infrastructural change; it’s a small experiment that grows if it works. It’s a lot more palatable when you take those small bites and don’t commit 100% of your resources toward it.”
“You’re only as Agile as the organization around these teams is.”
Even if they now seem comfortable with their trajectory, it hasn’t always been an easy lift for the DEC. In an organization with more than 52,000 employees, with vast, interconnected departments and entrenched workflows, Melo’s team has faced numerous challenges. For one, it can be hard to instill a Lean- or Agile-based, small-team process on a team that isn’t used to thinking of itself—or operating—as small. “One could say we were Agile to some degree, but you’re only as Agile as the organization around these teams is,” Melo says. “Traditionally at Humana, there are departments for each asset you need and you create tickets to communicate with them, and then the speed with which they deliver you what you need to do your job varies anywhere from weeks to months to perhaps longer. That’s the area where the enterprise really needs continued focus.”
Melo’s team of 16 is trying to navigate an enterprise ecosystem that’s resistant to change, while at the same time hoping to revamp it from the inside. “We want what we do to be commonplace throughout the organization. We’re in the process of figuring out our sphere of influence and to what degree these practices remain sticky in the organization without our direct development. We’re just beginning to get a perspective on those questions.”
Despite the occasional minor roadblocks, the DEC has a long list of projects and a process firmly in place. “We work on developing products, whether it’s an iPhone app or a web page. Business comes to us and works with us on a specific problem. We utilize all the product management techniques to figure out the cause and behaviors of the users and get feedback and create these products iteratively and leanly until we’ve validated these assumptions and hypotheses and create an actual solution to the problem.”
Did someone say project management techniques?
One tool that helps the DEC keep their myriad projects on target is Pivotal Tracker. “It’s certainly a core tool for the team that is used on a daily basis and is used by all teams that work here in the DEC,” Melo says. “It helps solidify in a tangible way the way we work—singular focus, ruthless prioritization—and allows for the team to calibrate their own level of description required to deliver their software.”
But Melo’s team isn’t using Tracker in a vacuum; most of the teams they engage with walk away from the project with a bit of a Tracker crush. “What we’ve found is that once our engagements cease, Tracker use continues by our clients as they go back to the traditional core organization.”
“There is this notion that we use here called ‘Tracker Time.’ How much Tracker Time are we getting with the client/product manager is an indication of what opportunity we have to really teach them these Lean, Agile methods, how engaged they are, how productive they are, and what their relationship is to the rest of the team. Tracker Time is an indication of pairing time. How much time do our resident DEC PMs have with our visiting client product managers pairing using Tracker to deconstruct and prioritize their vision into actionable units of ultimately working software?”
While it functions largely as a means to an end, Tracker is also a powerful metric for how things are progressing. “It’s a tool used to help explain and demonstrate prioritization, story writing, the size of the story, really everything that goes into the creation and prioritization of the Backlog is done with and in Tracker,” Hill says.
But despite the challenge of implementing such an ambitious change, the goal for Melo and his team has remained the same. “We’re leading the next wave of transformation: organizationally, culturally, technologically, and we’re trying to do it in a way in which we are gracefully teaching others and trying to become a trusted advisor to the traditional core business.”
Want to share your team’s story? Please email email@example.com and tell us a bit about your company.
A few years ago we were bringing a large group of new developers onto the team. Most had a light testing background, some exposure to git, and no real pairing experience. It didn’t take long to notice that commits on our project had slowed down dramatically. Commits still happened, but they were generally large, coarse-grained commits with hundreds of line changes across many files.
After some gentle nudging about checking in early and often, I realized the message wasn’t sticking. For the most part, people waited until they had completed a whole feature story before actually committing the code. So I figured it might be time to give things a bit of a push.
I remembered a plugin we tried out with Jenkins called the Continuous Integration Game. You got points for passing builds and adding tests, and lost points for breaking the build and breaking tests. The experiment increased testing a bit on that team, but it never really caught on. Still, you have to keep trying, so I came up with a new game.
The rules were simple:
- Every day you win by having more commits.
- More commits in a row means you can rib your coworkers about it.
- Blocking someone by committing between the time they pulled locally, merged, and ran tests was worthy of extra taunting.
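The daily scoreboard a game like this depends on is easy to pull from git itself. As a minimal sketch (not part of the original game; the function name and wiring are my own illustration), here’s some plain JavaScript that tallies the output of `git log --since=midnight --format=%an` into a per-author leaderboard:

```javascript
// Count commits per author from the output of
// `git log --since=midnight --format=%an` (one author name per line).
// The game's daily "score" is simply each person's commit count.
function tallyCommits(gitLogOutput) {
  const counts = {};
  for (const line of gitLogOutput.split("\n")) {
    const author = line.trim();
    if (!author) continue; // skip blank lines
    counts[author] = (counts[author] || 0) + 1;
  }
  // Sort descending so the day's bragging-rights winner comes first
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}
```

Feed it the raw command output and the first entry is the day’s winner; for example, `tallyCommits("alice\nbob\nalice\n")` returns `[["alice", 2], ["bob", 1]]`.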
Commits started picking up. After much joking, commits were coming early and often. The experiment worked well enough that I wasn’t even giving feedback anymore. Early and often was the default.
Many in the industry are searching for new ways to increase productivity and efficiency with Agile approaches to software development. That’s the theme of a new article on CIO.com by Bruce Harpham, who details the many benefits of Agile methodologies in software development and examines key considerations for implementing the practice. Our very own Scott Rose spoke with Bruce on the topic and discussed the ways organizations can employ Agile practices among a global team, where face-to-face is not feasible.
While Bruce notes that face-to-face interaction is a key element of increased productivity within agile, it’s not always possible with the way global teams are set up. Therefore, it’s important to develop schedules for a global team that accommodate those working in different time zones and keep the culture positive. As Scott says, “Success with agile methodology is 5 percent due to the tools used and 95 percent due to the culture.”
Another important consideration in implementing agile methodologies is an organization’s willingness to adopt new approaches. This includes taking customer feedback into consideration, as those suggestions help enterprises to deliver the results expected of them. Collecting and managing this feedback in a methodical manner is key to this tactic. Scott states, “By delivering new features and enhancements in sprints, we can keep momentum and be responsive to customers.”
Be sure to read the entire CIO.com article here for more considerations on improving productivity with agile.
Want to learn about CollabNet’s Agile development solutions? Visit CollabNet.com/TeamForge.
The post “How to Improve Productivity with Agile Methodologies” (CIO.com Article Recap) appeared first on blogs.collab.net.
“When you make a mistake, there are only three things you should ever do about it: admit it, learn from it, and don’t repeat it.”
Paul “Bear” Bryant
Recently we moved from an annual planning process to a quarterly large scale release planning event. There was a lot going for the idea of the new quarterly planning process:
1. Smaller batches of projects to plan
2. Less wasted time on projects that would never get worked on
3. More collaboration between teams and stakeholders, because we held the planning event in a big room with everyone invited
4. It only lasted 2 days, so the amount of preparation was dramatically reduced
What’s not to love? Our previous annual planning process started in February or March and continued non-stop until late September or even further (once it went through Christmas). The amount of waste in the process was truly astonishing. There were multiple reviews by executives, red-lining exercises, requests for more estimates. It was interminable.
So anything that offered us a way out of that madness was OK with me. We modeled the planning event on the SAFe style large scale planning events. It seemed like a slam dunk. Get everyone in the room, facilitate like hell for two days, and out pops a bouncing baby quarterly plan. Easy.
We held the event and it went pretty well. There were a lot of missing requirements and people had to get used to participating in a high intensity collaborative event, but in the end, we got the job done and walked out of the room after day 2 with a relatively stable set of commitments for the upcoming quarter. So we did a retrospective and started planning for the next event in 3 months. Just like it says on the bottle: re-apply, rinse and repeat.
And this is where things started to go wrong. Some folks were concerned that they hadn’t been prepared enough for the first event, and they decided that they should begin their planning for the next event earlier. While this is not a bad idea on the face of it, there were some consequences that we didn’t see coming. Soon, everyone was screaming that they needed to start planning – in as much detail as possible. All of this happened within about 2 weeks of finishing the first planning event. So the end result was there were a lot more meetings and planning done before we had our second large scale planning event.
So we do the second event, and yes, things were improved. There were more requirements, the teams came to consensus faster, and in general things looked pretty good. However, there was one rather alarming development: executives added a bunch of projects to the backlog the day before the planning event. They were “must do” projects, but they didn’t have any requirements behind them. So we sucked it up, put on our big boy pants, and dealt with it. By the way, I’m not recommending that as a coping strategy.
So of course, product management and others were even more anxious this time to get started on requirements early. Now there was literally no pause in planning between events: after we finished our second planning event, we immediately began work for the next one.
Did you see what we did there? That’s right, we started off with a brief but effective planning event that replaced our year-round nightmare. Then for the next event we added more preparation. And for the next event we added even more…until our quarterly planning process had become virtually indistinguishable from our disastrous old year-round planning process. We’ve come full circle!
So what went wrong? I have my suspicions, and I’d like to point out that’s all they are, so take them for what they are worth. I think that in order for changes like this to be effective, there needs to be corresponding changes in the expectations for what you get out of the event. If you change the planning process, but the expectations for detail, control, and commitment/accountability don’t change, then you are going to run into problems. In other words, if the process changes, but the outputs and the expectations around them don’t change, then it’s very likely that sooner or later you will end up right back where you started.
I think you see this most commonly in organizations that implement a change (adopting agile, kanban, or anything else) and the top level management aren’t bought into the change. They are going to be looking for the same outputs that they’ve always had. They aren’t going to accept anything else. That puts pressure on the system to return to the status quo. Eventually, the change may still be there in name (as in our case), but effectively you have returned to the original system.
A few months ago (fall of 2015), my 6-year-old daughter saw a crochet kit at a bookstore. It was the kind of kit that comes with a crochet hook, yarn, and a book of patterns. There’s enough yarn to create two small projects and hopefully get you started on a new hobby.
My daughter immediately hatched an idea in her head – she would have us buy the kit, then she could learn how to crochet and create some adorable little animals to play with.
It was a perfect plan in a 6-year-old’s mind… until we said no.
Teaching By Learning
We didn’t completely say no to the idea of her learning how to crochet, though. Really, I thought it was a good idea for her to learn, but I didn’t want to buy that $15 kit when I knew we could buy yarn and a hook at a craft store for $3 total. So, that’s what we did – we went to the craft store and bought a few skeins of yarn and a couple of crochet hooks.
My grandma taught me to crochet when I was about 10 years old, so I knew what I was getting into. I told my daughter I would teach myself how to crochet again, and I would teach her at the same time.
Do You Want To Build A Snowman?
Fast forward to Christmas vacation in December of 2015 and I have a storage container full of yarn, crochet hooks, needles, scissors and other accessories. I’ve made a dozen toys and my daughter has learned the basic crochet chain pattern.
Now my mom wants me to make her a snowman.
Only, I’ve never made a snowman.
Sure, I’ve made other round things – an octopus, a few heads for other little dolls, etc – but a snowman? How do I do that? Is it just two round balls and a scarf?
How do I do a hat? Or a nose that sticks out?
As I was figuring out the body sections, I made a guess as to how I could do a hat. The nose was based on turtle legs, and the scarf – well, I found a pattern for a stuffed toy scarf. In the end, the basic patterns I used, modified and repurposed, produced a result that was better than I had expected!
I am quite proud of what I made, and ended up creating a second snowman for my grandma. This one was a little shorter, a little cuter, and had a lot of other improvements in how I made it.
Both my mom and grandma were very pleased with their snowmen, and it was a good Christmas break, overall.
Manipulating Patterns To Create Something New
During the creation of the second snowman, though, something about the reality of crocheting started to sink in.
I realized I don’t need to know how to make a snowman to make one. All I need are a few basic patterns to manipulate, and I can probably produce the desired result.
So I take this new-found idea and run with it. I start making things I’ve never seen before, or only have ideas about, including the BB8 droid from Star Wars: The Force Awakens (which I have seen 3 times, in theatre :P).
And somewhere in these moments of manipulating yarn in a single, continuous line, I recognized the parallels between patterns in crochet and software development.
I Don’t Know How… Not Yet
Like the snowman or the BB8 that I crocheted, I don’t know exactly how to build most software projects before building them. Rather, I have many small patterns in my mind, with each of these patterns made of smaller patterns, still.
I know how to use MongoDB, SQL Server, Oracle and other database systems. I know how to handle HTTP requests with Express.js, and serve HTML in response. I can organize files into separate folders based on features and functionality… and so much more.
Each of these things I know how to do represents a pattern – a basic method of solving a particular problem. But it’s not the one small thing that makes the software useful or complete. Neither does one round ball make a complete snowman or a BB8. Rather, it’s the combination of the individual pieces that produces the desired end result.
Software Patterns
It’s not that I have a cookie-cutter, copy & paste chunk of code for each part that I need. Instead, I have worked through the basic crochet stitch patterns, the basic expand and reduce patterns, the basic shape of a ball or a hat, and I have begun to understand the purpose for the pattern.
In the same way, I have collected a series of software patterns through repetition, practice and use. I worked through the basics of database connections, HTTP request handling, middleware, messaging systems, and more. These represent patterns of implementation that I can use, modify and implement as needed.
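To make that concrete, here is a toy sketch (my own illustration, not code from the post) of what one such reusable pattern looks like in plain JavaScript: a tiny middleware chain of the kind Express.js popularized, where each small function handles one concern and only the combination produces the end result:

```javascript
// A minimal middleware pattern: each function handles one small concern
// and passes control on via next(), mirroring how small crochet patterns
// combine into a finished snowman.
function compose(middlewares) {
  return function handle(req) {
    let index = 0;
    const res = { status: 200, body: "" };
    function next() {
      const mw = middlewares[index++];
      if (mw) mw(req, res, next);
    }
    next();
    return res;
  };
}

// Two tiny "patterns": request logging and HTML rendering
const logger = (req, res, next) => {
  req.log = `GET ${req.url}`; // record what was requested
  next();
};
const renderHtml = (req, res, next) => {
  res.body = `<h1>Hello from ${req.url}</h1>`; // serve HTML in response
  next();
};

const handle = compose([logger, renderHtml]);
```

Calling `handle({ url: "/snowman" })` yields a response whose body is `<h1>Hello from /snowman</h1>`; neither middleware is useful alone, but composed they produce the result.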
And from there – from the place where I know the technology and the patterns – I can spend my time on the more important aspects of any given project: learning what the business needs, and understanding the problems they are trying to solve.
A couple of years ago I drew this picture and started using it in various presentations about agile and lean development:
Since then the drawing has gone viral! It shows up all over the place, in articles and presentations, even in a book (Jeff Patton’s “User Story Mapping” – an excellent read, by the way). Many tell me the drawing really captures the essence of iterative & incremental development, lean startup, MVP (minimum viable product), and what not. However, some misinterpret it, which is quite natural when you take a picture out of its original context. Some criticize it for oversimplifying things, which is true. The picture is a metaphor. It is not about actual car development; it is about product development in general, using a car as a metaphor.
Anyway, with all this buzz, I figured it’s time to explain the thinking behind it.
First example – Not Like This
The top row illustrates a common misconception about iterative, incremental product development (a.k.a Agile).
Many projects fail badly because they do Big Bang delivery (build the thing until 100% done and deliver at the end). I’ve lost count of the number of failed projects I’ve seen because of this (scroll down for some examples). However, when Agile is presented as an alternative, people sometimes balk at the idea of delivering an unfinished product – who wants half of a car? Imagine this:
“Here sir, here’s our first iteration, a front tire. What do you think?”
Customer is like “Why the heck are you delivering a tire to me? I ordered a CAR! What am I supposed to do with this?”
(By the way, I’m using the term “customer” here as a generic term to represent people like product managers, product owners, and early adopter users).
With each delivery the product gets closer to done, but the customer is still angry because he can’t actually use the product. It’s still just a partial car.
And finally, when the product is done, the customer is like “Thanks! Finally! Why didn’t you just deliver this in the first place, and skip all the other useless deliveries?”.
In this example he’s happy with the final product, because it’s what he ordered. In reality, that’s usually not true. A lot of time has passed without any actual user testing, so the product is most likely riddled with design flaws based on incorrect assumptions about what people need. So that smiley face at the end is pretty idealistic.
Anyway, the first row represents “bastardized agile”. Technically it might be incremental and iterative delivery, but the absence of an actual feedback loop makes it very risky – and definitely not agile.
Hence the “Not Like This” heading.
Second example – Like this!
Now for the second row.
Here we take a very different approach. We start with the same context – the customer ordered a car. But this time we don’t just build a car. Instead we focus on the underlying need the customer wants fulfilled. Turns out that his underlying need is “I need to get from A to B faster”, and Car is just one possible solution to that. Remember, car is just a metaphor, think any kind of customized product development situation.
So the team delivers the smallest thing they can think of that will get the customer testing things and giving us feedback. Some might call it an MVP (Minimum Viable Product), but I prefer to call it Earliest Testable Product (more on that further down).
Call it what you like (some even call their first release “the skateboard version” of the product, based on this metaphor…).
The customer is unlikely to be happy with this. This is nowhere near the car he ordered. But that’s OK! Here’s the kicker – we’re not trying to make the customer happy at this point. We might make a few early adopters happy (or pragmatists in pain), but our main goal at this point is just to learn. Ideally, the team explains this clearly to the customer in advance, so he isn’t too disappointed.
However, as opposed to the front wheel in the first scenario, the skateboard is actually a usable product that helps the customer get from A to B. Not great, but a tiny bit better than nothing. So we tell the customer “don’t worry, the project is not finished, this was just the first of many iterations. We’re still aiming to build a car, but in the meantime please try this and give us feedback”. Think big, but deliver in small functionally viable increments.
We might learn some really surprising things. Suppose the customer says he hates the skateboard, we ask why, and he says “I hate the color”. We’re like “uh… the color? That’s all?”. And the customer says “Yeah, make it blue! Other than that, it’s fine!”. You just saved *a lot* of money not building the car! Not likely, but who knows?
The key question is “What is the cheapest and fastest way we can start learning?” Can we deliver something even earlier than a skateboard? How about a bus ticket?
Will this help solve the customer’s problem? Maybe, maybe not, but we’ll definitely learn something by putting this into the hands of real users. Lean Startup offers a great model based on listing your actual hypotheses about the users and then working systematically to validate or invalidate them.
You don’t need to test the product on all users, and you don’t need to build a product to test something. Testing a prototype on even a single user will teach you more than nothing.
But OK, back to the skateboard example.
After playing around with it in the office, the customer says “OK, kind of fun, and it does get me to the coffee machine faster. But it’s unstable. I fall off too easily”.
So the next iteration we try to solve that problem, or at least learn more about it.
Customer can now get around the office without falling off!
Happy? Not really, he still kind of wants that car. But in the meantime he is actually using this product, and giving us feedback. His biggest complaint is that it’s hard to travel longer distances, like between buildings, due to the small wheels and lack of brakes. So, next release the product morphs into something like a bicycle.
Now the customer can zoom around campus. Yiihaaa!
We learn some things along the way: The customer likes the feeling of fresh air on his face. The customer is on a campus, and transportation is mostly about getting around between buildings.
The bicycle may turn out to be a much better product than the car originally envisioned. In fact, while testing out this product we may learn that the paths are too narrow for a car anyway. We just saved the customer tons of time and money, and gave him a better product in less time!
Now you may be thinking “but shouldn’t we already have known that via up-front analysis of the customer’s context and needs?” Good point. But in most real-life product development scenarios I’ve seen, no matter how much up-front analysis you do, you’re still surprised when you put the first real release into the hands of a real user, and many of your assumptions turn out to be way off.
So yes, do some up-front analysis, discover as much as you can before starting development. But don’t spend too much time on it and don’t trust the analysis too much – start prototyping and releasing instead, that’s when the real learning happens.
Anyway, back to the story. Perhaps the customer wants more. Sometimes he needs to travel to another city, and the bike ride is too slow and sweaty. So next iteration we add an engine.
This model is especially suitable for software, since software is, well, Soft. You can “morph” the product as you go, as opposed to hardware where you essentially have to rebuild every time. However, even in hardware projects there is a huge benefit to delivering prototypes to observe and learn from how the customer uses your product. It’s just that the iterations tend to be a bit longer (months instead of weeks). Even actual car companies like Toyota and Tesla do a lot of prototyping (sketches, 3d models, full-scale clay models, etc) before developing a new car model.
So now what? Again, maybe the customer is happy with the motorcycle. We could end the project earlier than planned. Most products are riddled with complexity and features that nobody uses. The iterative approach is really a way of delivering less, or finding the simplest and cheapest way to solve the customer’s problem. Minimize the distance to Awesome. Very Zen.
Or, again, the customer could choose to continue, with or without modifications to the requirements. We may in fact end up with the exact same car as originally envisioned. However it is much more likely that we gain vital insights along the way and end up with something slightly different. Like this:
The customer is overjoyed! Why? Because we learned along the way that he appreciates fresh air in his face, so we ended up with a convertible. He did get a car after all – but a better car than originally planned!
So let’s take a step back.

What’s your skateboard?
The top scenario (delivering a front tire) sucks because we keep delivering stuff that the customer can’t use at all. If you know what you’re doing – your product has very little complexity and risk, perhaps you’ve built that type of thing hundreds of times before – then go ahead and just do big bang. Build the thing and deliver it when done.
However, most product development efforts I’ve seen are much too complex and risky for that, and the big bang approach all too often leads to huge expensive failures. So the key question is What’s your skateboard?
In product development, one of the first things you should do (after describing what problem you are trying to solve for whom) is to identify your skateboard-equivalent. Think of the skateboard as a metaphor for the smallest thing you can put in the hands of real users, and get real feedback. Or use “bus ticket” if that metaphor works better.
This will give you the vitally needed feedback loop, and will give both you and the customer control over the project – you can learn and make changes, instead of just following the plan and hoping for the best.
Let’s take a look at some real-life examples.

Example 1: Spotify music player
“With over 75 million users, it’s hard to remember a time without Spotify. But there was. A time when we were all mulling the aisles of Target for new CDs. A time in our lives where we all became thieves on Napster. A time when iTunes forced us to buy songs for $2/piece. And then came Spotify.” –Tech Crunch
Spotify is a pretty fancy product now. But it didn’t start that way. I was lucky to be involved pretty early in this amazing journey (and still am).
As a startup in 2006, Spotify was founded on some key assumptions – that people are happy to stream (rather than own) music, that labels and artists are willing to let people do so legally, and that fast and stable streaming is technically feasible. Remember, this was 2006 when music streaming (like Real Player) was a pretty horrible experience, and pirate-copied music was pretty much the norm. The technical part of the challenge was: “Is it even possible to make a client that streams music instantly when you hit the Play button? Is it possible to get rid of that pesky ‘Buffering’ progress bar?”
Starting small doesn’t mean you can’t think big. Here’s one of the early sketches of what they had in mind:
But instead of spending years building the whole product, and then finding out if the assumptions hold, the developers basically sat down and hacked up a technical prototype, put in whatever ripped music they had on their laptops, and started experimenting wildly to find ways to make playback fast and stable. The driving metric was “how many milliseconds does it take from when I press Play to when I hear the music?”. It should play pretty much instantly, and continue playing smoothly without any stuttering! Once they had something decent, they started testing on themselves, their family, and friends.
The initial version couldn’t be released to a wider audience, it was totally unpolished and had basically no features except the ability to find and play a few hard-coded songs, and there was no legal agreement or economic model. It was their skateboard.
But they shamelessly put the skateboard in the hands of real users – friends and family – and they quickly got the answers they needed. Yes, it was technically possible. And yes, people absolutely loved the product (or rather, what the product could become)! The hypotheses were validated! This running prototype helped convince music labels and investors and, well, the rest is history.

Example 2: Minecraft
Minecraft is one of the most successful games in the history of game development, especially if you take development cost into consideration. Minecraft is also one of the most extreme examples of the release-early-and-often mindset. The first public release was made after just 6 days of coding, by one guy! You couldn’t do much in the first version – it was basically an ugly, blocky 3D landscape where you could dig up blocks and place them elsewhere to build crude structures.
That was the skateboard.
The users were super-engaged though (most developer-user communication happened via Twitter, pretty funny). Among the early users were me and my four kids. Over a hundred releases were made during the first year. Game development is all about finding the fun (some game companies I’ve worked with use the term “Definition of Fun” instead of “Definition of Done”), and the best way to do that is by having real people actually play the game – in this case thousands of real people who had actually paid to try the early access version and therefore had a personal incentive to help improve the game.
Gradually a small development team was formed around the game (mostly 2 guys actually), and the game became a smash hit all over the world. I don’t think I’ve met any kid anywhere who doesn’t play Minecraft. And last year the game (well, the company that was formed around the game) was sold to Microsoft for $2.5 billion. Quite amazing.

Example 3: Big Government Project
Around 2010 the Swedish Police started a big initiative to enable police to spend more time in the field and less time at the station – PUST (Polisens Utrednings STöd). A fascinating project, I was involved as coach and wrote a book about what we did and what we learned (Lean from the Trenches).
The idea was to put laptops in the cars, and customized software to give police access to the systems they need in real-time, for example while interrogating a suspect (this was the pre-tablet age).
They had tried to build similar systems in the past and failed miserably, mainly because of Big Bang thinking. They told me that one of their previous attempts took 7 years from inception to first release. SEVEN YEARS! By then of course everything had changed and the project was a total failure. So this time they wanted to do it differently.
The 60-person project (later referred to as “PUST Java”) succeeded surprisingly well, especially for being a big government project (it even came second in CIO Awards “Project of the Year”). One of the main success factors was that they didn’t try to build the whole thing at once – they split the elephant along two dimensions:
- By Region. We don’t need to release to ALL of Sweden at once, we could start by releasing to just one region.
- By Crime type. We don’t need to support ALL crime types initially, we could start by just supporting 1-2 crime types.
The first version, 1.0, was their skateboard.
It was a small system, supporting just a couple of crime types, and it was field-tested on a handful of cops in Östergötland (a region in Sweden). Other crime types had to be dealt with the old way – drive to the station and do paperwork. They knew they were guinea pigs, and that the product was nowhere near finished. But they were happy to test it, because they knew the alternative. They had seen what kind of crappy systems come out of processes that lack early user feedback, and now they finally had a chance to influence a system while it was being built!
Their feedback was harsh and honest. Many of our assumptions flew out the window, and one of the big dilemmas was what to do with all the carefully crafted Use Case specifications that were getting less and less relevant as the real user feedback came in (this was an organization with a waterfall history and a habit of doing big upfront analysis).
Anyway, long story short, the early feedback was channeled into product improvements and, gradually, as those cops in Östergötland started liking the product, we could add more crime types and spread it to more regions. By the time we got to the big release (1.4), with nationwide rollout and training of 12,000 police, we weren’t all that worried. We had done so many releases, so much user testing, that we slept well on the night of the nationwide release.
Unfortunately the victory was short-lived. A follow-up project (PUST Siebel) botched it and went back to waterfall thinking, probably out of old habit. Two years of analysis and testing without any releases or user testing, followed by a big-bang release of the “next generation” of the product to all 12,000 police at once. It was an absolute disaster, and after half a year of hemorrhaging they shut the whole thing down. The development cost was about €20 million, but internal studies estimate that the cost to Swedish society (because the police were handicapped by the horrible system) was on the order of €1 billion!
Pretty expensive way to learn!

Example 4: Lego
I’m currently working at Lego, and I’m amazed by their ability to deliver new smash-hits, year after year without fail. I hear lots of interesting stories about how they do this, and the common theme is prototyping and early user testing! I often see groups of kids in the office, and designers collaborate with local kindergartens and schools and families to field-test the latest product ideas.
Here’s a recent example – Nexo Knights (released Jan 2016):
When they first started exploring this concept, they made paper prototypes and brought them to small kids. The kids’ first reaction was “hey, who are the bad guys? I can’t see who’s good and who’s bad!”. Oops. So the designers kept iterating and testing until they found a design that worked with the kids. I bet even you can see who’s good and who’s evil in the picture above…
Not sure exactly where the skateboard is in this story, but you get the idea – early feedback from real users! Don’t just design the product and build the whole thing. Imagine if they had built the product based on their original design assumptions, and learned about the problem after delivering thousands of boxes to stores all over the world!
Lego also has its share of hard-earned failures. One example is Lego Universe, a massively multiplayer online Lego world. Sounds fun, huh? Problem is, they got overambitious and ended up trying to build the whole thing to perfection before releasing it to the world.
About 250 people worked for 4-5 years (because of constant scope creep due to perfectionism), and when the release finally came the reception was… lukewarm. The finished game was beautiful, but not as fun as expected, so the product was shut down after 2 years.
There was no skateboard!
Why not? Because skateboards aren’t Awesome (at least not if you’re expecting a car), and Lego’s culture is all about delivering Awesome experiences! If you work at Lego HQ in Billund, Denmark you walk past this huge mural every day:
Translates roughly to “Only the best is good enough”. It has been Lego’s guiding principle ever since the company started 80+ years ago, and has helped them become one of the most successful companies in the world. But in this case the principle was misapplied. The hunt for perfection delayed vital feedback, which meant mistaken assumptions about what the users like and don’t like. The exact opposite of Minecraft.
Interestingly enough the Lego Universe teams were actually using Scrum and iterating heavily – just like the Minecraft guys did. But the releases were only internal. So there was most likely a skateboard, and a bicycle, and so on, but those products never reached real users. That’s not how Scrum is intended to be used.
It was an expensive failure, but Lego learned from it and they are constantly getting better at early testing and user feedback.

Improving on “MVP”
And that (deep breath…) brings me to the topic of MVP – Minimum Viable Product.
The underlying idea is great, but the term itself causes a lot of confusion and angst. I’ve met many customers that are like “No Way do I want an MVP delivery – that’s the last delivery I’ll get!” All too often teams deliver the so-called Minimum Viable Product, and then quickly get whisked away to the next project, leaving the customer with a buggy, unfinished product. For some customers, MVP = MRC (Minimum Releasable Crap).
I know, I know, this comes down to bad management rather than the term MVP, but still… the term invites misunderstanding. “Minimum” and “Viable” mean different things to different people, and that causes problems.
So here’s an alternative.
First of all, replace the word “Minimum” with “Early”. The whole idea behind releasing an MVP is to get early feedback – by delivering a minimum product rather than a complete product, we can get the feedback earlier.
Few customers want “minimum” but most customers want “early”! So that’s our first change:
Minimum => Earliest
Next, remove the word “Viable” since it’s too vague. Your “viable” is my “horrible”. Some people think Viable means “something I can test and get feedback from”, others think it means “something the customer can actually use”. So let’s be more explicit and split it into three different things:
Earliest Testable Product is the skateboard or bus ticket – the first release that customers can actually do something with. Might not solve their problem, but it generates at least some kind of feedback. We make it very clear that learning is the main purpose of this release, and that any actual customer value will be a bonus.
Earliest Usable Product is perhaps the bicycle. The first release that early adopters will actually use, willingly. It is far from done, and it might not be very likeable. But it does put your customers in a better position than before.
Earliest Lovable Product is perhaps the motorcycle. The first release that customers will love, tell their friends about, and be willing to pay for. There’s still lots to improve, and we may still end up with a convertible, or a plane, or something else. But we’ve reached the point where we have a truly marketable product.
I considered adding an even earlier step “Earliest Feedbackable Product”, which is basically the paper prototype or equivalent that you use to get your first feedback from the customer. But four steps seems too many, and the word Feedbackable… ugh. But nevertheless, that is also an important step. Some would call a paper prototype the Earliest Testable Product, but I guess that comes down to how you define Testable. Check out Martin’s MVP Guide to learn more – he’s got plenty of super-concrete examples of how to get early feedback with minimum investment.
Of course people can still misinterpret Earliest Testable/Usable/Lovable, but it’s at least one step more explicit than the nebulous Minimum Viable Product.

Takeaway points
OK time to wrap it up. Never thought it would get this long, thanks for sticking around! Key takeaways:
- Avoid Big Bang delivery for complex, innovative product development. Do it iteratively and incrementally. You knew that already. But are you actually doing it?
- Start by identifying your skateboard – the earliest testable product. Aim for the clouds, but swallow your pride and start by delivering the skateboard.
- Avoid the term MVP. Be more explicit about what you’re actually talking about. Earliest testable/usable/lovable is just one example; use whatever terms are least confusing to your stakeholders.
And remember – the skateboard/car drawing is just a metaphor. Don’t take it too literally :o)
PS – here’s a fun story about how my kids and I used these principles to win a Robot Sumo competition :o)
Thanks Mary Poppendieck, Jeff Patton, Alistair Cockburn, Anders Haugeto, Sophia, colleagues from Crisp, Spotify and Lego, and everyone else who gave tons of useful feedback.
I’ve been playing around with Clojure a bit today in preparation for a talk I’m giving next week and found myself writing the following code to apply the same function to three different scores:
(defn log2 [n]
  (/ (Math/log n) (Math/log 2)))

(defn score-item [n]
  (if (= n 0) 0 (log2 n)))

(+ (score-item 12) (score-item 13) (score-item 5))
;; => 9.60733031374961
I’d forgotten about folding over a collection but quickly remembered that I could achieve the same result with the following code:
(reduce #(+ %1 (score-item %2)) 0 [12 13 5])
;; => 9.60733031374961
The added advantage here is that if I want to add a 4th score to the mix all I need to do is append it to the end of the vector:
(reduce #(+ %1 (score-item %2)) 0 [12 13 5 6])
;; => 12.192292814470767
However, while Googling to remind myself of the order of the arguments to reduce I kept coming across articles and documentation about reducers which I’d heard about but never used.
As I understand it, they’re used to achieve performance gains and easier composition of functions over collections, so I’m not sure how useful they’ll be to me, but I thought I’d give them a try.
Our first step is to bring the namespace into scope:
(require '[clojure.core.reducers :as r])
Now we can compute the same result using the reduce function:
(r/reduce #(+ %1 (score-item %2)) 0 [12 13 5 6])
;; => 12.192292814470767
So far, so identical. If we wanted to calculate individual scores and then filter out those below a certain threshold the code would behave a little differently:
(->> [12 13 5 6]
     (map score-item)
     (filter #(> % 3)))
;; => (3.5849625007211565 3.700439718141092)

(->> [12 13 5 6]
     (r/map score-item)
     (r/filter #(> % 3)))
;; => #object[clojure.core.reducers$folder$reify__19192 0x5d0edf21 "clojure.core.reducers$folder$reify__19192@5d0edf21"]
Instead of giving us a sequence of scores, the reducers version returns a reducer, which we can pass to reduce or fold if we want an accumulated result, or to into if we want to output a collection. In this case we want the latter:
(->> [12 13 5 6]
     (r/map score-item)
     (r/filter #(> % 3))
     (into []))
;; => [3.5849625007211565 3.700439718141092]
With a measly 4-item collection I don’t think the reducers are going to provide much of a speed improvement here, and we’d need to use the fold function if we wanted the collection to be processed in parallel.
One for next time!
Little's Law is a powerful tool that relates the amount of work a team is doing to the average lead time of each work item. There are two main applications, involving either 1) the input rate of work entering the team, or 2) the throughput of completed work.
In previous posts (Applying Little's Law in agile games, Why Little's law works...always) I explained that Little's Law is exact and has hardly any assumptions, other than work entering the team (or system).
This post demonstrates this by calculating Little's Law at every project day while playing GetKanban.
The video below clearly shows that Little's Law holds exactly at every project day. For both the input rate and throughput versions. Throughput is based on the subclass of 'completed' items.
E.g. on the yellow post-it, the product of lambda and W equals N on every project day.

http://blog.xebia.com/wp-content/uploads/2016/01/LittlesLaw_540p.mp4
The set-up is that we run the GetKanban game from day 9 through day 24. The video will show on the right hand side the board and charts whereas the left hand side shows the so-called 'sample path' and Little's Law calculation for both input rate (yellow post-it) and throughput (green post-it).
Sample Path. The horizontal axis shows the project day, running from 9 till 24. The vertical axis shows the work items: each row represents an item on the board.
The black boxes mark the days that the work is on the board. For example, item 8 was in the system on project day 9 and completed at the end of project day 12, when it was deployed.
The collection of all black boxes is called a 'Sample Path'.
Little's Law. The average number of items in the system (N) is shown on top. This is an average over the project days. W denotes the average lead time of the items. This is an average taken over all work items.
Input rate: on the yellow post-it, the Greek lambda indicates the average number of work items entering the system per day.
Throughput: the green post-it indicates the average number of work items completed per day. This is indicated by the Greek mu.
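The calculation behind the post-its can be sketched with a few lines of code. Here is a minimal example in Python, using a small made-up sample path (not the actual GetKanban data): each item is a pair of arrival and departure days, and N, lambda, and W are all averaged over the same observation window.

```python
# Little's Law on a small, hypothetical sample path (not the GetKanban data).
# Each item is (arrival_day, departure_day), both inclusive: the item is on
# the board on every day in that range.
items = [(9, 12), (9, 14), (10, 13), (11, 16), (12, 15)]

first_day, last_day = 9, 16
num_days = last_day - first_day + 1

# N: average number of items on the board, averaged over project days.
daily_counts = [sum(1 for arr, dep in items if arr <= day <= dep)
                for day in range(first_day, last_day + 1)]
N = sum(daily_counts) / num_days

# lambda: average number of items entering the system per day.
lam = len(items) / num_days

# W: average lead time, averaged over work items (days on the board).
W = sum(dep - arr + 1 for arr, dep in items) / len(items)

print(N, lam * W)  # both equal 3.0 -- the law holds exactly on this sample path
```

Since every item in this toy sample path also completes within the window, the throughput version (mu times W) gives the same result; on a real board the two versions differ only in which subclass of items you average over.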
Note: the numbers on the green post-it are obtained by considering only the subclass of work that is completed (the red boxes).

References