A new bundle of books with agile practices and tips has been released on Leanpub. Buy these books with a 40% discount!
The bundle includes six great books from eleven authors, helping you to make your agile journey easier to travel, more successful, and fun!
- With plenty of exercises for your personal retrospective toolbox, Getting Value out of Agile Retrospectives will help you to become more proficient in doing retrospectives and to get more out of them.
- A Toolbox for the Agile Coach: 96 Visualization Examples shows how great teams visualize their work.
- The tools and techniques provided in the Forming Agile Teams workbook offer an alternative, proven way to add more structure, transparency, and visibility to the work that you do when forming agile teams, combining visual explanations with techniques and tips to support the Scrum Master's crucial role within the organization.
- The Scrum Master Workbook Part 1 provides 15 weeks of accelerated learning. It teaches you ways to deal with conflict, bugs, interruptions, meetings and many more topics.
- Patterns of Agile Journeys shares stories and patterns to help you recognize situations you may find yourself in on your own journey. Use the tips in this book to reinforce or counteract the patterns you see.
- The book Continuous Improvement makes you aware of the importance of continuous improvement, explores how it is ingrained in agile, and provides suggestions that Scrum Masters, agile coaches (well, everybody) can use in their daily work to improve continuously and increase team and organizational agility.
Together these books provide many useful tips and practices for your agile journey. Buy them for $40.77 (regular price is $67.95).
There’s also the Agile Retrospectives Books Bundle with six great books that will make your agile retrospectives rock, and the Valuable Agile Retrospectives – All Languages Bundle which contains all language editions of my successful book Getting Value out of Agile Retrospectives.
This is the final post originally published on the Rally Blog, which I am reposting here to keep an archived copy.
I live in Brighton, on the south coast of the UK, about 50 miles from London. This means that I regularly catch the train for meetings or engagements “in town”. When making the journey, I always look at the timetable. Trains only run every 30-60 minutes, so if I get the timing wrong, then I’m most likely left hanging around at the station. Not a great use of time, especially with the typical British weather. When I get into London and need to catch the tube somewhere, however, it’s a different story. I just head to the right platform and wait for a train, knowing that one should turn up in a few minutes. There’s no need to check the timetable.
What does this have to do with Agile? I was recently on a Q&A panel and fielded a question about how to deal with fixed date and scope projects. The story above hints at the answer.

Managing Variables
Before we come to that, let’s first look at the common ‘Iron Triangle’ variables of time, cost and scope. If the date (and hence time) and scope are fixed, then logic suggests that the only thing we can vary is cost. This typically means adding people, although it could mean throwing money at the problem in some other way. Brooks’s Law, “Adding manpower to a late software project makes it later”, says that this will not work. An Agile approach can mean that any problems meeting the date and scope will be discovered earlier, and hence the effect of Brooks’s Law can be minimised. Colleague Alex Pukinskis recently blogged about how the Rally development team cheated Brooks’s Law with such an approach.
If varying cost isn’t an option, then there are a couple of other options. The first is cutting corners and reducing quality. Note that I am not recommending this option! Having said that, if the date is critical for learning and feedback regarding a value hypothesis, then quality may be less critical, assuming quality will be built in once the value is well understood.
The other variable is fidelity: the finesse of the solution. Delivering a low-fidelity solution first ensures that scope can be met early. The functionality can then be iterated on to increase fidelity, knowing that when the date arrives, scope is in the bag.

The Alternative
There’s a less obvious solution to the problem, however. Date and scope are often fixed as a reaction to the risk of “missing the train”. We want to be sure of what we get, and when we get it, because if functionality doesn’t make it into a release, we don’t want it left on the platform waiting for the next one. We can address that risk in another way.
Here’s another example. We (ok, well, my wife) generally do a weekly shop at a nearby out-of-town supermarket. Because it’s weekly, we spend time planning by putting together a shopping list, thinking of everything we might need during the week. After all, if we don’t get everything we need, it will be another week until the next shop. This often results in over-stocking and the waste of throwing out unused perishable food.
However, when we go and visit a friend who lives in a small village in the Lake District, we just pop into the local shops every day to get whatever we fancy for that day. There is no need to plan ahead or make decisions about what we are going to eat days in advance. The local produce might be slightly more expensive than the big supermarkets, but it's higher quality and there's less wasted food. We trade off a slight increase in cost for higher quality, deferred decisions, and less waste.
So instead of worrying about how to deliver to fixed time, scope and cost constraints (not to mention quality), I would recommend figuring out how to release more frequently.
If your releases are like a tube train, arriving every day or so, then the need to plan time and scope lessens. Planning and implementing in smaller batches significantly reduces the cost, allowing more time to build the desired scope by the desired date. If a feature misses a release, it can just go into the next one straight after.
Try this approach and let me know how it goes!
The Manifesto voor Agile Veranderen (manifesto for agile change) helps organisations increase their agility. It leads to lasting improvement in results, satisfied customers, and happy employees. This first article on agile change describes the starting points and values using the manifesto for agile change.
Agile software development is based on the Manifesto for Agile Software Development, which contains four values and twelve principles. The manifesto for agile change is structured in the same way. It describes my vision and way of working in organisational change, summarised in four values.

My change "values"
These are the values of my Manifesto voor Agile Veranderen:
- Involving professionals and giving room for their ideas over standardising and prescribing work processes.
- Stepwise, evolutionary improvement from within over imposing change top-down.
- Result-oriented, intensive collaboration over directive goals with "command & control" management.
- Prioritising and responding flexibly to opportunities over budgeting and executing change plans.
The values on the right-hand side of the statements above are and remain important, but I prefer to give more attention to those on the left. That is why, for example, I favour mapping the existing work processes together with the employees and jointly working on improvement using retrospectives, rather than an organisation-wide rollout of Scrum with standard training. And I would rather work with a change backlog, in which priorities are easy to adjust, than with a fixed plan.

Change itself also changes
Unlike the Agile Manifesto, which has stayed the same for 15 years, I expect this manifesto to change. The first evolution is already visible if you compare it with the change manifesto of veranderproject, a collaboration from a few years ago. For example, words such as "verbinden" (connecting) have been elaborated into "result-oriented, intensive collaboration", and the manifesto for agile change now names the role of the professional and a bottom-up approach to change.
In the near future I will publish several articles that dig deeper into the values of this manifesto, with examples of involving professionals in change initiatives, top-down versus bottom-up change, evolutionary versus revolutionary change, and result-oriented change.
In Part 1, I talked about small stories/chunks of work, checking in all the time so you could build often and see progress. That assumes you know what done means. Project “done” means release criteria. Here are some stories about how I started using release criteria.
Back in the 70s, I worked in a small development group. We had 5 or 6 people, depending on the time of year. We worked alone on our parts of the system. We all integrated into one instrument, but we worked primarily alone. This is back in the days of microcomputers. I wrote assembler, Fortran, or microcode, depending on the part of the system. I still worked on small chunks, “checked in,” as in I made sure I saved my files. No, we had no real version control then.
We had a major release in about 1979 or something like that. I’d been there about 15 months by then. Our customers called the President of the company, complaining about the software. Yes, it was that bad.
Why was it that bad? We had thought we were working towards one goal. Our customers wanted a different goal. If I remember correctly, these were some of the problems (that was a long time ago. I might have forgotten some details.):
- We did not have a unified approach to how the system asked for information. There was no GUI, but the command line was not consistent.
- Parts of the system were quite buggy. The calculations were correct, but the information presentation was slow, in the wrong place, or didn’t make sense. Our customers had a difficult time using the system.
- Some parts of the system were quite slow. Not the instrument, but how the instrument showed the data.
- The parts didn’t fit together. No one had the responsibility of making sure that the system looked like one system. We all integrated our own parts. No one looked at the whole.
Oops. My boss told us we needed to fix it. I asked the question, “How will we know we are done?” He responded, “When the customers stop calling.” I said, “No, we’re not going to keep shipping more tape to people. What are all the things you want us to do?” He said, “You guys are the software people. You decide.”
I asked my colleagues if it was okay if I developed draft release criteria, so we would know when the release was done. They agreed. I developed them over the next half day, wrote a memo, and asked people for a meeting to see if they agreed. (This was in the days before email.)
We met and we changed my strawman criteria to something we could all agree on. We now knew what we had to do. I showed the criteria to my boss. He agreed with them. We worked to the release criteria, sent out a new tape (before the days of disks or CDs!) to all our customers and that project finally finished.
I have used the idea of release criteria on every single project since. For me, it's a powerful idea: to know what done means for the project.
I wrote a release criteria article (see the release criteria search for my release criteria writing) and explained it more in Manage It! Your Guide to Modern, Pragmatic Project Management.
In the 80s, I used it for a project where we did custom machine vision implementations. If I hadn’t, the customer would have continued asking for more features. The customer did anyway, but we could ask for more money every time we changed the release criteria to add more features.
I use release criteria and milestone criteria for any projects and programs longer than three months in duration, so we can see our progress (or lack thereof) earlier rather than later. To be honest, even if we think the project is only a couple of months, I always ask, "Do we know what done means for this project?" For small projects, I want to make sure we finish and don't add more to the project. For programs, I want to make sure we all know where we are headed, so we get there.
Here’s how small chunks of work, checking in every day, and release criteria all work together:
- Release criteria tell you what done means for the project. Once you know, you can develop scenarios for checking on your “doneness” as often as you like. I like automated tests that we can run with each build. The tests tell us if we are getting closer or farther away from what we want out of our release.
- When you work in small chunks, check them in every day and build at least as often as every day, you can check on the build progress. You know if the build is good or not.
- If you add the idea of scenarios for testing as you proceed, release becomes a business decision, not a “hardening sprint” or some such.
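As a minimal sketch of how these three ideas combine (every file name here is illustrative, not from the original post), a tiny shell script can save each small chunk together with its test, "build", and run a release-criteria check on every build:

```shell
#!/bin/sh
# Hypothetical sketch: small chunks, frequent check-ins, and an automated
# release-criteria check on every build. All names are made up.
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p history

n=0
checkin() {                    # stand-in for a real version control commit
  n=$((n + 1))
  cp "$1" "history/$1.$n"
}

# A small chunk of work, checked in together with its test.
printf 'add() { echo $(( $1 + $2 )); }\n' > feature.sh
printf '. ./feature.sh; [ "$(add 2 3)" = 5 ]\n' > feature_test.sh
checkin feature.sh
checkin feature_test.sh

# "Build" and run the release-criteria tests, so every build tells you
# whether you are closer to or farther from done.
sh feature_test.sh && echo "release criteria met"
```

The point of the sketch is the rhythm, not the tooling: each chunk and its test go in together, and the doneness check runs on every build.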
Here’s a little list that might help you achieve frictionless releases:
- What do you need to do to make your stories small? If they are not one day, can you pair, swarm, or mob to finish one story in one day? What would you have to change to do so?
- If you have curlicue stories, what can you do to make your stories look like the straight line through the architecture?
- What can you do to check in all the time? Is it a personal thing you can do, or do you need to ask your entire team to check in all the time? I don’t know how to really succeed at agile without continuous integration. What prevents you from integrating all the time? (Hint, it might be your story size.)
- Do you know what done means for this release (interim and project)? Do you have deliverable-based planning to achieve those releases?
Solve these problems and you may find frictionless releases possible.
When you make releasing externally a business decision—because you can release internally any time you want—you will find your project proceeds more smoothly, regardless of whether you are agile.
Reminder: If you want to learn how to make your stories smaller or solve some of the problems of non-frictionless releases, join my Practical Product Owner workshop, starting August 23, 2016. You’ll practice on your projects, so you can see maximum business value from the workshop.
Would you like to release your product at any time? I like it when releases are a business decision, not a result of blood, sweat, and tears. It’s possible, although it might not be easy for you. Here are some stories that show how I did it, long ago and more recently.
Story 1: Many years ago, I was a developer on a moderately complex system. There were three of us working together. We used RCS (yes, it was in the ’80s or something like that). I hated that system. Maybe it was our installation of it. I don’t know. All I know is that it was too easy to lock each other out, and not be able to do a darn thing. My approach was to make sure I could check in my work in as few files as possible (Single Responsibility Principle, although I didn’t know it at the time), and to work on small chunks.
I checked in every day at least before I went to lunch, once in the middle of the afternoon, and before I left for the day. I did not do test-first development, and I didn’t check my tests in at the time. It took me a while to learn that lesson. I only checked in working code—at least, it worked on my machine.
We built almost every day. (No, we hadn’t learned that lesson either.) We could release at least once a week, closer to twice a week. Not frictionless, but close enough for our needs.
Story 2: I learned some lessons, and a few years later, I had graduated to SCCS. I still didn’t like it. Merging was not possible for us, so we each worked on our own small stuff. I still worked on small chunks and checked in at least three times a day. This time, I was smarter, and checked in my tests as I wrote code. I still wrote code first and tests second. However, I worked in really small chunks (small functions and the tests that went with them) and checked them in as a unit. The only time I didn’t do that is if it was lunch or the end of the day. If I was done with code but not tests, I checked in anyway. (No, I was not perfect.) We all had a policy of checking in all our code every day. That way, someone else could take over if one of us got sick.
Each of us did the same thing. This time, we built whenever we wanted a new system. Often, it was a couple of times a day. We told each other, “Don’t go there. That part’s not done, but it shouldn’t break anything.” We had internal releases at least once a day. We released as a demo once a week to our manager.
After that, I worked at a couple of places with home-grown version control systems that looked a lot like Subversion does now. That was in the late ’80s. I became a project manager and program manager.
Story 3: I was a program manager for a 9-team software program. We’d had trouble in the past getting to the point where we could release. I asked teams to do these things: Work towards a program-wide deliverable (release) every month, and use continuous integration. I said, “I want you to check everything in every day and make sure we always have a working build. I want to be able to see the build work every morning when I arrive.” Seven teams said yes. Two teams said no. I explained to the teams they could work in any way they wanted, as long as they could integrate within 24 hours of seeing everyone else’s code. “No problem, JR. We know what we’re doing.”
Well, those two teams didn’t deliver their parts at the first month milestone. They were pissed when I explained they could not work on any more features until they integrated what they had. Until they had everything working, no new features. (I was pissed, too.)
It took them almost three weeks to integrate their four weeks of work. They finally asked for help and a couple of other guys worked with the teams to untangle their code and check everything in.
I learned the value of continuous integration early. Mostly because I was way too lazy (forgetful?, not smart enough?) to be able to keep the details of an entire system in my head for an entire project. I know people who can. I cannot. I used to think it was one of my failings. I now realize many people only think they can keep all the details. They can’t either.
Here’s the technical part of how I got to frictionless releases:
- Make the thing you work on small. If you use stories, make the story a one-day or smaller story. I don’t care if the entire team works on it or one person works on it (well, I do care, and that’s a topic for another post), but being able to finish something of value in one day means you can descend into it. You finish it. You come up for air/more work and descend again. You don’t have to remember a ton of stuff related but not directly a part of this feature.
- Use continuous integration. Check in all the time. Now that I write books using subversion, I check in whenever I have either several paras/one chunk, or it’s been an hour. I check that the book builds and I fix problems right away, when the work is fresh in my mind. It’s one of the ways I can write fast and write well. Our version control systems are much more sophisticated than the ones I used in the early days. I’m not sure I buy automated merge. I prefer to keep the stories small and cohesive. (See this post on curlicue features. Avoid those by managing to implement by feature.)
- Check in all the associated data. I check in automated tests and test data when I write code. I check in bibliographic references when I write books. If you need something else with your work product, do it at the time you create. If I were a developer now, I would check in all my unit tests when I check in the code. If I were really smart, I might even check in the tests first, to do TDD. (TDD helps people design, not test.) If I were a tester, I would definitely check in all the automated tests as soon as possible. I could then ask the developers to run those tests to make sure they didn’t make a mistake. I could do the hard-to-define and necessary exploratory testing. (Yes, I did this as a tester.)
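The practices above can be enforced mechanically. Here is a hedged sketch (all names invented for illustration) of a check-in gate that refuses a chunk of work unless its associated tests come along and pass:

```shell
#!/bin/sh
# Hypothetical sketch: a check-in gate that only accepts a chunk of work
# when code and its tests arrive together and the tests are green.
# Names (trunk, double.sh, double_test.sh) are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
mkdir -p trunk

checkin() {   # refuse the chunk unless code + tests are present and pass
  code=$1; tests=$2
  [ -f "$code" ] && [ -f "$tests" ] || { echo "missing files"; return 1; }
  sh "$tests" || { echo "tests failed; not checked in"; return 1; }
  cp "$code" "$tests" trunk/ && echo "checked in: $code with $tests"
}

# A small function and the test that travels with it.
printf 'double() { echo $(( $1 * 2 )); }\n' > double.sh
printf '. ./double.sh; [ "$(double 4)" = 8 ]\n' > double_test.sh
checkin double.sh double_test.sh
```

A real team would hang this kind of gate off their version control hooks or build server; the design choice is the same either way: the tests are part of the chunk, not an afterthought.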
Frictionless releases are not just technical. You have to know what done means for a release. That’s why I started using release criteria back in the 70s. I’ll write a part 2 about release criteria.
This is another post originally published on the Rally Blog which I am reposting here to keep an archived copy. It was part of the same series as the one on annual and quarterly planning, in which we described various aspects of the way the business was run. Again, apart from minor edits to help it make sense as a stand-alone piece I have left the content as it was.
Strategy Deployment is sometimes known as Hoshin Kanri, and like many Lean concepts, it originated from Toyota. Hoshin Kanri is a Japanese term whose literal translation can be paraphrased as “compass control.” A more metaphorical interpretation, provided by Pascal Dennis in Getting the Right Things Done, is that of a “ship in a storm going in the right direction.”
Strategy Deployment is about getting everyone involved in the focus, communication, and execution of a shared goal. I described in previous posts how we collaboratively came up with strategies and an initial plan in the form of an X-matrix. The tool that we use for the deployment is the Strategic A3.

Strategic A3s
A3 refers to the size of the paper (approximately 11 x 17 inches) used by a number of different formats to articulate and communicate something in a simple, readable way on a single sheet of paper. Each rock or departmental team uses a Strategic A3 to describe its plan. This forms the basis for their problem-solving approach by capturing all the key hypotheses and results, which helps identify the opportunities for improvement.
The different sections of the A3 tell a story about the different stages of the PDSA cycle (Plan, Do, Study, Adjust). I prefer this latter formulation from Dr. W. Edwards Deming to the original PDCA (Plan, Do, Check, Act) of Walter A. Shewhart, because “Study” places more emphasis on learning and gaining knowledge. Similarly, “Adjust” implies feedback and iteration more strongly than does “Act.”
This annual Strategic A3 goes hand-in-hand with a macro, longer-term (three- to five-year) planning A3, and numerous micro, problem-solving A3s.

Anatomy of a Strategic A3
This is what the default template that we use looks like. While it is often good to work on A3s using pencil and paper, for wider sharing across the organisation we’ve found that using a Google document works well too.
Each A3 has a clear topic, and is read in a specific order: down the left-hand side, and then down the right-hand side. This flow aligns with the ORID approach (Objective, Reflective, Interpretive, Decisional), which helps avoid jumping to conclusions too early.
The first section looks at prior performance, gaps, and targets, which give objective data on the current state. Targets are a hypothesis about what we would like to achieve, and performance shows the actual results. Over time, the gap between the two gives an indication of what areas need investigation and problem-solving. The next section gives the reactions to, and reflections on, the objective data. This is where emotions and gut feelings are captured. Then comes interpretation of the data and feelings to give some rationale with which to make a plan.
The three left-hand sections help us look back into the past, before we make any decisions about what we should do in the future. Having completed that, we have much better information with which to complete the action plan, adding high-level focus and outcomes for each quarter. The immediate quarter will generally have a higher level of detail and confidence, with each subsequent quarter becoming less granular. Finally, the immediate next steps are captured and any risks and dependencies are noted so that they can be shared and managed.

Co-creating a Strategic A3
As you can probably imagine from reading the previous posts, the process of completing a Strategic A3 can be a highly collaborative, structured, and facilitated process. One team with which I work closely recently had grown to a point where we would benefit from our own Strategic A3, rather than being a part of a larger, international Strategic A3. To create it we all got together for a day in our Amsterdam office. We felt that this would allow us to align more strongly with the corporate strategy and communicate more clearly what we were doing, and where we needed help.
We began by breaking into small groups of three to four people, mostly aligned around a regional territory. These groups spent some time filling in their own copy of the A3 template. We then reconvened together and each group gave a readout of its discussions, presenting the top three items from each section, which we captured with post-it notes on flip charts. Having gone around each group I then asked everyone to silently theme the post-its in each section until everyone seemed happy with the results. This led to a discussion about each theme and identifying titles for them. We still had quite a few themes, so we finished off by ranking them with dot-voting so that we could be clear on which items were most important.
Our last step was to identify the top three items on the A3 that we wanted to highlight to the wider business. This turned out to be a relatively simple conversation. The collaborative nature of the process meant that everyone had a clear and shared understanding of what was important and where we needed focus.
Strategy deployment is not a one-off, top-down exercise. Instead, the Strategic A3 is used as a simple tool that involves everyone in the process. Teams prepare and plan their work, in line with the corporate goals, and each quarter they revisit and revise their A3s as a means of communicating status and progress. As performance numbers become available an A3 will be updated with any changes highlighted, and the updated A3 then becomes a key input into Quarterly Steering.
This post was originally published on the Rally Blog and I am reposting it here to keep an archived copy. It was part of a series in which we described various aspects of the way the business was run. Apart from one minor edit to help it make sense as a stand-alone piece I have left the content as it was. However, I suspect that since Rally is now part of CA Technologies, much of what I described has changed.
Rally has a regular, quarterly cadence with which we manage corporate planning, and in which we invest heavy preparation so that we get maximum value. For this year’s Annual Planning, preparation included creating market and opportunity maps and a set of potential strategies, as well as crafting an agenda to help facilitate the collaborative co-creation of the outcomes.

What is Annual Planning?
At Rally, Annual Planning is a two-day meeting involving around 80 people – roughly 70 Rally employees and 10 invited customer representatives. The employees are a mix of people representing all areas of the business: directors and above always attend these key corporate cadences, and other members of the company take turns participating. The customers chosen to join us are those who have shown a keen interest in seeing how we facilitate these large events, and from whom we can learn and get great feedback. Apart from the confidential opening introduction, the customers are involved throughout: spread out across business groups and breakouts, sitting amongst employees, and actively working and contributing as much as anyone else.
This year, we ran Annual Planning a quarter in advance of the financial year we’re about to start. We’ve learned that the initial plan will need validation and refinement, and thus we need to allow time for that to happen. Therefore, the purpose of the two days was to draft our corporate plan for the next financial year, so that we can validate it in the final quarter of the current financial year.

What Do We Do in Annual Planning?
Over the years, we have settled on terminology for corporate planning, inspired by a couple of books. First, Pascal Dennis’ Getting the Right Things Done introduces the terms “True North” and “Mother Strategies.” The True North is the single mantra or slogan that defines where the company wants to be at the end of the year. Mother Strategies are the focus areas that will help us arrive at the True North.
The True North and Mother Strategies guide the day-to-day departmental work, along with cross-departmental initiatives, which are known as “Rocks.” Rocks are inspired by techniques described in Verne Harnish’s book, Mastering the Rockefeller Habits. The metaphor of a Rock is based on the idea that if you have a bucket, you should fill it first with a few big rocks: these are the big things you want to accomplish. If there is more space you can then put in pebbles, or medium-sized projects. With any remaining space you can put in sand, or the tactical tasks. Finally, you can add water — the ad-hoc things that arise. If you fail to put the big rocks in first, you will inevitably fill your bucket with just sand and water.
For Rally, the annual plan, therefore, consists of a True North, a number of Mother Strategies, and a set of Rocks. In addition, this year we introduced a new tool to help create transparency and align all the elements: the X-matrix, as described in Thomas L. Jackson’s Hoshin Kanri for the Lean Enterprise. This brought with it a further level of discipline by including the business results we’re targeting, and the measurable improvements we will use to track progress.
As you can see from the blank template above, completing the X-matrix involves deciding on strategic goals, tactical rocks (and other departmental initiatives), measurable improvements, and business results. These are entered into the large white sections around the central X. In addition, filling in the shaded corner cells of the X-matrix indicates the correlation or contribution between each of these elements, as well as how accountable each department will be for the tactical work. The strength of the correlation or accountability is indicated with one of three symbols according to the legend: strong correlation or team leader, important correlation or team member, and weak correlation or rotating team member. An empty cell indicates no correlation or no team member.

How Does It Work?
The agenda for the two days of Annual Planning involved exploring and defining all these pieces of the puzzle, ultimately filling in a giant X-matrix created on a wall. The picture below shows this partially completed. Taking the advice from the book, we adapted rather than adopted the technique, changing some of the terminology to better fit our context.
Here’s what each day looked like.
Day one was focused on divergence: generating a range of ideas which could go into the initial draft of the plan. We began with a retrospective on the current year; working individually, in pairs, and then in departments, we reflected on what we’d learned that would guide our work in closing out this year and setting us up for next year. Then, the executive team gave a readout of their perspectives and introduced the proposed potential strategies for next year. This led into an Open Space with breakout sessions focused on exploration of rocks and improvements that could implement those strategies. As a result, by the end of the first day we had a good understanding of the current situation, with a variety of potential work that might be needed to meet our goals.
Day two was focused on convergence: refining all the ideas and getting consensus on a plan that could be validated. Groups initially formed around the proposed strategies to look at the plan through a “strategic lens.” Each group discussed how various rocks and improvements aligned to their strategy, and agreed on a proposal that they wanted to make for inclusion in the plan.
In a high-energy session, the proposals were pitched to three of the executives, who accepted them (with a chime) or rejected them (with a horn). Rejected proposals were updated and re-pitched, until we ended up with the X-matrix containing the top 10 rocks and associated improvement measures, along with the strength of the correlation between all the rocks and strategies. Groups then re-formed around departments to look at the plan through a “departmental lens.” They discussed and filled in the X-matrix with their department’s level of work alignment to the rocks.
At this point we had the majority of the X-matrix complete for the coming year. This was just a first cut, however, so another Open Space session followed to allow discussion of opportunities and concerns, and what needs to be done in the final quarter of the year to validate our assumptions — resulting in a clear set of actions which were shared with everyone.
By the end of the two days we had a clear, single-page visualisation of the potential work for the year, why we were doing it, and how we would measure progress, along with a good understanding of the necessary next steps.

What Happens Next?
As an addition to our corporate planning cadence, the X-matrix was a roaring success. It both helped us be disciplined about thinking about measures and results, and gave us great visibility into how all our work is aligned. It still needs refinement, however, and the executive team will look at the final X-matrix and use it to filter and focus on which strategies and rocks can give us the best leverage in meeting our goals. We typically hold ourselves to no more than four mother strategies and we also strive to limit the number of rocks in process.
From the final plan, we’ll craft a True North statement and will begin executing. The regular cadence of quarterly steering meetings will revisit the X-matrix as a focal point to help us inspect and adapt. We’ll check business results and improvement measures and form rocks, which will start and end according to the necessity of the work and the need to make it transparent across this well-defined review cadence.
A colleague asked about mobbing last week on Twitter. Here’s the short answer, including pairing so you can see everything in one place:
- Swarming has a WIP (work in progress) limit of 1, where the team collaborates to get the one item to done.
- Mobbing has a WIP limit of 1 for an entire team with one keyboard.
- Pairing is two people and one keyboard, often with a WIP limit of 1.
A WIP limit of one means the team or pair works on just one story/feature at a time. Sometimes, that feature is large as in the team who worked as a swarm on very large stories. (See the post Product Owners and Learning, Part 2 for how one team finishes very large features over a couple of days.)
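One way to see what a WIP limit of 1 means in practice is a toy board that refuses new work until the current story is done. This is my own illustrative sketch; the class and story names are invented, not part of any team's actual tooling:

```python
# Toy sketch of a team board that enforces a WIP limit.
# Names and behavior are my own illustration of "WIP limit of 1".
class TeamBoard:
    def __init__(self, wip_limit=1):
        self.wip_limit = wip_limit
        self.in_progress = []
        self.done = []

    def start(self, story):
        # Refuse to start new work while the team is at its WIP limit.
        if len(self.in_progress) >= self.wip_limit:
            raise RuntimeError(
                f"WIP limit {self.wip_limit} reached; finish "
                f"{self.in_progress} before starting {story!r}"
            )
        self.in_progress.append(story)

    def finish(self, story):
        self.in_progress.remove(story)
        self.done.append(story)

board = TeamBoard(wip_limit=1)
board.start("secure login, part 1")
try:
    board.start("diagnostics, part 1")   # rejected: the team swarms on one story
except RuntimeError as e:
    print(e)
board.finish("secure login, part 1")
board.start("diagnostics, part 1")       # now allowed
```

The difference between swarming, mobbing, and pairing is then not the board but who sits at how many keyboards while that one story is in progress.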
Here are some examples of what I’ve seen on projects.
Team 1 swarms. They get together as a team and discuss the item (WIP limit of 1) with the product owner. They talk among themselves for a couple of minutes to decide on their “plan of attack” (their words) and then scatter. The testers develop the automated tests and take notes on the exploratory tests. The developers work on the code.
Aside: On another team I know, the UI, platform, and middleware devs get together to discuss for a couple of minutes and then write code together, each on their own computer. (They collaborate but do not pair/mob together.) On another team, those people work together, on one keyboard for the platform/middleware work. The UI person works alone, checking in when she is done. Everyone checks their work into the code base, as they complete the work. Teams develop their own “mobbing” as sub-teams, which works, too.
Team 1 has an agreement to return every 25 minutes to check in with each other. They do this with this kind of a report: “I’m done with this piece. I need help for this next piece. Who’s available?” Or “I’m done as much as I can be for now. Anyone need another pair of eyes?” (Note: you might want more or less than 25 minutes. They chose that time because they have smallish stories and want to make sure they maintain a reasonable pace/momentum.)
As people finish their work, they help other people in whatever way they can. Early in Team 1’s agile days, they had a ton of automated test “technical debt.” (I would call it insufficient test automation, but whatever term you like is fine.) The developers finished their stories and helped the testers bootstrap their test automation.
Team 2 mobs. The entire team sits around a table with one keyboard. The monitor output goes to a projector so everyone can see what the person typing is doing. This team has a guideline that they trade off keyboarding every 15 minutes. (You might like a slightly longer or slightly shorter time. In my experience, shorter times are better, but maybe that’s just me.) Sometimes, the tester leads, developing automated tests. Sometimes, the developer leads. This team often uses TDD, so the tests guide their development.
Team 2 checks in at least as often as they change keyboarders. Sometimes, more often.
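A mob's keyboard rotation is simple enough to sketch. The helper below computes who keyboards when, given a turn length; the function name, member names, and times are purely illustrative, not any team's actual process:

```python
from itertools import cycle

# Hedged sketch: computing who keyboards when during a mob session.
# Team 2 trades the keyboard every 15 minutes, so that's the default.
def rotation_schedule(members, session_minutes, turn_minutes=15):
    """Yield (start_minute, member) pairs for each keyboard turn."""
    turns = cycle(members)
    for start in range(0, session_minutes, turn_minutes):
        yield start, next(turns)

for start, member in rotation_schedule(["Ana", "Ben", "Cho", "Dev"], 60):
    print(f"minute {start:2d}: {member} keyboards")
# minute  0: Ana keyboards
# minute 15: Ben keyboards
# minute 30: Cho keyboards
# minute 45: Dev keyboards
```

Shortening `turn_minutes` gives everyone more turns per hour, which matches my experience that shorter rotations work better.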
Notice that the work in progress (WIP) is small: one story. In both swarming and mobbing, the teams work on one story. That’s it. Their focus is doing the work that gets that story to done.
Pairing is one keyboard, one machine, two pairs of eyes. The keyboarder is the driver; the watcher is the navigator. You get continuous review of the work product as you proceed. I often ask what I consider “stupid” questions when I am the navigator. Sometimes, the questions aren’t stupid—they prompt us as a pair to understand the item better. Sometimes, they are. I’m okay with that. I find that when I pair, I learn a ton about the domain.
Here’s the value of swarming or mobbing:
- The team limits their WIP, which helps them focus on getting work to done.
- The team can learn together in swarming and does learn together in mobbing.
- The team collaborates, so they reinforce their teamwork. They learn who can do what, and who learns what.
- The team has multiple eyes on small chunks of work, so they get the benefit of review.
If you work feature-by-feature, I urge you to consider swarming or mobbing. (Yes, you can swarm or mob in any life cycle, as long as you work feature-by-feature.) Either will help you move stories to done faster because of the team focus on that one story.
A couple of years ago, I wrote a post about pairing and swarming and how they can help your projects.
A fourth post exploring the relationship between Strategy Deployment and other approaches (see Strategy Deployment and Fitness for Purpose, Strategy Deployment and AgendaShift and Strategy Deployment and Spotify Rhythm).
Directed Opportunism is the approach described by Stephen Bungay in his book The Art of Action, in which he builds on the ideas of Field Marshal Helmuth von Moltke, Chief of Staff of the Prussian Army for 30 years from 1857, and applies them to leading businesses. This also follows on from the earlier post on alignment and autonomy in Strategy Deployment.
Bungay starts by describing three gaps between desired Outcomes, the Plans made to achieve them, and the Actions taken which create actual Outcomes. These gaps (the Knowledge Gap, Alignment Gap and Effects Gap) are shown in the diagram below, and together cause organisational friction – resistance of the organisation to meeting its goals.
Given this model, Bungay explains how the usual approach to reducing this friction, and closing the gaps, is to attempt to reduce uncertainty by pursuing more detail and control, as shown below.
This generally makes the situation worse, however, because the problem is not linear, reductionistic or deterministic. In Cynefin terms, this is a Complicated approach in a Complex domain. Instead, Bungay recommends reducing detail and control and allowing freedom to evolve with feedback. This is what he calls Directed Opportunism.
This definition of Directed Opportunism seems to me to meet my definition of Strategy Deployment as a form of organisational improvement in which solutions emerge from the people closest to the problem. There is clear communication of intent (the problem) with each level (the people closest) defining how they will achieve the intent (the solution) and having freedom to adjust in line with the intent (the emergence).
From an X-Matrix perspective, being clear on results, strategies and outcomes limits direction to defining and communicating intent, and leaving tactics to emerge (through Catchball) allows different levels to define how they will achieve the intent and gives them freedom to adjust actions in line with the intent.
This is the third in what has turned into a mini series exploring the relationship between Strategy Deployment and other approaches (see Strategy Deployment and Fitness for Purpose and Strategy Deployment and AgendaShift).
Last month, Henrik Kniberg posted slides from a talk he gave at Agile Sverige on something called Spotify Rhythm, which he describes as “Spotify’s current approach to getting aligned as a company”. While looking through the material, it struck me that what he was describing was a form of Strategy Deployment. This interpretation is based purely on those slides – I haven’t had a chance yet to explore this more deeply with Henrik or anyone else from Spotify. I hope I will do some day, but given that caveat, here’s how I currently understand the approach in terms of the X-Matrix Model.
The presentation presents the following “taxonomy” used in “strategic planning”:
Company Beliefs – While this isn’t something I talk about specifically, the concept of beliefs (as opposed to values) does tie in nicely with the idea that Strategy Deployment involves many levels of nested hypotheses and experimentation (as I described in Dynamics of Strategy Deployment). Company Beliefs could be considered to be the highest level, and therefore probably strongest hypotheses.
North Star & 2-Year Goals – A North Star (sometimes called True North) is a common Lean concept (and one I probably don’t talk about enough with regard to Strategy Deployment). It is an overarching statement about a vision of the future, used to set direction. Decisions can be made based on whether they will move the organisation towards (or away from) the North Star. Strategy Deployment is ultimately all in pursuit of enabling the organisational alignment and autonomy which will move it towards the North Star. Given that, the 2-Year Goals can be considered as the Results that moving towards the North Star should deliver.
Company Bets – The Company Bets are the “Big Bets” – “large projects” and “cross-organisation initiatives”. While these sound like high level tactics, I wonder whether they can also be considered to be the Strategies. As mentioned already, Strategy Deployment involves many levels of nested hypotheses and experimentation, and therefore Strategy is a Bet in itself (as are Results, and also Beliefs).
Functional & Market Bets – If the Company Bets are about Strategy, then the Functional and Market Bets are the Tactics implemented by functional or market related teams.
DIBB – DIBB is a framework Spotify use to define bets and “make the chain of reasoning explicit” by showing the relationships between Data, Insights, Beliefs and Bets. Part of that chain of reasoning involves identifying success metrics for the Bets, or in other words, the Outcomes which will indicate if the Bet is returning a positive payoff.
— Henrik Kniberg (@henrikkniberg) July 8, 2016
While this isn’t an exact and direct mapping, it feels close enough to me. One way of checking alignment would be the ability for anyone to answer some simple questions about the organisation’s journey. I can imagine how Spotify Rhythm provides clarity on how to answer these questions.
- Do you know where you are heading? North Star
- Do you know what the destination looks like? 2 Year Goals (Results)
- Do you know how you will get there? Company Bets (Strategies)
- Do you know how you will track progress? DIBBs (Outcomes)
- Do you know how you will make progress? Functional & Market Bets (Tactics)
One final element of Spotify Rhythm which relates to Strategy Deployment is implied in its name – the cadence with which the process runs. Company Bets are reviewed every quarter by the Strategy Team (another reason why they could be considered to be Strategies) and the Functional and Market Bets – also called TPD (Tech-Product-Design) Bets – are reviewed every 6 weeks.
I’d be interested in feedback on alternative interpretations of Spotify Rhythm. Or if you know more about it than I do, please correct anything I’ve got wrong!
The successful book Getting Value out of Agile Retrospectives has been translated into many languages. The Leanpub bundle Valuable Agile Retrospectives – All Languages contains all language editions; you can get 9 books for the reduced price of $24,99 (excluding VAT).
People from all over the world approach us about translating our book into their native language. We love working together with local agile communities and agile practitioners to make our book available in their local language.
The normal price for the 9 books is $89,91; together these books are now available for the price of $24,99, a discount of more than 70%. When you buy the bundle, you will also get all translations that are released in the future for free. With this bundle you will always have the latest version of our book in every language.
Our mission is to help many teams all around the world to get more value out of agile retrospectives.
You can buy all local language editions of Getting Value out of Agile Retrospectives at Amazon.com (and all other Amazon shops), Leanpub, iTunes, Smashwords, Lulu, Barnes & Noble, Kobo, Scribd, Oyster and Blio. Paperback editions can be bought in my webshop and on bol.com and Managementbook.nl.
I’ve been the technical editor for agileconnection.com for the past five years. The anniversary popped up in my LinkedIn network, and several people congratulated me on it.
I have learned many things in the past five years:
- Sometimes, people need “permission” to write what they feel. (They’re concerned they will be too bold, too loud, too something.)
- Some people need help finding the “right” structure for their writing. Sometimes, that structure is about how to find the time to write. Sometimes, that’s the article structure.
- Some people need help learning what and how people read on the web.
The biggest thing I have learned is this:
If I tell people the results I need, they will then deliver those results.
It’s the same way on your projects, too. Tell people the results you want. I bet they will deliver those results, if at all possible.
I have asked these questions:
- Can you tell a story here to illustrate your point?
- Can you expand this bullet to tell the story? I am sure there is something quite interesting here.
- I’m confused. Passive voice does that to me. Can you make this active?
I have more questions up my sleeve, and that’s fine. Notice that I don’t “criticize” the writing. I don’t like criticism. I much prefer knowing what to do to improve. I bet you do, too. That’s why I ask for what I want.
If you use agile and have a story to tell, I’m interested. Let me help you publish your story. Send me an email.
If you would like to write better, let me know if you would like to be a part of my next non-fiction writing workshop. I have a wait list for the August workshop. I’ll definitely run it again.
I thank you for all your good wishes, and I do hope I can continue this (part-time) gig. It’s quite fun!
When I think of POs and the team, I think of learning in several loops:
- The PO learns when the team finishes small features or creates a prototype so the PO can see what the team is thinking/delivering.
- The team learns more about its process and what the PO wants.
- If the Product Manager sees the demo, the Product Manager sees the progress against the roadmap and can integrate that learning into thinking about what the product needs now, and later.
Note that no one can learn if they can’t see progress against the backlog and roadmap.
There are two inter-related needs: Small stories so the team can deliver and seeing how those small stories fit into the big picture.
I don’t know how to separate these two needs in agile. If you can’t deliver something small, no one (not the team, the PO, or the customer) can learn from it. If you don’t deliver, you can’t change the big picture (or the small picture) of where the product is headed. If you can’t change, you find yourself not delivering the product you want when you want it. It’s a mess.
When you don’t have small stories and you can’t deliver value frequently, you end up with interdependent features. These features don’t have to be interdependent. The interdependencies arise from the organization (who does what). People think they are talking about interdependencies in the features, but a root cause of those interdependencies is that the features are not small and coherent. See my curlicue features post.
That means that the PO needs to learn about the features in depth. BAs can certainly help. Product Managers can help. And, the PO is with the team more often than the Product Manager. The PO needs to help the team realize when they have a structure that does not work for small features. Or, when the PO doesn’t know how to create feature sets out of a humungous feature. The team and the PO have to work together to get the most value from the team on a regular basis.
This is why I see the learning at several levels:
- The Product Manager works with the customers to understand what customers need when, and when to ignore customers. It is both true that the customer is always right and the customer does not know what she wants. (I won’t tell you how long it took me to get a smart phone. Now, I don’t know how I could live without one. You cannot depend on only customers to guide your product decisions.)
- The PO Value Team discusses the ranking/when the customers need which features. When I see great PO Value teams, they start discussing when to have which features from the feature sets.
- The PO (and BA) work with the team to learn what the team can do when so they can provide small stories. They also learn from the team when the team delivers finished work.
The larger the features, the less feedback and the less learning.
So, I’ve written a lot here. Let me summarize.
Part 1 was about the “problem” of only addressing features, not the defects or technical debt. If you have a big picture, you can see the whole product as you want it, over time. For me, the PO “problem” is that the PO cannot be outside-facing and inward-working at the same time. It is not possible for one human to do so.
Part 2 was about how you can think about making smaller stories, so you have feature sets, not one humungous feature.
Part 3 was about ranking. If you think about value, you are on the right track. I particularly like value for learning. That might mean the team spikes, or delivers some quick wins, or several features across many feature sets (breadth-first, not depth-first). Or, it could mean you attack some technical debt or defects. What is most valuable for you now? (If you, as a PO have feature-itis, you are doing yourself and your team a disservice. Think about the entire customer experience.)
Part 4 talked about how you might want to organize a Product Owner value team. Only the PO works with the team on a backlog, and the PO does not have to do “everything” alone.
If you would like to learn how to be a practical, pragmatic Product Owner, please join me at the Practical Product Owner workshop, beginning Aug 23, 2016. You will learn by working on your roadmaps, stories, and your particular challenges. You will learn how to deliver what your customers value and need—all your customers, including your product development team.
If you specify deliverables in your big picture and small picture roadmaps, you have already done a gross form of ranking. You have already made the big decisions: which feature/parts of features do you want when? You made those decisions based on value to someone.
I see many POs try to use estimation as their only input into ranking stories. How long will something take to complete? If you have a team who can estimate well, that might be helpful. It’s also helpful to see some quick wins if you can. See my most recent series of posts on Estimation for more discussion on ranking by estimation.
Estimation talks about cost. What about value? In agile, we want to work (and deliver) the most valuable work first.
Once you start to think about value, you might even think about value to all your different somebodies. (Jerry Weinberg said, “Quality is value to someone.”) Now, you can start considering defects, technical debt, and features.
The PO must rank all three possibilities for a team: features, defects, and technical debt. If you are a PO who has feature-itis, you don’t serve the team, the customer, or the product. Difficult as it is, you have to think about all three to be an effective PO.
The features move the product forward on its roadmap. The defects prevent customers from being happy and prevent movement forward on the roadmap. Technical debt prevents easy releasing and might affect how easily the team can deliver. Your customers might not see technical debt. They will feel its effects in the form of longer release times.
Long ago, I suggested that a specific client consider three backlogs to store the work and then use pair-wise comparison with each item at the top of each queue. (They stored their product backlog, defects, and technical debt in an electronic tool. It was difficult to see all of the possible work.) That way, they could see the work they needed to do (and not forget), and they could look at the value of doing each chunk of work. I’m not suggesting keeping three backlogs is a good idea in all cases. They needed to see—to make visible—all the possible work. Then, they could assess the value of each chunk of work.
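The pair-wise comparison across the tops of the three backlogs can be sketched in a few lines. The items and value scores here are invented for illustration; the point is that you only ever compare the top item of each queue:

```python
# Sketch of the three-backlog idea: compare the top item of each queue
# pair-wise and pick the most valuable next chunk of work.
# Each item is a (description, value) pair; all values are hypothetical.
features = [("export to CSV", 8), ("dark mode", 3)]
defects = [("login fails for VIP customer", 9)]
tech_debt = [("no unit tests around billing", 6)]

def next_most_valuable(*backlogs):
    # Look only at the top of each non-empty backlog, as in
    # pair-wise comparison, and take the highest-value item.
    tops = [backlog[0] for backlog in backlogs if backlog]
    return max(tops, key=lambda item: item[1])

work, value = next_most_valuable(features, defects, tech_debt)
print(f"next up: {work} (value {value})")
# next up: login fails for VIP customer (value 9)
```

The hard part, of course, is not the comparison but agreeing on the value numbers; making all three kinds of work visible is what enables that conversation.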
You have many ways to see value. You might look at what causes delays in your organization:
- Technical debt in the form of test automation debt. (Insufficient test automation makes frictionless releasing impossible. Insufficient unit test automation makes experiments and spikes impossible or quite long.)
- Experts who are here, there, and everywhere, providing expertise to all teams. You often have to wait for those experts to arrive to your team.
- Who is waiting for this? Do you have a Very Important Customer waiting for a fix or a feature?
You might see value in features for immediate revenue. I have worked in organizations where, if we released some specific feature, we could gain revenue right away. You might look at waste (one way to consider defects and technical debt).
Especially in programs, I see the need for the PO to say, “I need these three stories from this feature set and two stories from that other feature set.” The more the PO can decompose feature sets into small stories, the more flexibility they have for ranking each story on its own.
Here are questions to ask:
- What is most valuable for our customers, for us to do now?
- What is most valuable for our team, for us to do now?
- What is most valuable for the organization, for us to do now?
- What is most valuable for my learning, as a PO, to decide what to do next?
You might need to rearrange those questions for your context. The more your PO works by value, the more progress the team will make.
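One hypothetical way to make those questions operational is to score each candidate item against them and rank by the total. Everything in this sketch (item names, question labels, scores) is invented for illustration, and you might well weight the questions differently in your context:

```python
# Hypothetical scoring of backlog items against the four value questions.
QUESTIONS = ["customers", "team", "organization", "PO learning"]

def rank_by_value(items):
    """items: {name: {question: score}}. Returns names, most valuable first."""
    def total(name):
        return sum(items[name].get(q, 0) for q in QUESTIONS)
    return sorted(items, key=total, reverse=True)

items = {
    "fix VIP login defect": {"customers": 9, "team": 4, "organization": 7, "PO learning": 2},
    "spike on new reporting API": {"customers": 3, "team": 6, "organization": 4, "PO learning": 8},
    "dark mode": {"customers": 5, "team": 2, "organization": 2, "PO learning": 1},
}
print(rank_by_value(items))
# ['fix VIP login defect', 'spike on new reporting API', 'dark mode']
```

The numbers are a conversation aid, not an answer; the ranking only helps if the PO value team revisits the scores as everyone learns.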
The next post will be about when the PO realizes he/she needs to change stories.
If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.
Part 1 was about how the PO needs to see the big picture and develop the ranked backlog. Part 2 was about the learning that arises from small stories. Part 3 was about ranking. In this part, I’ll discuss the product owner value team and how to make time to do “everything,” and especially how to change stories.
Let’s imagine you started developing your product before you started using agile. Your product owners (who might have been a combination of product managers and business analysts) gave you a list of features, problems, and who knows what else for a release. They almost never discussed your technical debt with you. In my experience, they rarely discussed defects unless a Very Important Customer needed something fixed. Now, they’re supposed to provide you a ranked backlog of everything. It’s quite a challenge.
Let’s discuss the difference between a product manager and a product owner.
A product manager faces outward, seeing customers, asking them what they want, discussing dates and possibly even revenue. The product manager’s job is to shepherd the customer wishes into the product to increase the value of the product. In my world, the product manager has the responsibility for the product roadmap.
A product owner faces inward, working with the team. The PO’s job is to increase the value of the product. In my world, the PO works with the product manager (and the BAs if you have them) to create and update the product roadmap.
A business analyst might interview people (internal and external) to see what they want in the product. The BA might write stories with the PO or even the product manager.
The product manager and the product owners and any BAs are part of the Product Owner value team. The product owner value team works together to create and update the product roadmap. In a large organization, I’ve seen one product manager, several product owners and some number of BAs who work on one product throughout its lifetime. (I’ve also seen the BAs move around from product to product to help wherever they can be of use.)
What about you folks who work in IT and don’t release outside the company? You also need a product manager, except, with any luck, the product manager can walk down the hall to discuss what the customers need.
If you work in a small organization, yes, you may well have one person who does all of this work. Note: a product manager who is also a product owner is an overloaded operator. Overloaded people have trouble doing “all” the work. Why? Because product management is more strategic. Product ownership is more tactical. You can’t work at different levels on an ongoing basis. Something wins—either the tactical work or the strategic work. (See Hiring Geeks That Fit for a larger discussion of this problem.)
When one person tries to do all the work, it’s possible that many other things suffer: feedback to the team, story breakdown, and ranking.
The Product Owner Value team takes the outside-learned information from customers/sponsors, the inside-learned information from the product development team (the people who write and test the product), and develop the roadmap to define the product direction.
In agile, you have many choices for release: continuous delivery, delivery at certain points (such as at the end of the iteration or every month or whenever “enough” features are done), or monthly/quarterly/some other time period.
Here’s the key for POs and change: the smaller the stories are or the more often the team can release stories, the more learning everyone gains. That learning informs the PO’s options for change.
In this example roadmap, you can see parts of feature sets in the first and second iterations. (I’m using iterations because they are easy to show in a picture and because people often want a cadence for releasing unless you do continuous delivery.)
If the Product Development team completes parts of feature sets, such as Admin, Part 1, the PO can decide if Admin, Part 2 or Diagnostics, Part 1 is next up for the team. In fact, if the PO has created quite small stories, it’s really easy to say, “Please do this story from Admin and that story from Diagnostics.” The question for the PO is what is most valuable right now: breadth or depth?
The PO can make that decision, if the PO has external information from the Product Manager and internal information from the BA and the team. The PO might not know about breadth or depth or some combination unless there is a Product Owner Value team.
Here are some questions when your PO wants everything:
- What is more valuable to our customers: breadth across which parts of the product, or depth?
- What is more valuable for our learning: breadth or depth?
- Does anyone need to learn something from any breadth or depth?
- What cadence of delivery do we need for us, our customers, anyone else?
- What is the first small step that helps us learn and make progress?
These questions help the conversation. The roadmaps help everyone see where the Product Owner Value team wants the product to go. I’ll do a summary post next. (If you have questions I haven’t answered, let me know.)
Someone needs to learn about what the customers want. That person is outward-facing and I call that person a Product Manager. Someone needs to create small stories and learn from what the team delivers. I call that person a Product Owner. Those people, along with BAs compose the Product Owner Value team, and guide the business value of the product over time. The business value is not just features—it is also when to fix defects for a better customer experience and when to address technical debt so the product development team has a better experience delivering value.
If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.
When I work with clients, they often have a “problem” with product ownership. The product owners want tons of features, don’t want to address technical debt, and can’t quite believe how long features will take. Oh, and the POs want to change things as soon as they see them.
I don’t see these as problems. To me, this is all about learning. The team learns about a feature as they develop it. The PO learns about the feature once the PO sees it. The team and the PO can learn about the implications of this feature as they proceed. To me, this is a significant value of what agile brings to the organization. (I’ll talk about technical debt a little later.)
One of the problems I see is that the PO sees the big picture. Often, the Very Big Picture. The roadmap here is a 6-quarter roadmap. I see roadmaps this big more often in programs, but if you have frequent customer releases, you might have it for a project, also.
I like knowing where the product is headed. I like knowing when we think we might want releases. (Unless you can do continuous delivery. Most of my clients are not there. They might not ever get there, either. Different post.)
Here’s the problem with the big picture. No team can deliver according to the big picture. It’s too big. Teams need the roadmap (which I liken to a wish list) and they need a ranked backlog of small stories they can work on now.
In Agile and Lean Program Management, I have this picture of what an example roadmap might look like.
This particular roadmap works in iteration-based agile. It works in flow-based agile, too. I don’t care what a team uses to deliver value. I care that a team delivers value often. This image uses the idea that a team will release internally at least once a month. I like more often if you can manage it.
Releasing often (internally or externally) is a function of small stories and the ability to move finished work through your release system. For now, let’s imagine you have a frictionless release system. (Let me know if you want a blog post about how to create a frictionless release system. I keep thinking people know what they need to do, but maybe it’s as clear as mud to you.)
The smaller the story, the easier it is for the team to deliver. Smaller stories also make it easier for the PO to adapt. Small stories allow discovery along with delivery (yes, that’s a link to Ellen Gottesdiener’s book). And, many POs have trouble writing small stories.
That’s because the PO is thinking in terms of feature sets, not features. I gave an example for secure login in How to Use Continuous Planning. It’s not wrong to think in feature sets. Feature sets help us create the big picture roadmap. And, the feature set is insufficient for the frequent planning and delivery we want in agile.
I see these problems in creating feature sets:
- Recognizing the different stories in the feature set (making the stories small enough)
- Ranking the stories to know which one to do first, second, third, etc.
- Deciding what to do when the PO realizes the story or ranking needs to change
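To make the first of those problems concrete, a feature set holding ranked small stories can be sketched as a tiny data structure. This is an illustrative sketch only: the `FeatureSet` and `Story` names, and the sample login stories, are my own assumptions, not anything from a real backlog tool.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A small, independently deliverable slice of a feature set."""
    title: str
    rank: int  # 1 = do first

@dataclass
class FeatureSet:
    """A theme/epic: several related small stories, kept in rank order."""
    name: str
    stories: list = field(default_factory=list)

    def add(self, title, rank):
        self.stories.append(Story(title, rank))
        self.stories.sort(key=lambda s: s.rank)

    def next_story(self):
        """The highest-ranked story is the one the team pulls next."""
        return self.stories[0] if self.stories else None

    def rerank(self, title, new_rank):
        """The PO learns something and changes the ranking of future work."""
        for s in self.stories:
            if s.title == title:
                s.rank = new_rank
        self.stories.sort(key=lambda s: s.rank)

# Hypothetical "secure login" feature set, split into one-day stories
login = FeatureSet("Secure login")
login.add("User can log in with email and password", rank=1)
login.add("User sees an error for a wrong password", rank=2)
login.add("User can reset a forgotten password", rank=3)

print(login.next_story().title)  # the story the team works on now
```

The point of the sketch is the separation of concerns: the feature set carries the big picture, while the ranked stories inside it are what the team actually pulls, and the PO can re-rank the rest at any time.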
I’ll address these issues in the next posts.
If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.
In Part 1, I talked about the way POs think about the big picture and the ranked backlog. The way to get from the big picture to the ranked backlog is via deliverables in the form of small (user) stories. See the Wikipedia page about user stories. Notice that they are a promise for a conversation.
I talked about feature sets in the first post, so let me explain that here. A feature set is several related stories. (You might think of a feature set as a theme or an epic.) I like small stories: ones the team can complete in one day or less. I have found that the smaller the story, the more feedback the team gets earlier from the product owner. The more often the PO sees the feature set evolving, the better the PO can refine the future stories. The more often the feedback, the easier it is for everyone to change:
- The team can change how they implement, or what the feature looks like.
- The PO can change the rest of the backlog or the rank order of the features.
I realize that if you commit to an entire feature set or a good chunk for an iteration, you might not want to change what you do in this iteration. If you have an evolving feature set, where the PO needs to see some part before the rest, I recommend you use flow-based agile (kanban). A kanban with WIP limits will allow you to change more often. (Let me know if that part was unclear.)
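A WIP limit is what makes that flexibility safe: the team cannot start new work until something finishes, so there is always a recent finishing point at which the PO can re-rank. Here is a minimal sketch of a WIP-limited kanban column; the class and names are hypothetical, not from any real kanban tool.

```python
class KanbanColumn:
    """A kanban column with a WIP limit: no pulling new work past the limit."""
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def can_pull(self):
        return len(self.items) < self.wip_limit

    def pull(self, item):
        if not self.can_pull():
            raise RuntimeError(f"WIP limit ({self.wip_limit}) reached in {self.name}")
        self.items.append(item)

    def finish(self, item):
        self.items.remove(item)
        return item

doing = KanbanColumn("Doing", wip_limit=2)
doing.pull("story A")
doing.pull("story B")
print(doing.can_pull())  # False: the limit forces finishing before starting
doing.finish("story A")
print(doing.can_pull())  # True: capacity freed; the PO can change what's next
```

Because work only enters the column when capacity frees up, the backlog behind the column can be re-ranked as often as the PO learns something new, without disrupting work in progress.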
Now, not everyone shares my love of one-day stories. I have a client whose team regularly takes stories of size 20 or something like that. The key is that the entire team swarms on the story and they finish the story in two days, maybe three. When I asked him for more information, he explained it this way.
“Yes, we have feature sets. And, our PO just can’t see partial finishing. Well, he can see it, but he can’t use it. Since he can’t use it, he doesn’t want to see anything until it’s all done.”
I asked him if he ever had problems where they had to redo the entire feature. He smiled and said,
“Yes. Just last week we had this problem. Since I’m the coach, I explained to the PO that the team had effectively lost those three days when they did the ‘entire’ feature instead of just a couple of stories. The PO looked at me and said, ‘Well, I didn’t lose that time. I got to learn along with the team. My learning was about flow and what I really wanted. It wasn’t a waste of time for me.’
“I learned then about the different rates of learning. The team and the PO might learn differently. Wow, that was a big thing for me. I decided to ask the PO if he wanted me to help him learn faster. He said yes, and we’ve been doing that. I’m not sure I’ll ever get him to define more feature sets or smaller stories, but that’s not my goal. My goal is to help him learn faster.”
Remember that the PO is learning along with the developers and testers. This is why having conversations about stories works. As the PO explains the story, the team learns. In my experience, the PO also learns. It’s also why paper prototypes work well. Instead of someone (PO or BA or anyone) developing the flow, when the team develops the flow on paper with the PO/BA, everyone learns together.
Small stories and conversations help the entire team learn together.
Small features are about learning faster. If you, too, have the problem where the team is learning at a different rate than the PO, ask yourself these questions:
- What kind of acceptance criteria do we have for our stories?
- Do those acceptance criteria make sense for the big feature (feature set) in addition to the story?
- If we have a large story, what can we do to show progress and get feedback earlier?
- How are we specifying stories? Are we using specific users and having conversations about the story?
I’ve written about how to make small stories in these posts:
- Make Stories Small When You Have “Wicked” Problems
- Three Alternatives for Making Smaller Stories
- Feature sets in How to Use Continuous Planning
- Reasons for Continuous Planning
The smaller the story, the more likely everyone will learn from the team finishing it.
I’ll address ranking in the next post.
Agendashift is the approach used by Mike Burrows, based on his book Kanban from the Inside, in which he describes the values behind the Kanban Method. You can learn more by reading Mike’s post Agendashift in a nutshell. As part of his development of Agendashift, Mike has put together a values-based delivery assessment, which he uses when working with teams. Again, I recommend reading Mike’s posts on using Agendashift as a coaching tool and debriefing an Agendashift survey if you are not familiar with Agendashift.
After listening to Mike talk about Agendashift at this year’s London Lean Kanban Day I began wondering how his approach could be used as part of a Strategy Deployment workshop. I was curious what would happen if I used the Agendashift assessment to trigger the conversations about the elements of the X-Matrix model. Specifically, I wanted to see how it could be used to identify change strategies and the associated desired outcomes, in order to frame tactics as hypotheses and experiments. Mike and I had a few conversations, and it wasn’t long before I had the opportunity to give it a go. This is a description of how I went about it.
Assessment & Analysis
The initial assessment followed Mike’s post, with participants working through individual surveys before spending time analysing the aggregated results and discussing strengths, weaknesses, convergence, divergence and importance.
Having spent some time in rich conversations about current processes and practices, triggered by exploring the various perspectives suggested by the survey prompts and scores, the teams had some good insights about what they considered to be their biggest problems worth solving and which required most focus. Agreeing on the key problems that need solving can be thought of as agreeing on the key strategies for change.
This is where I broke away from Mike’s outline, in order to consider strategies first. I asked the participants to silently and individually come up with 2 to 3 change strategies each, resulting in around 20-30 items, which we then collectively grouped into similar themes to end up with 5-10 potential strategies. Dot voting (with further discussion) then reduced this down to the 3 key change strategies which everyone agreed with.
To give some examples (which I have simplified and generalised), we had strategies focussed on collaboration, communication, quality, product and value.
Having identified these key strategies, the teams could then consider what desired outcomes they hoped would be achieved by implementing them. By answering the questions “what would we like to see or hear?” and “what would we measure?”, the teams came up with possible ways, both qualitative and quantitative, which might give an indication of whether the strategies, and ultimately the tactics, were working.
Taking the 3 key strategies, I asked small groups of 3-5 people to consider the outcomes they hoped to achieve with those strategies, and then consolidated the output. One reassuring observation from this part of the workshop was that some common outcomes emerged across all the strategies. This means that there were many-to-many correlations between them, suggesting a messy coherence rather than a simplistic and reductionist relationship.
Some examples of outcomes (again simplified and generalised) were related to culture, responsiveness, quality, understanding and feedback.
Once we have strategies and outcomes, the next step is to create some hypotheses for what tactics might implement the strategies to achieve the outcomes. To do this I tweaked Mike’s hypothesis template, and used this one:
We believe that <change>
implements <strategy>
and will result in <outcomes>
With this template, the hypotheses are naturally correlated with both strategies and outcomes (where the outcomes already consist of both subjective observations and objective measures).
I asked each participant to come up with a single hypothesis, creating a range of options from which to begin defining experiments.
For example (vastly simplified and generalised!):
We believe that a technical practice
implements a quality related strategy
and will result in fewer defects
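Because the template has three fixed slots, it is simple to express as a small helper that fills them in. The function and parameter names here are my own, not part of Agendashift:

```python
def hypothesis(change, strategy, outcomes):
    """Fill an Agendashift-style hypothesis template (naming is mine)."""
    return (f"We believe that {change}\n"
            f"implements {strategy}\n"
            f"and will result in {outcomes}")

# The (vastly simplified) example above, rendered from the template
print(hypothesis("a technical practice",
                 "a quality related strategy",
                 "fewer defects"))
```

Keeping the slots explicit like this makes the correlations visible: every hypothesis names both the strategy it implements and the outcomes that would count as evidence.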
This is as far as we got in the time available, but I hope it’s clear that once we have hypotheses like this, we can start creating specific experiments with which to move into action. Each hypothesis could be tested with multiple experiments.
While we didn’t formally go on to populate an X-Matrix, we did have most of the main elements in place – strategies, outcomes and tactics (if we consider tactics to be the actions required to test hypotheses) – along with the correlations between them. Although we didn’t discuss end results in this instance, I don’t believe it would take much to make those explicit, and come up with the correlations to the strategies and outcomes.
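As an illustration of those elements and their correlations, here is how an X-Matrix might be held as plain data. All the entries and names are invented for the example; the point is that the correlations are many-to-many, matching the messy coherence the workshop surfaced.

```python
# A minimal X-Matrix sketch: four element lists plus many-to-many correlations.
x_matrix = {
    "results":    ["more repeat customers"],
    "strategies": ["improve quality", "improve collaboration"],
    "outcomes":   ["fewer defects", "faster feedback"],
    "tactics":    ["pair on risky changes"],
}

# Correlations are (from, to) pairs. One tactic may serve several strategies,
# and one outcome may be evidence for several strategies.
correlations = {
    ("pair on risky changes", "improve quality"),
    ("pair on risky changes", "improve collaboration"),
    ("fewer defects", "improve quality"),
    ("faster feedback", "improve collaboration"),
}

def related(element):
    """Everything correlated with a given element, in either direction."""
    return ({b for a, b in correlations if a == element}
            | {a for a, b in correlations if b == element})

print(sorted(related("improve quality")))
```

Walking the correlations in either direction is what lets the matrix tell a coherent story: from a tactic you can recover the strategies it serves, and from a strategy the outcomes that would show it is working.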
On a recent call with Mike he described Agendashift in terms of making the agenda for change explicit. I think that also nicely describes Strategy Deployment, and why I think there is a lot of overlap. Strategy Deployment makes the results, strategies, outcomes and tactics explicit, along with the correlations and coherence between them, and it seems that Agendashift is one way of going about this.
David Anderson defines fitness for purpose in terms of the “criteria under which our customers select our service”. Through this lens we can explore how Strategy Deployment can be used to improve fitness for purpose by having alignment and autonomy around what the criteria are and how to improve the service.
In the following presentation from 2014, David describes Neeta, a project manager and mother who represents two market segments for a pizza delivery organisation.
As a project manager, Neeta wants to feed her team. She isn’t fussy about the toppings as long as the pizza is high quality, tasty and edible. Urgency and predictability are less important. As a mother, Neeta wants to feed her children. She is fussy about the toppings (or her children are), but quality is less important (because the children are less fussy about that). Urgency and predictability are more important. Thus fitness for purpose means different things to Neeta, depending on the market segment she is representing and the jobs to be done.
We can use this pizza delivery scenario to describe the X-Matrix model and show how the ideas behind fitness for purpose can be used with it.
Results describe what we want to achieve by having fitness for purpose, or alternatively, they are the reasons we want to (and need) to improve fitness for purpose.
Given that this is a pizza delivery business, it’s probably reasonable to assume that the number of pizzas sold would be the simplest business result to describe. We could possibly refine that to the number of orders, or the number of customers. We might even want a particular number of return customers, or repeat business, to be successful. At the same time, operational costs would probably be important.
Strategies describe the areas we want to focus on in order to improve fitness for purpose. They are the problems we need to solve which are stopping us from having fitness for purpose.
To identify strategies we might choose to target one of the market segments that Neeta represents, such as family or business. This could lead to strategies to focus on things like delivery capability, or menu range, or kitchen proficiency.
Outcomes describe what we would like to happen when we have achieved fitness for purpose. They are things that we want to see, hear, or which we can measure, which indicate that the strategies are working and which provide evidence that we are likely to deliver the results.
If our primary outcome is fitness for purpose, then we can use fitness for purpose scores, along with other related leading indicators such as delivery time, reliability, complaints, recommendations.
Tactics describe the actions we take in order to improve fitness for purpose. They are the experiments we run in order to evolve towards successfully implementing the strategies, achieving the outcomes and ultimately delivering the results. Alternatively they may help us learn that our strategies need adjusting.
Given strategies for improving fitness for purpose based around market segments, we might try new forms of delivery, different menus or ingredient suppliers, or alternative cooking techniques.
I hope this shows, using David’s pizza delivery example, how fitness for purpose provides a frame to view Strategy Deployment. The X-Matrix model can be used to tell a coherent story about how all these elements – results, strategies, outcomes and tactics – correlate with each other. Clarity of purpose, and what it means to be fit for purpose, enables alignment around the chosen strategies and desired outcomes, such that autonomy can be used to experiment with tactics.