As I was preparing for my Agile Denver session on Unscaling, which leaned heavily on the Cynefin Framework, I reread Liz Keogh’s excellent post, “Cynefin for Devs.” I realized that I use my story splitting patterns in a few different ways depending on the domain, and I’ve never been explicit about this (which probably confuses people I’m coaching).
Unless you’re already familiar with Cynefin, go read Liz’s post. I’ll wait.
Here’s how story splitting looks different for each Cynefin domain:
Simple/Obvious – Just build it. Or, if it’s too big, find all the stories, and do the most valuable ones first.
Complicated – Find all the stories, and do the most valuable and/or most risky ones first.
Complex – Don’t try to find all the stories. Find one or two that will provide some value and teach you something about the problem and solution, build those and use what you learn to find the rest.
Chaotic – Put out the fire; splitting stories probably isn’t important right now.
Disordered – Figure out which domain you’re in before splitting so you don’t take the wrong approach.
The most important nuance is in the complex domain, where starting the work will teach you about the work. In this situation, it doesn’t make sense to try to find all the small stories that add up to the original, big one. Instead, it’s more productive to find one or two you can start right away in order to learn.
Some are uncomfortable with this approach, wanting all the stories enumerated and sized to be able to project time over the backlog. But if you’re really in the complex domain, this only gives you the illusion of predictability—the actual stories are likely to change as you get into the work. Better to be transparent about the uncertainty inherent in complex work.
I was thinking about Glen Alleman’s post, All Things Project Are Probabilistic. In it, he says,
Management is Prediction
as an inference from Deming. When I read this quote,
If you can’t describe what you are doing as a process, you don’t know what you’re doing. –Deming
I infer from Deming that managers must manage ambiguity.
Here’s where Glen and I agree. Well, I think we agree. I hope I am not putting words into Glen’s mouth. I am sure he will correct me if I am.
Managers make decisions based on uncertain data. Some of that data is predictive data.
For example, I suggest that people provide, where necessary, order-of-magnitude estimates of projects and programs. Sometimes you need those estimates. Sometimes you don’t. (Yes, I have worked on programs where we didn’t need to estimate. We needed to execute and show progress.)
Now, here’s where I suspect Glen and I disagree:
- Asking people for detailed estimates at the beginning of a project and expecting those estimates to be true for the entire project. First, the estimates are guesses. Second, software is about learning. If you work in an agile way, you want to incorporate learning and change into the project or program. I have some posts about estimation in this blog queue where I discuss this.
- Using estimation for the project portfolio. I see no point in using estimates instead of value for the project portfolio, especially if you use agile approaches to your projects. If we finish features, we can end the project at any time. We can release it. This makes software different than any other type of project. Why not exploit that difference? Value makes much more sense. You can incorporate cost of delay into value.
- If you use your estimate as a target, you have some predictable outcomes unless you get lucky: you will shortchange the feature by decreasing scope, incur technical debt, or increase the defects. Or all three.
What works for projects is honest status reporting, which traffic lights don’t provide. Demos provide that. Transparency about obstacles provides that. The ability to be honest about how to solve problems and work through issues provides that.
Much has changed since I last worked on a DOD project. I’m delighted to see that Glen writes that many government projects are taking more agile approaches. However, when we work on innovative, new work, we cannot estimate perfectly at the beginning, or even during the project, what it will take. We can improve our estimates as we proceed.
We can have a process for our work. Regardless of our approach, as long as we don’t do code-and-fix, we do. (In Manage It! Your Guide to Modern, Pragmatic Project Management, I say to choose an approach based on your context, and to choose any lifecycle except for code-and-fix.)
We can refine our estimates, if management needs them. The question is this: why does management need them? For predicting future cost for a customer? Okay, that’s reasonable. Maybe on large programs, you do an estimate every quarter for the next quarter, based on what you completed, as in released, and what’s on the roadmap. You already know what you have done. You know what your challenges were. You can do better estimates. I would even do an EQF for the entire project/program. Nobody has an open spigot of money.
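EQF is Tom DeMarco’s Estimation Quality Factor: roughly, the area under the actual value divided by the accumulated estimation error over time, so a higher EQF means your estimates converged on reality sooner. A minimal sketch of that calculation (the sampling scheme and the numbers are illustrative, not from the post):

```python
def eqf(samples, actual, end_time):
    """DeMarco-style Estimation Quality Factor.

    samples: [(time, estimate)] sorted by time; each estimate is assumed
    to hold until the next sample (piecewise constant).
    actual: the final measured value (e.g. real duration in months).
    end_time: when the actual value became known (project end).
    Higher EQF means the estimates converged on reality sooner.
    """
    error_area = 0.0
    for i, (t, estimate) in enumerate(samples):
        t_next = samples[i + 1][0] if i + 1 < len(samples) else end_time
        error_area += abs(estimate - actual) * (t_next - t)
    total_area = actual * (end_time - samples[0][0])
    return float("inf") if error_area == 0 else total_area / error_area

# Illustrative: a 12-month project, re-estimated each quarter
# (estimated 6 months at the start, 9 at month 4, 12 at month 8).
quarterly = [(0, 6), (4, 9), (8, 12)]
print(eqf(quarterly, actual=12, end_time=12))  # 4.0
```

Re-estimating each quarter, as suggested above, gives you exactly the samples this calculation needs.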
But, in my experience, the agile project or program will end before you expect it to. (See the comments on Capacity Planning and the Project Portfolio.) The project will only end early if you evaluate features based on value and if you collaborate with your customer. The customer will say, “I have enough now. I don’t need more.” That might occur before the last expected quarter. It might occur before the last expected half-year.
That’s the real ambiguity that managers need to manage. Our estimates will not be correct. Technical leaders, project managers and product owners need to manage risks and value so the project stays on track. Managers need to ask the question: What if the project or program ends early?
Do you know which cards are stuck and require attention, what’s due within the next week/month/quarter, or what cards are assigned to you? These are just a few of the questions that can be answered with LeanKit’s advanced filter functionality. Upcoming filter enhancements will make it even easier for you to get a focused view of […]
I have a new workshop which I’m running in the next few months – “Kanban Thinking – Becoming a Learning Organisation”. It’s an evolution of the work I have been doing privately with organisations, taking the Kanban Thinking model and teaching how to apply it using the Kanban Canvas.
Registration is now open, so if you’d like to attend either of the following dates, follow the links:
- London, October 20-21, with Unboxed Consulting
- Hamburg, November 13-14, with Lean Kanban Central Europe
If you’d like me to consider other locations, or would like me to run the workshop privately in your organisation, please let me know.
Here's my talk from the London Lean Kanban Day 2014. Enjoy!
A number of years ago I worked with an EDI (Electronic Data Interchange) team that was struggling with a large amount of WIP (Work In Process) and slow movement of work through a system with many external dependencies. Work was regularly blocked waiting for unresponsive peers at the other companies. Work would languish in partially completed states and eventually be abandoned, either because the business relationship changed or because the team gave up and turned its attention towards more likely prospects.

It looked like a great place to apply kanban
These sounded like great problems to solve with the help of the Kanban Method, and I was eager to get started. This thinking, however, set us on an educational but somewhat fruitless path. The results weren’t what we expected.

We applied the Kanban Method
We mapped out the value stream, modeled it appropriately in a kanban, and used the kanban system for a while to get visibility into the process and gain understanding. That was a success.
The Kanban Method helped us confirm that each analysis, development and QA step in their value stream took little value-added time. Nothing unusual or out of the ordinary was happening in any of the stages. We weren’t worried about vacations or someone getting hit by a bus. There were no bottlenecks.
The kanban helped us discuss Lean concepts and apply them to our situation.

We got some benefits, but nothing real or substantial – no WIP reduction or lead-time reduction
Our initial goal was to visualize the process, reduce WIP, simplify prioritization, and look for improvement in lead time, especially through reduction in delay. We got the visualization, though we found we didn’t need as much of it as we got. We more fully understood the amount of WIP, but limiting it would not have enabled any additional swarming. Perhaps we got the simplified prioritization, but that came more from understanding the system and lean principles than from the Kanban Method or the kanban (the board) itself.
We didn’t achieve the lead-time reduction. We couldn’t. The delay was outside of our control.

Outside of our control
The root cause was that the team was waiting for external IT shops to complete their part of the equation. It took several rounds of interactions with multiple people outside our company to complete a job. After our team did a little processing, the work item went to sleep in a blocked state and (maybe, someday) would be woken up by the external IT shop if they should decide to respond to our pleading.
Our group would benefit greatly from setting up EDI with their numerous business partners, but the other companies had no incentive to do their part of the connection. (Perhaps you are thinking that we needed to give them a financial incentive. We explored many ideas that just didn’t fit into our sales or business model. Changing the business model was another avenue of exploration, but let’s save that for another post.) When the external group did respond, it was likely because they had some slack time and/or thought the task was interesting or challenging.

It didn’t matter anyway
It really didn’t matter anyway.
Every integration with a 3rd party is of high value to our business — worth the cost of abandoning work on a large percentage of opportunities. Each successful company pushed through the process is so valuable that it more than covers the costs incurred with those that fail.
Therefore, it was more important to have many “at bats” or “sales calls” than it was to have a short lead time. Well, maybe ‘important’ isn’t the right word. Completing work quickly is still very important, but in this case, having more sales calls was achievable, while improving our lead time was not, no matter how important it might be.
A better way to look at it is that we had extra capacity and used it to push more companies through the process (or, to create demand). We reached an equilibrium: a hit suspends further at-bats, while more at-bats ultimately produce more hits. Work with an engaged external IT contact competes with making more sales calls. More sales calls yield additional willing external partners.
Yes, that will build up significant WIP over time. Whether we control WIP or not, many of the external companies aren’t going to cooperate and finish the process. We will abandon a lot of work. We try hard to finish whatever we start, but we can’t tell which work will be abandoned before we start.
Ultimately, we considered the cost of being a low priority and deemed it worth it.

Lesson learned
Until this experience, I didn’t fully understand the lean principle that value trumps flow and waste reduction. I was aware of it, but I only thought I understood it. I knew that sometimes we should accept uneven flow if it helps us get value, but I thought those would only ever be exception cases; the norm, I assumed, should be to optimize for smooth flow. The implication I had missed is that smooth flow isn’t a general prerequisite for value. Considerable waste, huge WIP, and horrible flow might just get you the most value.
(Anyone can get value if you have flow. This kind of environment isn’t for sissies.)
Finally, I began to think about this situation with the EDI team as having options, rather than as a great place to apply the Kanban Method or Lean Principles. I began to see the options in the system trumping the considerable waste, huge WIP and horrible flow.
The moral of the story is that real options thinking, systems thinking and many other such concepts present or yet to come may be more appropriate in some cases than Lean/Kanban thinking. Lean/Kanban thinking is useful, but it isn’t all there is.
Somewhere in the last decade, I had a similar revelation about eXtreme Programming.
With every shiny new hammer, I find more things that look like nails.
On Tuesday the 16th, the Dutch CocoaHeads will be visiting us. It promises to be a great night for anybody doing iOS or OS X development. The night starts at 17:00, with dinner at 18:00.
Are you an iOS/OS X developer who would like to meet fellow developers? Come join the CocoaHeads on September 16th at our office. More details are on the CocoaHeadsNL meetup page.
We are coaching at a new client, helping them transition to Scrum. As a result, we have a group of totally new ScrumMasters. Any new role with new responsibilities is a challenge. For an experienced ScrumMaster, there are many lists of what you should be doing all day, every day. But if you’re brand new, and so is your team, it’s a bit like the blind leading the blind.
Our particular challenge was that none of our new ScrumMasters could understand how this could possibly be a full time role. They have 1 team each.
The first exercise we did was to ask them to write down all the impediments they had noticed in the first 2 weeks. Anything that they thought had slowed their team down. Anything they think might become a problem. Anything they had heard others say. We gave them 5 minutes in silence to make notes and then went around having each explain.
WOW! This worked so well! They had noticed a lot of impediments and potential problems. And they were big, hairy, scary ones. Next we asked: so if your job is to ensure impediments are removed, and we look at this list… do any of you still think this is a part-time role? A few chuckles and looks of surprise; the exercise helped open their minds to the scope of the role.
We didn’t want to totally overwhelm them, though, so we came up with the mind map below of what they need to do in the first few sprints. If you can think of some others, please leave a comment for us.
A ScrumMaster job can be seriously stirring. Depending on the project phase, things can get quite heated: resistance against the change, lack of understanding of the new approach, colleagues who still have to pile on. The hoped-for recognition from outside often fails to appear, and the inner balance to draw it from one’s own heart is missing. So where does it come from – the confidence that everything will turn out well and that you are on the right path? How is it built, this rock in the surf also known as the ScrumMaster?

Have you ever tried meditation?
There are schools of thought that say a modern leader can no longer do without meditation as a way of restoring inner balance. While dwelling in the thought-free space of emptiness, they surface: the boss’s uncomfortable remarks from today, the unsolved problems from the project, the regret over the harsh answer given to a colleague. We sit there, and at some point (with practice it happens faster and faster) we notice what we are actually thinking. Now we become aware of where something threw us off track, even if we had initially managed to push it aside.
There are many variants of meditation. Some focus primarily on creating awareness. The next time we are in a similar situation, we may notice in that very moment: “Ah, this is just like last time.” And if that happens a few times, we may eventually manage to behave differently. Anyone who has ever learned a foreign language knows this effect. You are made aware of a mistake, and the next time you make it, you become unsure. Something was off here, but you don’t quite know yet which variant is the right one. At some point you know immediately, as soon as you hear yourself say the wrong variant, and you can correct yourself on the spot. And finally, you say it correctly the first time.
What I want to express is this: behavior change takes patience, and meditation is a possible first step along the way, because it helps us become aware of the issues that throw us off stride. The second step is the decision to do things differently in the future. And then it takes patience and persistence.
And meditation can do even more. Besides raising awareness as a basis for change, it helps us make peace, right now, with what is and what was. The space without thoughts is a good place to set down one’s wounds and scrapes. You can simply hand them over to the emptiness like a parcel. Without thinking about them further, you dive back into the thought-free space. You remain there until the next thought surfaces that moves you so much that it manages to carry you out of the emptiness and back into your circling thoughts. And the cycle begins anew, until the heart is at peace and can sit still.
If the mind is far too agitated, the situation needs explicit processing. A conversation with friends or family often helps here. For persistent issues, working with a coach and using self-coaching methods, such as a journal, is recommended. Sport has also proven very effective for letting off steam in the short term.
Dear ScrumMasters, what are your recipes for inner restlessness, for a lack of equanimity and lost composure, or even for the loss of joy and enthusiasm for your work?
Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!
No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.
And even if you are just getting started with Continuous Delivery, there is a feasible step to take you forward.
In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took
This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post, because the great majority of you have gone through these steps already.

Step 0: Your very first lines of code
Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.
None of us was born with an innate skill for delivering software in a certain way. However, many of us were taught a way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control
At some point during your studies or career, you were introduced to version control. I remember starting with CVS, migrating to Subversion, and currently using Git. Each of these systems is an improvement over the previous one.
It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process
Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.
Even if you deliver to production less than twice a year, you are better off than a company that delivers its code unpredictably, untested and unmanaged. Or worse, a company that edits its code directly on a production machine.
In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.
So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts
Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?
Let me guess: you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organisation is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.
My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?
These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and a lack of management support.
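The kind of one-man script described here is easy to picture. A minimal sketch of such recurring-task automation, with invented paths and an invented retention policy:

```python
import shutil
import time
from pathlib import Path


def archive_old_logs(log_dir, archive_dir, max_age_days=30):
    """Move *.log files older than max_age_days into archive_dir.

    Returns the names of the files moved, sorted alphabetically.
    """
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for log_file in Path(log_dir).glob("*.log"):
        # Use the file's modification time to decide whether it is stale.
        if log_file.stat().st_mtime < cutoff:
            shutil.move(str(log_file), str(archive / log_file.name))
            moved.append(log_file.name)
    return sorted(moved)
```

The point of Continuous Delivery is to take exactly this kind of ad-hoc automation and give it strategy and management backing.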
If you don't have this guy working for you, then you may have a bigger step to take when continuing towards the next step of Continuous Delivery. To successfully adopt Continuous Delivery in the long run, you are going to need someone like him.

Following steps
In the next parts, we will look at the following steps towards becoming a world champion at delivering software:
- Step 4: Continuous Delivery
- Step 5: Continuous Deployment
- Step 6: "Hands-off"
- Step 7: High Scalability
Stay tuned for the following posts.
My 7 year old son is at home today… he’s not sick or anything. He’s just in between things for a day. His summer care ended on Friday, and the school year starts on Tuesday. Today, being Monday when I’m writing this, he is at home with me.
Well… “home” is a bit wrong… so far, we’ve gone to a restaurant for breakfast and to the park to play. Shortly after I finish writing this, we’re going to the library and then lunch. We’ll likely end up at the children’s museum this afternoon, and somewhere in the mix we’ll watch a movie together. It’s going to be a fun day for me and for him – spending time with daddy and doing lots of fun things.

Except…
I have to be honest with you – days like today are difficult for me. I’m an obsessive worker, impatient to get to the next thing and constantly thinking about what I want to be doing instead of what I am currently doing. My brain just won’t shut off. It’s a curse (and a blessing at times) and it frustrates me because I actually want to spend time with my son, focused on what we are doing together. But there it is… in the back of my brain, trying to fight its way forward… the thing I want to do instead of what I’m currently doing. And it’s always there, always looking for whatever the next thing is.
Part of the problem is my inability to plan correctly. If you’ve been listening to the Entreprogrammers podcast recently, you’ll have heard me talk about this some. I have a hard time looking at the big picture and determining what is / is not important, when I’m stuck in the weeds of doing the actual work. It just doesn’t work for me to switch back and forth between high level planning and detailed task work.
I do my best and plan specific things for specific days – like this email. I have on my task list to write this every Monday so I can schedule it and have it emailed to you on Wednesday. But sometimes life gets in the way, and I end up torn between my scheduled things to do and spending time with my son.

Do Your Best. Spend Time With Family.
Sometimes I need to put the schedule down and take care of what life throws my way. I spent all last week being sick and barely getting things done. Today my son is with me. And I’m torn between the things I should be doing, and the things I really should be doing. It’s especially difficult with our ever-increasingly-connected lives. Smart phones, tablet PCs, free WiFi everywhere…
It’s easy to obsess over work related things. We have schedules, we have habits. We work, we go home and spend time with the family. But life throws us curve balls and wrecks our plans every day. Yet planning is indispensable, even if plans and schedules end up being useless.
The important thing to remember in all of this is that family should come first. In spite of what I want to be doing, I need to spend time with my son today. I’ll likely find a few moments in which I can take care of a few things, like this email, but I am going to do my best to spend time with my son and not worry about the projects and work.
– Derick
Personas and scenarios can be powerful tools for driving adoption and business value realization.
All too often, people deploy technology without fully understanding the users that it’s intended for.
Worse, if the technology does not get used, the value does not get realized.
Keep in mind that the value is in the change.
The change takes the form of doing something better, faster, cheaper, and behavior change is really the key to value realization.
If you deploy a technology, but nobody adopts it, then you won’t realize the value. It’s a waste. Or, more precisely, it’s only potential value. It’s only potential value because nobody has used it to change their behavior to be better, faster, or cheaper with the new technology.
In fact, you can view change in terms of behavior changes:
What should users START doing or STOP doing, in order to realize the value?
Behavior change becomes a useful yardstick for evaluating adoption and consumption of technology, and a significant proxy for value realization.

What is a Persona?
I’ve written about personas before in Actors, Personas, and Roles, MSF Agile Persona Template, and Personas at patterns & practices, and Microsoft Research has a whitepaper called Personas: Practice and Theory.
A persona, simply defined, is a fictitious character that represents a type of user. Personas are the “who” in the organization. You use them to create familiar faces, to inspire project teams to know their clients, and to build empathy and clarity around the user base.
Using personas helps characterize sets of users. It’s a way to capture and share details about what a typical day looks like and what sorts of pains, needs, and desired outcomes the personas have as they do their work.
You need to know how work currently gets done so that you can provide relevant changes with technology, plan for readiness, and drive adoption through specific behavior changes.
Using personas can help you realize more value, while avoiding “value leakage.”

What is a Scenario?
When it comes to users, and what they do, we're talking about usage scenarios. A usage scenario is a story or narrative in the form of a flow. It shows how one or more users interact with a system to achieve a goal.
You can picture usage scenarios as high-level storyboards. Here is an example:
In fact, since scenario is often an overloaded term, if people get confused, I just call them Solution Storyboards.
To figure out relevant usage scenarios, we need to figure out the personas that we are creating solutions for.

Workforce Analysis with Personas
In practice, you would segment the user population and then assign personas to the different user segments. For example, say there are 20,000 employees: 3,000 of them are business managers, 6,000 are sales people, and 1,000 are product development engineers. You could create a persona named Mary to represent the business managers, a persona named Sally to represent the sales people, and a persona named Bob to represent the product development engineers.
This sounds simple, but it’s actually powerful. If you do a good job of workforce analysis, you can better determine how many users a particular scenario is relevant for. Now you have some numbers to work with. This can help you quantify business impact. It can also help you prioritize. If a particular scenario is relevant for 10 people, but another is relevant for 1,000, you can evaluate actual numbers.

| Persona | User Population |
| --- | --- |
| Persona 1, “Mary” | 3,000 |
| Persona 2, “Sally” | 6,000 |
| Persona 3, “Bob” | 1,000 |
| Persona 4, “Jill” | 5,000 |
| Persona 5, “Jack” | 5,000 |

Each of the ten scenarios is then marked against the personas it applies to in a coverage matrix. In this example, Scenarios 6 and 9 applied to all five personas, while the other scenarios each applied to only one or two.

Analyzing a Persona
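As a sketch of how this kind of persona/scenario matrix turns into numbers you can prioritize with (the persona populations are from the text; the scenario-to-persona mapping here is hypothetical):

```python
# Persona populations as given in the text.
populations = {"Mary": 3000, "Sally": 6000, "Bob": 1000, "Jill": 5000, "Jack": 5000}

# Which personas each scenario is relevant for -- this mapping is invented
# for illustration, not taken from the original matrix.
coverage = {
    "Scenario 6": ["Mary", "Sally", "Bob", "Jill", "Jack"],  # company-wide
    "Scenario 3": ["Bob"],                                   # engineers only
}


def scenario_reach(coverage, populations):
    """Total number of users each scenario is relevant for."""
    return {scenario: sum(populations[p] for p in personas)
            for scenario, personas in coverage.items()}


reach = scenario_reach(coverage, populations)
# A company-wide scenario reaches all 20,000 employees, while an
# engineer-only scenario reaches 1,000 -- actual numbers to prioritize with.
```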
Let’s take Bob for example. As a product development engineer, Bob designs and develops new product concepts. He would love to collaborate better with his distributed development team, and he would love better feedback loops and interaction with real customers.
We can drill in a little bit to get a better picture of his work as a product development engineer.
Here are a few ways you can drill in:
- A Day in the Life – We can shadow Bob for a day and get a feel for the nature of his work. We can create a timeline for the day and characterize the types of activities that Bob performs.
- Knowledge and Skills - We can identify the knowledge Bob needs and the types of skills he needs to perform his job well. We can use this as input to design more effective readiness plans.
- Enabling Technologies – Based on the scenario you are focused on, you can evaluate the types of technologies that Bob needs. For example, you can identify what technologies Bob would need to connect and interact better with customers.
Another approach is to focus on the roles, responsibilities, challenges, work-style, needs and wants. This helps you understand which solutions are appropriate, what sort of behavior changes would be involved, and how much readiness would be required for any significant change.
At the end of the day, it always comes down to building empathy, understanding, and clarity around pains, needs, and desired outcomes.

Persona Creation Process
Here’s an example of a high-level process for persona creation:
- Kickoff workshop
- Interview users
- Create skeletons
- Validate skeletons
- Create final personas
- Present final personas
Doing persona analysis is actually pretty simple. The challenge is that people don’t do it, or they make a lot of assumptions about what people actually do and what their pains and needs really are. When’s the last time somebody asked you what your pains and needs are, or what you need to perform your job better?

A Story of Using Personas to Create the Future of Digital Banking
I know of one example where a large bank transformed itself by focusing on its personas and scenarios.
It started with one usage scenario:
Connect with customers wherever they are.
This scenario was driven from pain in the business. The business was out of touch with customers, and it was operating under a legacy banking model. This simple scenario reflected an opportunity to change how employees connect with customers (through Cloud, Mobile, and Social).
On the customer side of the equation, customers could now have virtual face-to-face communication from wherever they are. On the employee side, it enabled a flexible work-style, helped employees pair up with each other for great customer service, and provided better touch and connection with the customers they serve.
And in the grand scheme of things, this helped transform a brick-and-mortar bank to a digital bank of the future, setting a new bar for convenience, connection, and collaboration.
Here is a video that talks through the story of one bank’s transformation to the digital banking arena:
In the video, you’ll see Blessing Sibanyoni, one of Microsoft’s Enterprise Architects in action.
If you’re wondering how to change the world, you can start with personas and scenarios.
A situation from one of my recent client projects: the software solution, painstakingly developed over more than five months, is in its big final test by all the business departments. It is a time-consuming procedure, since many stakeholders are involved and a great many test cases have to be worked through. A problem found in the middle of the integration test, which at first looks small, increasingly turns into a show-stopper during analysis. Important plausibility checks for the business process are not being applied because of faulty logic. The to-do is quickly clear: the method has to be rewritten and improved, and the changes themselves are manageable.
What sounds simple at first turns out to be extremely problematic at second glance. The change has to be made in a central building block of the business process logic, and that has major implications for the business processes. The software and processes are only poorly and patchily covered by unit tests and automated test scripts. Changes could mean large-scale retesting of all business processes, which would delay the go-live of the IT solution by weeks. Good ideas are now in demand …
After a hectic, discussion-heavy brainstorming session, the team found a seemingly sensible way out of the predicament:
- In a first step, the affected method is to be extensively safeguarded by numerous unit tests at its interfaces.
- In a second step, the method is reworked, with the unit tests verifying that it still behaves the same.
- In a third step, selected test cases from the integration test are repeated as spot checks.
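The first two steps above amount to writing characterization tests: pinning down the current behavior of a method at its interface before touching its internals. A minimal sketch in Python, where `validate_order` and its rules are hypothetical stand-ins for the real business logic:

```python
# Characterization tests: record the method's current behavior as
# executable facts before reworking it. Names and rules below are
# illustrative, not the actual project's code.

def validate_order(amount, customer_type):
    """Legacy plausibility check (simplified stand-in)."""
    if customer_type == "internal":
        return True  # internal orders bypass the amount limit
    return 0 < amount <= 10_000


# Step 1: safeguard the interface with unit tests.
def test_rejects_zero_amount():
    assert validate_order(0, "external") is False

def test_accepts_amount_within_limit():
    assert validate_order(500, "external") is True

def test_internal_customers_bypass_limit():
    assert validate_order(99_999, "internal") is True

# Step 2: rework the method's internals; these tests must stay green.
# Step 3: re-run a sample of integration test cases as a spot check.
```

The point is that the tests describe behavior only at the interface, so the internals can be rewritten freely underneath them.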
If I had to find an analogy for this approach, I would choose open-heart surgery. In medicine, too, the aim is to isolate the heart as well as possible from the rest of the organs while keeping it functioning. Both approaches carry risks, but the risks are far lower than with a rigorous approach without prior safeguarding or isolation.
Unsurprisingly, safeguarding the method after the fact is far more laborious than if it had been built test-driven from the start. This example showed me once again how important it is to conscientiously safeguard existing functionality so that changes can be made to the code. Those changes will come, whether driven by bugs or by new requirements.
Respond to change … start coding test-driven!
- Unit Testing: Still Widely Informal / Methods and Tools
- Test Driven Development (TDD) und Scrum | Teil 1
- Certification Test is Postponed
It’s that time of year again to hold our annual Austin Code Camp, hosted by the Austin .NET User Group:
We’ve got links on the site for schedule, registration, sponsorship, location, speaker submissions and more.
Hope to see you there!
And because I know I’m going to get emails…
Charging for Austin Code Camp? Get the pitchforks and torches!
In the past, Austin Code Camp has been a free event with no effective cap on registrations. We could do this because the PEC had a ridiculous amount of space and could accommodate hundreds of people. With free registration, we would see a 50% drop-off between registrations and actual attendance. Not very fun to plan food with such uncertainty!
This year we have a good amount of space, but not infinite space. We can accommodate the typical number of people that come to our Code Camp (150-175), but for safety reasons we can’t put an unlimited cap on registrations as we’ve done in the past.
Because of this, we’re charging a small fee to reserve a spot. It’s not even enough to cover lunch or a t-shirt or anything, but it is a small enough fee to ensure that we’re fair to those that truly want to come.
Don’t worry though, if you can’t afford the fee, send me an email, and we can work it out.
Post Footer automatically generated by Add Post Footer Plugin for wordpress.
For those who attended last night’s Agile Denver meetup, here are the slides and some additional resources for you…
For those who couldn’t make it, my slides aren’t intended to tell the whole story on their own, but you may be able to get some value from them. The most common question I’m getting from people who see just the slides is about the source for the charts on pages 10-12.
The line chart is from J. Richard Hackman and is simply math—the number of unique links between individuals in a group of size N is N(N-1)/2. My stacked bar chart builds on that, arguing from reason rather than from empirical data. Given a quadratically increasing number of links, a non-zero coordination cost per link, and fixed capacity per person, at some point coordination overtakes other activities and continues to do so. This is one of those situations where collecting actual data is near impossible but where we can still reason about the shape of the curves. We can do things to move the inflection point of the curve, which is why there are no numbers, but the shape will stay the same.
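The link formula is easy to see in a few lines of Python, which also shows how quickly coordination load grows as a team adds members:

```python
def unique_links(n):
    """Number of unique pairwise communication links in a group of size n."""
    return n * (n - 1) // 2

# The link count grows quadratically with team size: a team of 5 has
# 10 links, while a team of 10 already has 45.
for size in (3, 5, 7, 10, 15):
    print(size, unique_links(size))
```

Multiply each link by some non-zero coordination cost and divide by a person’s fixed capacity, and the shape of the stacked bar chart falls out.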
An excellent overview of Cynefin from Liz Keogh
More from me on Conway’s Law and T-shaped people
The post I referenced from Chris Matts on staff liquidity
Towards the end of the session, I mentioned Geonetric’s company-wide Agile adoption. Here’s the latest post about it on their blog.
Were you at the meetup? Share your biggest takeaway in the comments below…
Whenever I run a Certified Scrum Product Owner training session, one concept stands out as critical for participants: the relationship of the Product Owner to the technical demands of the work being done by the Scrum team.
The Product Owner is responsible for prioritizing the Product Backlog. This responsibility is, of course, also matched by their authority to do so. When the Product Owner collaborates with the team in the process of prioritization, there may be ways in which the team “pushes back”. There are two possible reasons for push-back. One is good, one is bad.
Bad Technical Push-Back
The team may look at a product backlog item or a user story and say “Oh gosh! There’s a lot there to think about! We have to build this fully-architected infrastructure before we can implement that story.” This is old waterfall thinking. It is bad. The team should always be thinking (and doing) YAGNI and KISS. Technical challenges should be solved in the simplest responsible way. Features should be implemented with the simplest technical solution that actually works.
As a Product Owner, one technique you can use to help the team with this: when the team asks questions, aggressively keep the user story as simple as possible. The questions that are asked may lead to the creation of new stories, or splitting the existing story. Here is an example…
Suppose the story is “As a job seeker I can post my resume to the web site…” If the technical team makes certain assumptions, they may create a complex system that allows resumes to be uploaded in multiple formats with automatic keyword extraction, and even beyond that, they may anticipate that the code needs to be ready for edge cases like WordPerfect format. The technical team might also assume that the system needs a database schema that includes users, login credentials, one-to-many relationships with resumes, detailed structures about jobs, organizations, positions, dates, educational institutions, etc. The team might insist that creating a login screen in the UI is an essential prerequisite to allowing a user to upload their resume. And as for business logic, they might decide that in order to implement all this, they need some sort of standard intermediate XML format that all resumes will be translated into so that searching features are easier to implement in the future.
It’s all CRAP, bloat and gold-plating.
Because that’s not what the Product Owner asked for. The thing that’s really difficult for a team of techies to get with Scrum is that software is to be built incrementally. The very first feature built is built in the simplest responsible way without assuming anything about future features. In other words, build it like it is the last feature you will build, not the first. In the Agile Manifesto this is described as:
Simplicity, the art of maximizing the amount of work not done, is essential.
The second feature the team builds should only add exactly what the Product Owner asks for. Again, as if it was going to be the last feature built. Every single feature (User Story / Product Backlog Item) is treated the same way. Whenever the team starts to anticipate the business in any of these three ways, the team is wrong:
- Building a feature because the team thinks the Product Owner will want it.
- Building a feature because the Product Owner has put it later on the Product Backlog.
- Building a technical aspect of the system to support either of the first types of anticipation, even if the team doesn’t actually build the feature they are anticipating.
Okay, but what about architecture? Fire your architects. No kidding.¹
Good Technical Push-Back
Sometimes stuff gets non-simple: complicated, messy, hard to understand, hard to change. This happens despite us techies all being super-smart. Sometimes, in order to implement a new feature, we have to clean up what is already there. The Product Owner might ask the Scrum Team to build this Product Backlog Item next and the team says something like: “yes, but it will take twice as long as we initially estimated, because we have to clean things up.” This can be greatly disappointing for the Product Owner. But, this is actually the kind of push-back a Product Owner wants. Why? In order to avoid destroying your business! (Yup, that serious.)
This is called “Refactoring” and it is one of the critical Agile Engineering practices. Martin Fowler wrote a great book about it about 15 years ago. Refactoring is, simply, improving the design of your system without changing its business behaviour. A simple example is changing a set of 3 radio buttons in the UI to a drop-down box with 3 options… so that later, the Product Owner can add 27 more options. Refactoring at the level of code is often described as removing duplication. But some types of refactoring are large: replacing a relational database with a NoSQL database, moving from Java to Python for a significant component of your system, doing a full UX re-design on your web application. All of these are changes to the technical attributes of your system that are driven by an immediate need to add a new feature (or feature set) that is not supported by the current technology.
The Product Owner has asked for a new feature, now, and the team has decided that in order to build it, the existing system needs refactoring. To be clear: the team is not anticipating that the Product Owner wants some feature in the future; it’s the very next feature that the team needs to build.
This all relates to another two principles from the Agile Manifesto:
Continuous attention to technical excellence and good design enhances agility.
The best architectures, requirements, and designs emerge from self-organizing teams.
In this case, the responsibilities of the team for technical excellence and creating the best system possible override the short-term (and short-sighted) desire of the business to trade off quality in order to get speed. That trade-off always bites you in the end! Why? Because of the cost of fixing quality problems increases exponentially as time passes from when they were introduced.
Refactoring is not a bad word.
Keep your code clean.
Let your team keep its code clean.
Oh. And fire your architects.
¹ I used to be a senior architect reporting directly to the CTO of Charles Schwab. Effectively, I fired myself and launched an incredibly successful enterprise architecture re-write project… with no up-front architecture plan. Really… fire your architects. Everything they do is pure waste and overhead. Someday I’ll write that article.
How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?
The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members and with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized when the sprint progresses.
The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?
Specify the story
In the planning session, after a story is considered ready enough to be pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called ‘Specification by Example’. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.
Having the whole team work out the specifications and examples helps the team stay focused on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks. Writing the specifications will also help to determine whether a story is ready enough. Once the sprint progresses and all the tests are green, the functional part of the story should be done.
You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view of the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn-down charts, the functional tests provide a good and accurate view of the sprint progress.
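The same example-driven style can be sketched with plain pytest instead of FitNesse or Cucumber. The story and the shipping rule below are hypothetical illustrations, not from the post:

```python
# Specification by Example, sketched as a parametrized pytest test.
# Each example row turns one requirement into a concrete, testable fact.
import pytest

def shipping_cost(order_total):
    """Rule under specification: orders of 50 or more ship free."""
    return 0 if order_total >= 50 else 5

@pytest.mark.parametrize("order_total, expected_cost", [
    (49.99, 5),   # just under the threshold: standard shipping
    (50.00, 0),   # exactly at the threshold: free shipping
    (120.00, 0),  # well above the threshold: free shipping
])
def test_shipping_cost_examples(order_total, expected_cost):
    assert shipping_cost(order_total) == expected_cost
```

Because the examples run against the real code, a green run means the rule is actually implemented, not just agreed on in the planning session.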
Design the solution
Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details to prevent big up-front designs and losing the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.
The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of invisible mental images in the heads of team members, there is a tangible image shared with everyone.
The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.
The design can easily be turned into a digital artefact by photographing it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.
The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding of the team is more visible and can be validated, throughout the sprint. The team has become more transparent.
It might be a good practice to have the developers write the specifications, and the testers or analysts draw the designs on the board. This provokes more communication, by getting people out of their comfort zones and forcing them to ask more questions.
There are more compelling reasons to implement (or not) something like Specification by Example or to have the team make design overviews. But it also helps the team stay on the same page when there are visible and testable artefacts to rely on during the sprint.