
Feed aggregator

8 hours a day of Scrum Master stuff?

Growing Agile - Thu, 06/16/2016 - 13:02
When we coach teams new to agile we inevitably get asked how many teams a Scrum Master should have. We then use the quote “A good Scrum Master can have two teams, a great Scrum Master will only have one team.” This is met with blank stares. The Scrum Master role is so different to […]
Categories: Companies

The Purpose Alignment Model

Xebia Blog - Thu, 06/16/2016 - 10:45
When scaling agile / Scrum, we invariably run into the alignment vs autonomy problem. In short: you cannot have autonomous self-directing teams if they have no clue in what direction they should go, or even shorter: Alignment breeds autonomy. But how do we create alignment? And what tools can we use to quickly evaluate
Categories: Companies

Scrum Day Europe, Amsterdam, Netherlands, July 7 2016

Scrum Expert - Thu, 06/16/2016 - 10:00
Scrum Day Europe is a one-day conference dedicated to Scrum. The 2016 edition of this conference dedicated to Agile project management and Scrum will take place in Amsterdam and will feature local and international expert speakers. It is a major event for all European Scrum practitioners. In the agenda of Scrum Day Europe you can find topics like “Scrum Turns 21, what is next for Scrum for the next 20 years”, “When Agile is not enough, responsiveness to the rescue”, “The Lean Startup powerpack, extending the model for easy practical use”, “Saying goodbye to command and control for good: how completely hierarchy-free companies can take Scrum to the next level”, “A retrospective on Leading Agile Transformations”, “Acceptance test driven development @ Scale”, “The systemic Scrum Master”, “Implementing Agile for embedded software development in Lely”, “Serious Play on the Path to Agility”, “Using Lean UX to build the right things”, “Product Owner Value Game” or “Agile Myth Busters”. Web site: http://www.scrumdayeurope.com/ Location for the Scrum Day Europe conference: Pakhuis de Zwijger, Piet Heinkade 179, 1019 HC, Amsterdam, Netherlands
Categories: Communities

Links for 2016-06-14 [del.icio.us]

Zachariah Young - Wed, 06/15/2016 - 09:00
Categories: Blogs

Agile Retrospectives workshop on 21 June

Ben Linders - Wed, 06/15/2016 - 07:44

For the workshop Valuable Agile Retrospectives in Utrecht on 21 June, places are still available. Register now for this successful workshop. Every second and subsequent colleague receives a 25% discount when you register together.

Retrospectives help you apply Agile practices effectively and continuously improve your teams. With the help of a toolbox of retrospective exercises, Scrum masters and Agile coaches get more out of their teams and keep employees happy.



In this workshop you learn the why, what and how of retrospectives, and you practice in teams with various ways to run retrospectives in Scrum, Kanban or SAFe. You will use the book Getting Value out of Agile Retrospectives / Waardevolle Agile Retrospectives. Every participant receives a personal copy of this ebook in English or Dutch.

The workshop is given in English so that professionals who do not speak Dutch can also take part. If all participants speak Dutch, I will of course give the workshop in Dutch :-).

Sign up now for this successful workshop. Register.

This workshop is also given in Athens and Kladno (near Prague). I also give tailored in-house workshops. Interested? Contact me!

 +31 6 2901 3863
 info@benlinders.com
@BenLinders
Ben Linders
Ben Linders Advies
Tilburg, The Netherlands

Categories: Blogs

The Retro Game

The Hunt for Better Retrospectives

The rumours had started to spread: retrospectives at our organization were flat, stale and stuck in a rut. The prevailing thought was that this was stalling the pace of continuous improvement across our teams. In truth, I wasn’t sure whether this was true at all; it’s a complex problem with many possible contributing factors. Here are just some possible alternative or co-contributing causes: how the teams are organized, the level of safety, mechanisms to deal with impediments across the organization, cultural issues, levels of autonomy and engagement, competence & ability and so on…

Despite this, it didn’t hurt to have a look for some inspiration on good retrospectives. I really liked Gitte Klitgaard’s talk called Retrospectives are Boring and Useless – Or are They? In particular the parts around preparing and establishing safety.

On the theme of safety, I thought we could try to go as far as having fun; we’d already had lots of success with the getKanban game (oh Carlos you devil!). Where it all tied together for me was being inspired by the great question-based approach from cultureqs.com that I’d had a chance to preview at Spark.

If I could create a game with the right prepared questions, we could establish safety, the right dialogue and maybe even have some fun.

The Retro Game

This is a question-based game I created that you can use to conduct your next retro for teams of up to 10 people. The rules of the game are fairly simple and you could play through a round or two in about 1 to 2 hours depending on team size and sprint duration. Prep time for the facilitator is about 2-4 hours.


Prepping to play the game

You, as facilitator, will need to prepare 3 types of questions ahead of time, printed (or written) on the back of card-stock cards.

One question per card. Each question type has its own card colour. About 8 questions per category is more than enough to play this game.

The 3 types of questions are:

In the Moment – These are questions that are currently on the mind of the team. These could be generated by simply connecting with each team member ahead of time and asking, “if you could only talk about one or two things this retro, what would it be?” If for example they responded “I want to talk about keeping our momentum”, you could create a question like “what would it take to keep our momentum going?”

Pulse Check – These are questions that are focused on people and engagement. Sometimes you would see similar questions on employee satisfaction surveys. An example question in this category could be “What tools and resources do we need to continue to be successful?”

Dreams and Worries – This is a longer-term view of the goals of the team. If the team has had any type of Lift Off or chartering exercise in the past, these would be questions connected to any goals and potential risks that have been previously identified. For example, if one of the team’s goals is to ship product updates every 2 weeks, a question could be “What should we do next to get closer to shipping every 2 weeks?”

The face-up side of the card should indicate the question type and have room to write down any insights and actions.

You will also need:

  • To print out the game board
  • To print out the rule card
  • To bring a six-sided die
Playing the Game

Players sit on the floor or at a table around the game board. The cards are in 3 piles, grouped by type, with the questions face down.


  • The person whose birthday is furthest away goes first.
  • On their turn, they roll the die.
  • They then choose a card from the pile based on the roll: 1 through 3 is an “In the Moment” card, 4 is a “Pulse Check” and 5 or 6 is a “Dreams & Worries” card.
  • They read the question on the card out loud and then pass the card to the person on their right.
    • The person on their right is the scribe; they capture notes in the Insight and Actions boxes of the card for this round.
  • Once they have read the question, they have a chance to think and then answer it out loud to the group. Nobody else gets to talk.
  • Once they’ve answered the question, others can provide their thoughts on the subject.
  • After 3 minutes, you may wish to move on to the next round.
  • At the end of each round the person whose turn it was chooses the person who listened and contributed to the discussion best. That person is given the card to keep.
  • The person to their left is given the die and goes next.
Winning the Game
  • The game ends at 10 minutes prior to the end of the meeting.
  • At the end of the game, the person with the most cards wins!
  • The winner gets the bragging rights (and certificate) indicating they are the retrospective champion!
  • You should spend the last 10 minutes reflecting on the experience and organizing the action items identified.
Concepts at Play


Context & Reflection – Preparation is key, particularly for the “In the Moment” section. The topics will be relevant and connect with what the team wants to talk about. Also when presented in the form of a question they will likely trigger reflection for all those present.

Sharing the Voice – Everyone gets a chance to speak and be heard without interruptions. The game element also incentivises quality participation.

Coverage of topic areas – The 3 question categories spread the coverage across multiple areas, not just the items in the moment. The probabilities are not equal, however: for example, there is a 50% chance of an “In the Moment” card being chosen on each turn.
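To make that split concrete, here is a minimal sketch (Python, purely illustrative and not part of the original game materials) that derives the category probabilities from the die mapping given in the rules above:

```python
from fractions import Fraction

# Die faces mapped to card categories, as in the rules:
# 1-3 -> "In the Moment", 4 -> "Pulse Check", 5-6 -> "Dreams & Worries"
FACE_TO_CATEGORY = {
    1: "In the Moment",
    2: "In the Moment",
    3: "In the Moment",
    4: "Pulse Check",
    5: "Dreams & Worries",
    6: "Dreams & Worries",
}

def category_probabilities(mapping):
    """Return the probability of drawing each category on a fair six-sided die."""
    probabilities = {}
    for category in mapping.values():
        probabilities[category] = probabilities.get(category, 0) + Fraction(1, 6)
    return probabilities

if __name__ == "__main__":
    for category, p in category_probabilities(FACE_TO_CATEGORY).items():
        print(f"{category}: {p} ({float(p):.0%})")
    # In the Moment: 1/2 (50%), Pulse Check: 1/6 (17%), Dreams & Worries: 1/3 (33%)
```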

Fun & Safety – The game element encourages play and friendlier exchanges. You are likely to have dialogue over debate.

Want to play the game?

I’d love to hear how this game worked out for you. I’ve included everything you need here to set up your own game. Let me know how it went and how it could be improved!

Resources:
Retro Game – Game Board
Retro Game – Rules
Retro Game – Card Template
Retro Game – Champion Certificate


Martin Aziz
Blog
@martinaziz
LoyaltyOne

 

Business vector designed by Freepik

 


The post The Retro Game appeared first on Agile Advice.

Categories: Blogs

Adapting Change Management Models to Support Agile Transformation

BigVisible Solutions :: An Agile Company - Tue, 06/14/2016 - 20:45

In my last blog post, we explored some of the classic models and frameworks that have been used successfully in the past to deal with large organizational change initiatives, such as ADKAR and Kotter’s 8-step model. We also advocated that organizations that are undergoing Agile transformation use one of these classic change models to manage the change.

In this post we shine a light on the weaknesses of the traditional change models in the context of Agile transformation. The fundamental flaw in all of these models is that they emerged to address phase-based, sequential project management and delivery with all of its upfront planning, impractical governance constraints and long phases with little feedback and actual user input. In short, none of the traditional change management models are Agile because they were designed in a much stabler world than the one we live in today. For the purposes of this blog, what we mean by “Agile” or “an Agile mindset” can be summed up in five separate but interwoven dimensions:

  1. Iterative and Incremental
  2. Transparency
  3. Collaboration
  4. Rapid Feedback
  5. Empowerment

Now let’s imagine for a moment an organization that has successfully transformed its software organization to Agile, and has adopted and internalized these Agile values. (Please don’t be confused by the fact that we use the word “values” in this work a little differently than how that word was used in the Agile Manifesto, the work that originally inspired much of our thinking.) In this brave new world, the organization is able to release valuable working software in an almost continuous state and rapidly respond to changing conditions. As software is delivered faster than ever, new challenges emerge. Downstream from software delivery, new bottlenecks emerge in testing and deployment and upstream delays mount as software delivery waits for new requirements. The value of speeding up software delivery is lost if the rest of the value stream is still bound to stage-based, linear processes and a waterfall-like mentality. The solution is to apply the Agile principles and practices used in software development more broadly throughout the organization.

For this to happen, new stakeholders will need to begin the process of adopting an Agile mindset and practices. However, traditionally trained change agents will likely not have the tool set that is needed to adaptively pivot the change program that they designed for software delivery teams to meet the emergent needs of these unanticipated stakeholders. The lack of the change agents’ ability to effectively adapt the change program limits the value that could otherwise be delivered and in some scenarios jeopardizes the entire program.

The problem is, as I said above, traditional change management models were originally conceived to manage large batches of change sequentially over a long duration. However, in the same way that the world of work has been transformed by Agile to produce smaller increments of value more quickly and frequently, change management must also be adapted to work at the speed of Agile. In the example given above, traditional change agents cannot hope to effect a successful Agile transformation without transforming themselves to become Agile. Before they can help others undergo the necessary personal and interpersonal transformations that will ensure the longevity and success of the Agile organization, change management practices and indeed the traditional mindset guiding them must first become Agile. The place to start, then, is with the models themselves, though not any particular one.

This blog post looks into how we can make change management models more Iterative and Incremental. We will explore some of the other Agile concepts and how they can be brought into change management models in future blog posts.

Iterative and Incremental

To understand the benefit of combining iterative and incremental, we have to understand the constituents. Steven Thomas has an excellent blog post that compares the “iterative”, “incremental” and “iterative and incremental” processes using painting the Mona Lisa as an example.

Using an iterative approach you create a work (for example, the Mona Lisa) through a process of continuous elaboration: starting with a prototype, then a rough design, then a more detailed draft, and so forth, until the entire picture is completed. An incremental approach instead works on completing one section at a time (for example, the upper left corner of the Mona Lisa) before moving onto another section of the picture (for example, the upper middle section), before moving onto the next section, and the next until you complete the entire picture. The two approaches complement each other when combined. With the iterative and incremental approach, you would prototype the entire picture while also working to deliver a small increment of value. For example, you may need to sketch out the entire picture so you see how the parts work together (like setting up software architecture). But you can also focus on painting a complete increment of value so that your fans (i.e., customers) have something that they can put to use and appreciate quickly (e.g., a new product feature).

What is Iterative and Incremental Change Management?

How might change management shift to be more iterative and incremental? The iterative aspect would be laying down a rough framework for the change, with rough estimates of how long certain organizational shifts might take, while acknowledging that people are dynamic creatures who change at individual paces and intensities. Traditional change management models, designed to riff off of waterfall processes, had big upfront plans that were simply rolled out, the assumption being that people would just change because they were told to. An Agile model would instead plot out a rough map of the transformation required for an Agile mindset to take root and flourish, with nearer-term goals more clearly defined than farther-term ones. In terms of increments of value, change agents should focus on delivering a single increment of valuable change. For example, each sprint may have a mindset theme, such as transparency, and the change agent(s) can focus on ways that the team can measurably and visibly improve transparency, for example through visible work boards. In this example, the change organization can outline the long-term goals of long-lasting, sustainable change at the individual, group (e.g., team, department) and enterprise levels while also demonstrating the ability to instantiate some Agile change at the individual and/or group level. The overall change roadmap could be projected to take a year, but only Q1’s goals would be decently detailed, with only the first month’s goals actually broken down into user stories.

A further example of how an organization might apply the concepts of iterative and incremental to their change management would be by using a backlog to track and prioritize change activities. We could be iterative by having a series of related change activities on a backlog oriented around some of the early aspects of ADKAR, such as awareness and desire, while later items on the backlog might be oriented around knowledge, ability and reinforcement. Likewise the concept of incremental could be used by going through a sequence of iterations of ADKAR for different parts of the organization. Maybe we would first iterate on an increment of change through the Northeast region, followed by a series of backlog items for an increment of change through the Europe region.
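As a rough illustration of that idea (a hypothetical sketch, not a tool the post prescribes), such a change backlog could be modelled as items tagged with an ADKAR phase and an organizational increment, ordered so that one region’s increment is worked through the ADKAR iterations before the next region’s. The item descriptions and regions below are invented for illustration:

```python
from dataclasses import dataclass, field

# ADKAR phases, in the order we iterate through them for each increment of change.
ADKAR_PHASES = ["Awareness", "Desire", "Knowledge", "Ability", "Reinforcement"]

@dataclass
class ChangeItem:
    description: str   # e.g. "Run an Agile awareness workshop" (hypothetical)
    phase: str         # one of ADKAR_PHASES
    increment: str     # the part of the organization this increment of change targets

@dataclass
class ChangeBacklog:
    increments: list                      # ordered organizational increments, e.g. ["Northeast", "Europe"]
    items: list = field(default_factory=list)

    def add(self, item: ChangeItem) -> None:
        self.items.append(item)

    def prioritized(self):
        """Order items incrementally by region first, then iteratively by ADKAR phase."""
        return sorted(
            self.items,
            key=lambda i: (self.increments.index(i.increment), ADKAR_PHASES.index(i.phase)),
        )

backlog = ChangeBacklog(increments=["Northeast", "Europe"])
backlog.add(ChangeItem("Run Agile awareness workshop", "Awareness", "Europe"))
backlog.add(ChangeItem("Coach leaders on sponsoring the change", "Desire", "Northeast"))
backlog.add(ChangeItem("Run Agile awareness workshop", "Awareness", "Northeast"))

for item in backlog.prioritized():
    print(item.increment, item.phase, "-", item.description)
```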


Two increments of change management iteration

Agile Change Patterns

The question still stands: how does the Agile team interface with, interact with, and thus benefit from the change agent(s)? I have seen two distinct patterns used with some success: Synchronized Agile Change Management and Embedded Agile Change Management.

Synchronized Agile Change Management

The first pattern I’ve observed, I would describe as Synchronized Agile Change Management. In this pattern, we create a separate team of change management specialists who operate like their own Scrum team and work off their own distinct backlog of change management activities, and their release plan is synchronized with the release plans of the set of Agile teams they support. The stories on their backlog would likely have relationships to specific user stories on the backlogs of each of the individual Scrum teams and during release planning we would ensure that the change management backlog is synchronized to the Scrum team backlogs. Essentially the change management team, being a downstream consumer of the work being produced by the Agile team, has to do their change management work often a sprint in arrears, before the software can be released to users. This pattern might be the easiest to use to start out, and it’s quite similar to the pattern that many organizations were using for QA in early attempts at scaled Agile. One criticism of this pattern is that it introduces a handoff from Scrum teams to a separate Change Management team, thus introducing a time lag from when the Scrum team does the work to when the work is “done done.” This could be introducing some waste in the system with the delay and handoff. In some organizations, however, due to having a scarce set of talented and credentialed change management professionals, this might be the only feasible option in the short term.

Embedded Agile Change Management

The second pattern I’ve observed I would describe as Embedded Agile Change Management. In this pattern, we strive for one truly cross-functional team, and we embed the necessary change management skills and disciplines directly into the Scrum team itself. At first it might be that we have a dedicated change management professional working on each Agile team, or less desirably they could be shared across teams, provided each team has a designated change management professional. Then, longer term, we might start cross-training Scrum team members in the different change management skills so that eventually the team is self-sufficient again (i.e., not relying on external stakeholders to deliver value). This is a more desirable pattern because it empowers the Scrum team with all the necessary change management skills and it does not introduce any further handoffs downstream, allowing the team to strive towards delivering truly shippable value at the end of the iteration. However, this is a future state that will probably take a much higher degree of investment and time to achieve.

Summary

Traditional change management frameworks were set up to be long-running, sequential processes to deal with large batches of change. Generally speaking the traditional Scrum team does not have any formal change management skills directly on the team. When organizations successfully transform to Agile, they will need to adapt these change management models to make them as Agile as their newly adopted Agile frameworks so that the organization can successfully manage the continuous flow of valuable changes that they are introducing into their environments and ecosystems. As we continue to improve our understanding and implementation of Agile change management, we will need to figure out how to provision Agile teams with the requisite change management skills either by integrating the existing change management professionals into the Agile flow of work, or by directly embedding those change management professionals onto the Agile teams.

 

This blog post focused on rethinking traditional change management to be iterative and incremental. This leaves other Agile values like transparency, collaboration, rapid feedback and empowerment. To learn more about these Agile values and about Agile change management in general, including proven practices SolutionsIQ consultants use in the field, join Dan Fuller for our upcoming webinar “Leading Agile Change: Proven Change Management Approaches for Agile Transformation”.

Register Now

The post Adapting Change Management Models to Support Agile Transformation appeared first on SolutionsIQ.

Categories: Companies

Article Review: Sometimes Waterfall is Needed to Become Agile, Scott Granieri


I like reading success stories. In fact, I wish there were more of them in the agile literature because a success story is “evidence” of doing something that works and it is not just an abstract idea or concept with potential. That’s one of the reasons I like Scott Granieri’s article featured on scrumalliance.org entitled, “Sometimes it just may take a waterfall to go agile.” In this article, Granieri describes a situation occurring at a corporate level to create software for a federal customer. He presents the background, the problem, the solution, the results and the lessons learned. I find this article to be well-written, thorough and engaging.

Here is an excerpt from his conclusion:

“The solution for creating a successful environment for Agile adoption lies within one of the principal tenets of the methodology itself: Inspect and adapt.” He also quotes Ken Schwaber, co-founder of Scrum, who Mishkin Berteig trained with more than a decade ago. But that can be something for you to discover when you read the article.

 


The post Article Review: Sometimes Waterfall is Needed to Become Agile, Scott Granieri appeared first on Agile Advice.

Categories: Blogs

Sonar ecosystem upgrades to Java 8

Sonar - Tue, 06/14/2016 - 17:55

With the release of SonarQube version 5.6, the entire Sonar ecosystem will drop support for Java 7. This means you won’t be able to run new versions of the SonarQube server, execute an analysis, or use SonarLint with a JVM < 8.

Why? Well, it’s been over two years since Java 8’s initial release, and a year since Oracle stopped supporting Java 7, so we figured it was time for us to stop too. Doing so allows us to simplify our development processes and begin using the spiffy new features in Java 8. Plus, performance is up to 20% better with Java 8!

Of course, we’ll still support running older versions of ecosystem products, e.g. SonarQube 4.5, with Java 7, and you’ll still be able to compile your project with a lower version of Java. You’ll just have to bump up the JVM version to run the analysis.

Categories: Open Source

PTC Launches PTC AgileWorx

Scrum Expert - Tue, 06/14/2016 - 17:00
PTC has announced the launch of its PTC AgileWorx solution, the industry’s first Agile solution designed to help product engineering teams improve innovation and time-to-market. Today’s smart, connected products are transforming how companies create and capture value. The speed with which organizations respond to these opportunities will increasingly determine their ability to outperform in the marketplace. Companies that successfully deploy agile practices can accelerate their innovation by up to 80 percent. Yet many organizations struggle to apply Agile principles to the realm of product engineering. Product engineering teams must collaborate with diverse teams of specialists, safeguard product quality and safety, and manage product lines over decades. PTC AgileWorx is specifically designed to meet the unique needs of manufacturers building complex and smart, connected products. It provides a central hub where engineering teams can visualize work in progress, prioritize activities, identify dependencies, and remove impediments. PTC AgileWorx enables manufacturers to respond faster to feedback from customers and other market disruptors. It helps product development organizations:
  • Unify the cross-discipline team – provides visibility into team activities and links to CAD, ALM, and PLM systems of record for more informed decision-making.
  • Safeguard quality and safety – enables teams to leverage and extend their existing quality and compliance frameworks.
  • Organize for product variants – supports component-based development and product line engineering to maximize reuse and reduce cost.
  • Connect to real world insight – supports the integration of IoT and other data streams into core product development feedback loops.
Categories: Communities

Agile Engineering for the Web

TV Agile - Tue, 06/14/2016 - 10:43
Test-driven development, refactoring, evolutionary design… these Agile engineering techniques have long been established for back-end code. But what about the front-end? For too many teams, it’s dismissed with a “JavaScript sucks!” and unreliable, brittle code. This session explores what it takes to bring the best of Agile development to front-end code: test-drive your JavaScript and […]
Categories: Blogs

Agile Metrics: Velocity is not the Goal

Scrum Expert - Tue, 06/14/2016 - 10:35
Velocity is one of the most common metrics used – and one of the most commonly misused – on Scrum and Agile projects. Velocity is simply a measurement of speed in a given direction, the rate at which a team is delivering toward a product release. As with a vehicle en route to a particular destination, increasing the speed may appear to ensure a timely arrival. However, that assumption is dangerous because it ignores the risks with higher speeds. And while it’s easy to increase a vehicle’s speed, where exactly is the accelerator on a software team? This presentation walks you through the Hawthorne Effect and Goodhart’s Law to explain why setting goals for velocity can actually hurt a project’s chances. Take a look at what can negatively impact velocity, ways to stabilize fluctuating velocity, and methods to improve velocity without the risks. Leave with a toolkit of additional metrics that, coupled with velocity, give a better view of the project’s overall health. Video producer: http://oredev.org/
Categories: Communities

Do You Have 5 Minutes? The True Cost of Context Switching

The dreaded question, “Do you have five minutes?” seems to be relatively harmless on the surface; who...

The post Do You Have 5 Minutes? The True Cost of Context Switching appeared first on Blog | LeanKit.

Categories: Companies

Version 7 Beta 2

IceScrum - Mon, 06/13/2016 - 19:49
A week ago, we were glad to publish the first Beta of the version that embodies the future of iceScrum: iceScrum Version 7 Beta. If you have not heard about it yet, you can read the blog post named “A bright future for iceScrum”. First, we would like to thank our early users for their…
Categories: Open Source

Strategy Deployment and Fitness for Purpose

AvailAgility - Karl Scotland - Mon, 06/13/2016 - 19:27

David Anderson defines fitness for purpose in terms of the “criteria under which our customers select our service”. Through this lens we can explore how Strategy Deployment can be used to improve fitness for purpose by having alignment and autonomy around what the criteria are and how to improve the service.

In the following presentation from 2014, David describes Neeta, a project manager and mother who represents two market segments for a pizza delivery organisation.

"Fitness for Purpose" – Resilience & Agility in Modern Business from David Anderson

As a project manager, Neeta wants to feed her team. She isn’t fussy about the toppings as long as the pizza is high quality, tasty and edible. Urgency and predictability are less important. As a mother, Neeta wants to feed her children. She is fussy about the toppings (or her children are), but quality is less important (because the children are less fussy about that). Urgency and predictability are more important. Thus fitness for purpose means different things to Neeta, depending on the market segment she is representing and the jobs to be done.

We can use this pizza delivery scenario to describe the X-Matrix model and show how the ideas behind fitness for purpose can be used with it.

Results

Results describe what we want to achieve by having fitness for purpose, or alternatively, they are the reasons we want to (and need) to improve fitness for purpose.

Given that this is a pizza delivery business, it’s probably reasonable to assume that the number of pizzas sold would be the simplest business result to describe. We could possibly refine that to number of orders, or number of customers. We might even want a particular number of return customers or repeat business to be successful. At the same time, operational costs would probably be important.

Strategies

Strategies describe the areas we want to focus on in order to improve fitness for purpose. They are the problems we need to solve which are stopping us from having fitness for purpose.

To identify strategies we might choose to target one of the market segments that Neeta represents, such as family or business. This could lead to strategies to focus on things like delivery capability, or menu range, or kitchen proficiency.

Outcomes

Outcomes describe what we would like to happen when we have achieved fitness for purpose. They are things that we want to see, hear, or which we can measure, which indicate that the strategies are working and which provide evidence that we are likely to deliver the results.

If our primary outcome is fitness for purpose, then we can use fitness for purpose scores, along with other related leading indicators such as delivery time, reliability, complaints, recommendations.

Tactics

Tactics describe the actions we take in order to improve fitness for purpose. They are the experiments we run in order to evolve towards successfully implementing the strategies, achieving the outcomes and ultimately delivering the results. Alternatively they may help us learn that our strategies need adjusting.

Given strategies to improve fitness for purpose based around market segments, we might try new forms of delivery, different menus or ingredient suppliers, or alternative cooking techniques.

Correlations

I hope this shows, using David’s pizza delivery example, how fitness for purpose provides a frame to view Strategy Deployment. The X-Matrix model can be used to tell a coherent story about how all these elements – results, strategies, outcomes and tactics – correlate with each other. Clarity of purpose, and what it means to be fit for purpose, enables alignment around the chosen strategies and desired outcomes, such that autonomy can be used to experiment with tactics.

Categories: Blogs

10 Lessons from a Long Running DDD Project – Part 1

Jimmy Bogard - Mon, 06/13/2016 - 18:14

Round about 7 years ago, I was part of a very large project which rooted its design and architecture around domain-driven design concepts. I’ve blogged a lot about that experience (and others), but one interesting aspect of the experience is we were afforded more or less a do-over, with a new system in a very similar domain. I presented this topic at NDC Oslo (recorded, I’ll post when available).

I had a lot of lessons learned from the code perspective, where things like AutoMapper, MediatR, Respawn and more came out of it. Feature folders, CQRS, conventional HTML with HtmlTags were used as well. But beyond just the code pieces were the broader architectural patterns that we more or less ignored in the first DDD system. We had a number of lessons learned, and quite a few were from decisions made very early in the project.

Lesson 1: Bounded contexts are a thing

Very early on in the first project, we laid out the personas for our application. This was also when Agile and Scrum were really starting to be used in the large, so we were all about using user stories, personas and the like.

We put all the personas on giant post-it notes on the wall. There was a problem. They didn’t fit. There were so many personas, we couldn’t look at all of them at once.

So we color-coded them and divided them up based on lines of communication, reporting, agency, or whatever else made sense.


Well, it turned out that those colors (just faked above) were perfect borders for bounded contexts. Also, it turns out that 72 personas for a single application is way, way too many.

Lesson 2: Ubiquitous language should be…ubiquitous

One of the side effects of cramming too many personas into one application is that we got to the point where some of the core domain objects had very generic names in order to have a name that everyone agreed upon.

We had a “Person” object, and everyone agreed what “person” meant. Unfortunately, this was only a name that the product owners agreed upon, no one else that would ever use the system would understand what that term meant. It was the lowest common denominator between all the different contexts, and in order to mean something to everyone, it could not contain behavior that applied to anyone.

When you have very generic names for core models that aren’t actually used by any domain expert, you have something worse than an anemic domain model – a generic domain model.

Lesson 3: Core domain needs consensus

We talked to various domain experts in many groups, and all had a very different perspective on what the core domain of the system was. Not what it should be, but what it was. For one group, it was the part that replaced a paper form, another it was the kids the system was intending to help, another it was bringing those kids to trial and another the outcome of those cases. Each has wildly different motivations and workflows, and even different metrics on which they are measured.

Beyond that, we had directly opposed motivations. While one group was focused on keeping kids out of jail, another was managing cases to put them in jail! With such different views, it was quite difficult to build a system that met the needs of both. Even to the point where the conduits to use were completely out of touch with the basic workflow of each group. Unsurprisingly, one group had to win, so the focus of the application was seen mostly through the lens of a single group.

Lesson 4: Ubiquitous language needs consensus

A slight variation on lesson 2, we had a core entity on our model where at least the name meant something to everyone in the working group. However, that something again varied wildly from group to group.

For one group, the term was in reference to a paper form filed. Another, something as part of a case. Another, an event with a specific legal outcome. And another, it was just something a kid had done wrong and we needed to move past. I’m simplifying and paraphrasing of course, but even in this system, a legal one, there were very explicit legal definitions about what things meant at certain times, and reporting requirements. Effectively we had created one master document that everyone went to to make changes. It wouldn’t work in the real world, and it was very difficult to work in ours.

Lesson 5: Structural patterns are the least important part of DDD

Early on we spent a *ton* of time on getting the design right of the DDD building blocks: entities, aggregates, value objects, repositories, services, and more. But of all the things that would lead to the success or failure of the project, or even just slowing us down/making us go faster, these patterns were by far the least important.

That’s not to say that they weren’t valuable, they just didn’t have a large contribution to the success of the project. For the vast majority of the domain, it only needed very dumb CRUD objects. For a dozen or so very particular cases, we needed highly behavioral, encapsulated domain objects. Optimizing your entire system for the complexity of 10% really doesn’t make much sense, which is why in subsequent systems we’ve moved towards a more CQRS model, where each command or query has complete control of how to model the work.

With commands and queries, we can use pretty much whatever system we want – from straight up SQL to event sourcing. In this system, because we focused on the patterns and layers, we pigeonholed ourselves into a singular pattern, system-wide.
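To make that contrast concrete: the projects described here were .NET-based (MediatR is mentioned earlier as the mediator library), but the shape of the CQRS approach is language-agnostic. The following is a minimal, hypothetical Python sketch of the idea that each command gets its own handler, which is free to choose its own modelling and persistence strategy; it is not the author’s actual code, and the command names are invented:

```python
from dataclasses import dataclass

# Each request type gets its own handler; each handler is free to use
# whatever persistence approach fits that one slice of the domain.

@dataclass
class UpdatePersonAddress:        # a simple CRUD-style command (hypothetical)
    person_id: int
    new_address: str

@dataclass
class ScheduleCourtHearing:       # a command backed by a richer domain model (hypothetical)
    case_id: int
    hearing_date: str

class UpdatePersonAddressHandler:
    def handle(self, command: UpdatePersonAddress) -> None:
        # Dumb CRUD is fine for this slice: a straight SQL update would do.
        print(f"UPDATE people SET address = {command.new_address!r} "
              f"WHERE id = {command.person_id}")

class ScheduleCourtHearingHandler:
    def handle(self, command: ScheduleCourtHearing) -> None:
        # This slice warrants a behavioral aggregate (or even event sourcing);
        # the handler, not a system-wide layer, makes that call.
        print(f"Loading case {command.case_id} aggregate, "
              f"applying HearingScheduled({command.hearing_date})")

# A tiny dispatcher standing in for a mediator library such as MediatR.
HANDLERS = {
    UpdatePersonAddress: UpdatePersonAddressHandler(),
    ScheduleCourtHearing: ScheduleCourtHearingHandler(),
}

def send(command):
    HANDLERS[type(command)].handle(command)

send(UpdatePersonAddress(person_id=42, new_address="1 Main St"))
send(ScheduleCourtHearing(case_id=7, hearing_date="2016-07-01"))
```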

Next up – lessons learned from the new system that offered us a do-over!


Categories: Blogs

The Case for and Against Estimates, Part 5

Johanna Rothman - Mon, 06/13/2016 - 18:01

If you’ve been following the conversation, I discussed in Part 1 how I like agile roadmaps and gross estimation and/or targets for projects and programs. In Part 2, I discussed when estimates might not be useful. In Part 3, I discussed how estimates can be useful. In Part 4, I discussed #noestimates.  Let me summarize my thinking and what I do here.

This series started because Marcus Blankenship and I wrote two articles: Stay Agile With Discovery, which is how to help your client see benefit and value early from a small project first, before you have to estimate the whole darn thing; and Use Demos to Build Trust, how to help your client see how they might benefit from agile in their projects, even if they want an upfront estimate of “all” the work.

Let me clarify my position: I am talking about agile projects. I am not talking about waterfall, iterative, or even incremental approaches. I have used all those approaches, and my position in these posts is about agile projects and programs.

In addition, I have a ton of experience in commercial, for-profit organizations. I have that experience in IT, Engineering, R&D. I have very little experience in non-profit or government work. Yes, I have some, but those clients are not the bulk of my consulting business. As with everything I write (or anyone else does), you need to take your context into account. I see wide project and program variation in each client, never mind among clients.

That said, in agile, we want to work according to the agile principles (look past the manifesto at the principles). How can we welcome change? How can we show value? How can we work with our customer?

Many people compare software to construction. I don’t buy it. Here’s a little story.

In my neighborhood, the gas utility is replacing gas mains. The project is supposed to take about three months or so. We received a letter in May, saying which streets they expected to work on and when. The reality is quite different.

They work on one street, and then have to go around the corner to another street. Why? Because the mains and lines (water, gas, electric) are not where the drawings said they would be. Instead of a nice grid, they go off at 45-degree angles, cross the street, come back, etc. Nothing is as the plans suggested it should be. During the day, I have no idea what streets will be open for me to drive on. The nice folks put everything back to some semblance of normal each night. They patch the roads they work on during each day.

And yet, they are on budget. Why? Because they accounted for the unknowns in the estimate. They padded the estimate enough so that the contractor would make money. They accounted for almost everything in their estimate. How could they do this? The company doing the work has seen these circumstances before. They knew the 50-year-old plans were wrong. They didn’t know how, but they’ve seen it all before, so they can manage their risks.

The learning potential in their work is small. They are not discovering new risks every day. Yes, they are working around local technical debt. They knew what to expect and they are managing their risks.

Here’s another story of a software project. This is also a three-to-four month project (order of magnitude estimate). The product hasn’t been touched in several years, much to the software team’s dismay. They have wanted to attack this project for a couple of years and they finally got the go-ahead.

Much has changed since they last touched this product. The build system, the automated testing system, the compiler—all of those tools have changed. The people doing the work have changed. The other products that interact with this product have changed.

The team is now working in an agile way. They deliver demonstrable product almost every day. They show the working product every week to senior management.

They are learning much more than they thought they would. When they created the estimate, they had assumptions about underlying services from other products. Well, some of those assumptions were not quite valid. They asked what was driving the project? They were told the date to release. Well, that changed to feature set. (See Estimating the Unknown, Part 1 for why that is a problem.)

They feel as if the project is a moving target. In some ways, it is. The changes arose partly because of what the team was able to demonstrate. The PO decided that because they could do those features over there and release those features earlier, they could reduce their Cost of Delay. Because they show value early and often, they are managing the moving target changes in an agile way. I think they will be able to settle down and work towards a target date once they get a few more features done and released.

Why is this team in such project turmoil? Here are some reasons:

  • Their assumptions about the product and its interactions were not correct. They had spent three days estimating “everything.” They knew enough to start. And, they uncovered more information as they started. I asked one of the team members if they could have estimated longer and learned more. He said, “Of course. It wasn’t worth more time to estimate. It was worth our time to deliver something useful and get some feedback. Sure, that feedback changed the order of the features, so we discovered some interesting things. But, getting the feedback was more useful than more estimation.” His words, not mine.
  • The tooling had changed, and the product had not changed to keep up with the tooling. The team had to reorganize how they built and tested just to get a working build before they built any features.
  • The technical debt accumulated in this product and across the products for the organization. Because the management had chosen projects by estimated duration in the past, they had not used CoD to understand the value of this project until now.

The team is taking one problem at a time, working that problem to resolution and going on to the next. They work in very small chunks. Will they make their estimate of 3-4 months? They are almost 3 months in. I don’t think so, and that’s okay. It’s okay because they are doing more work than they or their management envisioned when the project started. In this case, the feature set grew. It partly grew because the team discovered more work. It partly grew because the PO realized there was more value in other features not included in the original estimate.

In agile, the PO can take advantage of learning about features of more value. This PO works with the team every day. (The team works in kanban, demos in iterations.)

The more often we deliver value, the more often we can reflect on what the next chunk of value should be. You don’t have to work in kanban. This team likes to do so.

The kinds of learning this team gains on the software project are different from what the gas main people are learning in my neighborhood. Yes, the tools have changed since the gas mains were first installed. The scope of those changes is much less than even the tool changes for the software project.

The gas main project does “finish” something small every day, in the sense that the roads are safe for us to drive on when they go home at night. However, the patches are just that—patches for the road, not real paving. The software team finishes demonstrable value every day. If they had to stop the project at any time, they could. The software team is finishing. (To be fair to the gas people, it probably doesn’t make monetary sense to pave a little every day to done. And, we can get to done, totally done, in software.)

The software team didn’t pad the estimate. They said, “It’s possible to be done in 3 months. It’s more likely to be done in 4 months. At the outside, we think it will take 5 months.” And, here’s what’s interesting. If they had completed just what was in their original estimate, they might well be done by now. And, because it’s software, and because they deliver something almost every day, everyone—the PO, management, the team—see where there is more value and less value.

The software team’s roadmap has changed. The product vision hasn’t changed. Their release criteria have changed a little, but not a lot. They have changed what features they finish and the order in which they finish them. That’s because people see the product every day.

Because the software team, the PO and the management are learning every day, they can make the software product more valuable every day. The gas main people don’t make the project more valuable every day.

Is estimation right for you? Some estimation is almost always a good decision. If nothing else, the act of saying, “What will it take us to do this thing?” helps you see the risks and if/how you want to decompose that thing into smaller chunks.

Should you use Cost of Delay in making decisions about what feature to do first and what project to do first? I like it because it’s a measure of value, not cost. When I started to think about value, I made different decisions. Did I still want a gross estimate? Sure. I managed projects and ran departments where we delivered by feature. I had a ton of flexibility about what to do next.

Are #noestimates right for you? It depends on what your organization needs. I don’t need estimates in my daily work. If you work small and deliver value every day and have transparency for what you’re doing, maybe you don’t need them either.

Estimates are difficult. I find estimation useful, the estimates not so much. I find that looking at the cost and risks is one way to look at a project. Looking at value is another way.

I like asking if what my management wants is commitment or resilience. See When You Need to Commit. Many organizations want to use agile for resilience, and then they ask for long commitments. It’s worthwhile asking questions to see what your organization wants.

Here are my books that deal with managing projects, estimation, Cost of Delay and program management:

For me, estimates are not inherently good or bad. They are more or less useful. For agile projects, I don’t see the point of doing a lot of estimation. Why? Because we can change the backlog and finish different work. Do I like doing some estimation to understand the risks? Yes.

I don’t use cost as a way to evaluate projects in the project portfolio. I prefer to look at some form of value rather than only use an estimate. For agile projects, this works, because we can see demonstrable product soon. We can change the project portfolio once we have seen delivered value.

Remember, my context is not yours. These ideas have worked for me on at least three projects. They might not work for you. On the other hand, maybe there is something you can use from here in your next agile project or program.

Please do ask more questions in the comments. I might not do a post in a while. I have other writing to finish and these posts are way too long!

Categories: Blogs

TriAgile, Raleigh, USA, June 29-30 2016

Scrum Expert - Mon, 06/13/2016 - 17:09
The TriAgile Conference is a one-day event for Agile practitioners in the Research Triangle Park of North Carolina. This conference features over 30 sessions on Agile topics like Leadership, Value, Culture, Technical Practices, Scaling and more. Workshops will be available on June 29th preceding the TriAgile Conference. In the agenda of TriAgile Conference you can find topics like “Quotes from the trenches about Agile and Scrum”, “An Empirical Analysis of Agile Methodologies and Firm Financial Performance”, “Syncing Up Agile Testing – Automation from the Inside Out”, ” Overcoming Resistance – How To Engage Developers In Agile Adoption”, “The Agile/Project Management/DevOps Leveraged Triangle”, “The Relationship Between Agility and Expertise”, “Getting Value from Agile Feedback Systems: Every Day, Every Sprint and Every Release”. Web site: http://triagile.com/ Location for the TriAgile Conference: McKimmon Conference and Training Center at NC State, 1101 Gorman St, Raleigh, NC 27606, USA
Categories: Communities

Customer-Centric to the Core

At LeanKit, Lean isn’t just the product we sell. It’s in our name because we passionately believed...

The post Customer-Centric to the Core appeared first on Blog | LeanKit.

Categories: Companies
