

Did the math on my contribution to global warming

Henrik Kniberg's blog - Tue, 12/13/2016 - 00:49

I was curious about how many tons of carbon dioxide my family pumps into the atmosphere (i.e. contributes to global warming). I looked at the most direct variables: flying, driving, and home electricity. There are obviously more variables to look at (like beef!), but I’m starting with these three, as the data is readily available and I gotta start somewhere.

Result (updated):

  • Flying = 14.6 tons per year
  • Driving = 4.1 tons per year
  • Electricity = 0.5 tons per year

So, about 19 tons of CO2 per year. Damn! Sorry about that, earth and future generations. The good news is that I now know how to reduce it by A LOT (like 5 times less)!

CO2e emission before and after

Here’s what I learned:

  1. I thought electricity consumption would be an important thing to optimize. But it’s NOTHING compared to driving and flying (at least here in Sweden)! No more bad conscience for forgetting to turn off lights and computers.
  2. BIG aha: Buying a plugin hybrid car will reduce our carbon footprint by at least 3.5 tons per year! Because our driving pattern is almost 100% local (carting kids around to school & activities), we’ll almost never need to burn gasoline. Good, cuz I can’t find a fully electric car that fits our big family comfortably. And our current car is breaking down anyway.
  3. A big part of my flying footprint has been just going back and forth to Billund in Denmark every month or two (working with Lego). But actually, it would take only 9 hours for me to get there by train, and train travel has basically zero carbon footprint. So if I continue travelling to Billund I’ll probably do it mostly by train. First class, working along the way. Train = 3 ton reduction per year!
  4. Biofuel is the only effective way of reducing flight emissions (other than not flying, of course). Biofuel can reduce aviation CO2 emissions by about 80%. That compensation will cost me about 400 kr per flight hour. Most of my flights are well-paid business trips to do conference keynotes, so I can definitely afford to pay that. Will do that for all flights from now on. Biofuel compensation = 9 ton reduction per year! I asked them to invoice me SEK 26,000 today, to cover this year.
  5. I was surprised to learn that the electricity I use is clean (from CO2 perspective). 54% hydro, 45% nuclear, 1% wind. Sweden in general has mostly clean electricity.
  6. Despite (5), I’m exploring options to install solar cells on our property. It might not significantly reduce my carbon footprint, but I see it as more of a long-term thing. It’s an investment that will hopefully pay off in 10 years or so, it’s a way of supporting clean energy in general, and I will learn things along the way. About 10% of Sweden’s total energy is imported fossil fuel (roughly – hard to find consistent data about it). The more people who use solar energy at home, the less they draw from the grid, the less dirty electricity Sweden needs to import, and the more clean electricity Sweden can export. Haven’t done the math on that yet though, so no numbers.

These improvements amount to about 16 tons less CO2e emissions per year, or roughly a 5x reduction!
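A quick sanity check of the arithmetic in this post (the per-category tonnages are the post’s own figures; everything else is just addition):

```python
# Current yearly CO2e footprint in tons, per category (the post's figures)
current = {"flying": 14.6, "driving": 4.1, "electricity": 0.5}
total_now = sum(current.values())        # 19.2, reported as "19 tons"

# Estimated yearly reductions: plug-in hybrid, train to Billund, biofuel offsets
reductions = {"hybrid_car": 3.5, "train": 3.0, "biofuel": 9.0}
total_cut = sum(reductions.values())     # 15.5, reported as "16 tons"

# The 2017 goal
goal = {"flying": 3.0, "driving": 1.0, "electricity": 0.5}
total_goal = sum(goal.values())          # 4.5 tons
```

Note that 19.2 / 4.5 is closer to 4x than 5x; the “5 times less” presumably compares against the post-reduction total of about 3.7 tons (19.2 minus 15.5).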

So here’s my goal for 2017:

  • Flying = max 3 tons per year
  • Driving = max 1 ton per year
  • Electricity = max 0.5 tons per year

Total: 4.5 tons of CO2, instead of 19! Still not good, but definitely better.

I’m definitely not an expert on these things, but it took just an evening of googling around to learn how to cut my carbon footprint 4-5 times, without any major lifestyle change. Pretty cool!

I made sure to be picky about the sources of data. No reporters, tabloids, or social media bubbles! Checked multiple sources for everything, and fiddled around in a spreadsheet to double-check the math. But do let me know if I’ve got anything badly wrong (and if so, please include references).

Here is the spreadsheet if you are interested. I listed most sources there too.


Categories: Blogs

Dealing with Duplication in MediatR Handlers

Jimmy Bogard - Mon, 12/12/2016 - 22:37

We’ve been using MediatR (or some manifestation of it) for a number of years now, and one issue that comes up frequently is “how do I deal with duplication”. In a traditional DDD n-tier architecture, you had:

  • Controller
  • Service
  • Repository
  • Domain

It was rather easy to share logic in a service class for business logic, or a repository for data logic (queries, etc.). When it comes to building apps using CQRS and MediatR, we remove these layer types (Service and Repository) in favor of request/response pairs that line up 1-to-1 with distinct external requests. It’s a variation of the Ports and Adapters pattern from Hexagonal Architecture.

A recent exercise with a client, where we collapsed a large project structure and replaced the layers with commands, queries, and MediatR handlers, brought this issue to the forefront. Our approach for tackling this duplication will depend heavily on what the handler is actually doing. As we saw in the previous post on CQRS/MediatR implementation patterns, our handlers can do whatever we like: stored procedures, event sourcing, anything. Typically my handlers fall in the “procedural C# code” category. I have domain entities, but my handler is just dumb procedural logic.

Starting simple

Regardless of my refactoring approach, I ALWAYS start with the simplest handler that could possibly work. This is the “green” step in TDD’s “Red, Green, Refactor” cycle. Create a handler test, then get the test to pass by the simplest means possible. This means the pattern I choose is a Transaction Script: procedural code, the simplest thing possible.

Once I have my handler written and my test passes, then the real fun begins, the Refactor step!

WARNING: Do not skip the refactoring step

At this point, I start with just my handler and the code smells it exhibits. Code smells, as a reminder, are an indication that the code COULD have a problem and MIGHT need refactoring; each smell is worth a deliberate decision to refactor (or not). Typically I won’t hit duplication smells at this point; it’ll be just standard code smells like:

  • Large Class
  • Long Method

Those are pretty straightforward refactorings; you can use:

  • Extract Class
  • Extract Subclass
  • Extract Interface
  • Extract Method
  • Replace Method with Method Object
  • Compose Method

I generally start with these to make my handler easier to understand. Past that, I start looking at more behavioral smells:

  • Combinatorial Explosion
  • Conditional Complexity
  • Feature Envy
  • Inappropriate Intimacy
  • and finally, Duplicated Code

Because I’m freed of any sort of layer objects, I can choose whatever refactoring makes most sense.

Dealing with Duplication

If I’m in a DDD state of mind, my refactorings in my handlers tend to be as I would have done for years, as I laid out in my (still relevant) blog post on strengthening your domain. But that doesn’t really address duplication.

In my handlers, duplication tends to come in a couple of flavors:

  • Behavioral duplication
  • Data access duplication

Basically, the code duplicated either accesses a DbContext or other ORM thing, or it doesn’t. One approach I’ve seen for either duplication is to have common query/command handlers, so that my handler calls MediatR or some other handler.

I’m not a fan of this approach – it gets quite confusing. Instead, I want MediatR to serve as the outermost window into the actual domain-specific behavior in my application:

Excluding sub-handlers or delegating handlers, where should my logic go? Several options are now available to me:

  • Its own class (named appropriately)
  • Domain service (as was its original purpose in the DDD book)
  • Base handler class
  • Extension method
  • Method on my DbContext
  • Method on my aggregate root/entity

As to which one is most appropriate, it naturally depends on what the duplicated code is actually doing. Common query? A method on the DbContext, or an extension method on IQueryable or DbSet. Domain behavior? A method on your domain model, or perhaps a domain service. There are a lot of options here; it really just depends on what’s duplicated and where those duplications lie. If the duplication is within a feature folder, a base handler class for that feature folder would be a good idea.
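As a concrete illustration of the first option (“its own class”), here is a minimal sketch, in Python rather than C# and with entirely hypothetical names, of pulling a duplicated query out of two handlers into one appropriately named class. In C#, the same logic could equally live in an extension method on IQueryable or DbSet:

```python
# Hypothetical duplicated query, extracted into its own named class:
# the one place that knows how to find active orders for a customer.
class ActiveOrdersQuery:
    def __call__(self, db, customer_id):
        return [o for o in db["orders"]
                if o["customer_id"] == customer_id and o["active"]]

class GetOrderSummaryHandler:
    def __init__(self, db, active_orders=ActiveOrdersQuery()):
        self.db, self.active_orders = db, active_orders

    def handle(self, request):
        orders = self.active_orders(self.db, request["customer_id"])
        return {"count": len(orders)}

class ExportOrdersHandler:
    def __init__(self, db, active_orders=ActiveOrdersQuery()):
        self.db, self.active_orders = db, active_orders

    def handle(self, request):
        orders = self.active_orders(self.db, request["customer_id"])
        return [o["id"] for o in orders]

# Both handlers now share the query instead of duplicating it.
db = {"orders": [
    {"id": 1, "customer_id": 7, "active": True},
    {"id": 2, "customer_id": 7, "active": False},
]}
summary = GetOrderSummaryHandler(db).handle({"customer_id": 7})
ids = ExportOrdersHandler(db).handle({"customer_id": 7})
```

Each handler stays a thin, procedural script; only the shared query gains a name and a home.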

In the end, I don’t really prefer any one approach over another. There are tradeoffs with any approach, and I try as much as possible to let the nature of the duplication guide me to the correct solution.


Changing gears in how we think about Agile Transformation

Leading Agile - Mike Cottmeyer - Mon, 12/12/2016 - 15:00

Changing the way we think about Agile Transformation

Addressing culture, practices, and systems is essential to successfully navigating an Agile transformation. All three work together, but where you begin the transformation process is the difference between success and frustration.

agile transformation infographic

Get a high res version of the infographic to share. Download

The post Changing gears in how we think about Agile Transformation appeared first on LeadingAgile.


Making the Quantum Leap from Node.js to Microservices

Derick Bailey - new ThoughtStream - Mon, 12/12/2016 - 14:30

I recently received a question, via email, about how to get up and running with microservices in Node.js.

The person asking was interested in my Microservices w/ RabbitMQ Guide over at WatchMeCode, but was having a hard time wrapping their head around some of the aspects of microservices, discovery, shared resources, etc.

From the person asking:

I do have a question about something. Curious if it is covered in your rabbitMQ/Services section.


How do you make that leap to node micro-services. That seems so foreign to me. I have no idea what they are… and how to set up multiple micro-services that may need to “share resources”.


Like, having a node-sdk for your application etc..  then each of your micro-services registers with it… then other microservices can ‘request’ those other micro services.. by bringing in the sdk, then just calling of the registrations you want. How is that even possible? Is that even the right way to do it?

Your thoughts?

Honestly, there’s not much of a secret or trick to this.

Most of the hype and hand waving around microservice SDKs, service discovery, etc. is just that – hype and hand waving.

It makes people look good to talk about microservices and the surrounding tools, SDKs and services, because it’s “big” or “at scale”.

Microservices are a mostly simple thing.

They typically don’t need anything special, or complex.

And chances are, you already know how to build microservices.


But our industry is so fixated on companies like Netflix, AWS, Heroku, Docker, etc. that we can’t see past the hype of “web scale” and Docker (which is a great tool – I love it, but it’s way overhyped).

When it comes down to doing the work, the idea of a “microservice” is just a renaming of existing concepts – concepts that have been around for a very long time.

Call it “Service Oriented Architecture” or whatever you want, the purpose is to keep an individual service small and focused, allowing it to be used by larger systems.

A microservice is small, focused.

It will do one thing, and do it well.


If you’ve read much about the unix philosophy, this is just an extension of that idea.

A single unix / linux command line tool will do one thing, taking text as input and producing text as output. A microservice will do one thing, typically taking a message as input and producing a message as output.

The exact format and protocol of those messages can vary greatly – HTTP, SOAP, AMQP (rabbitmq) and other protocols, with JSON, XML or other common (or proprietary) data formats.
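Stripped of protocol details, that “message in, message out” shape is tiny. This toy sketch (names and fields are mine, not from any particular framework) treats a microservice as one focused transformation over JSON messages:

```python
import json

# A microservice reduced to its essence: take one message in,
# do one job well, produce one message out.
def price_quote_service(message: str) -> str:
    request = json.loads(message)
    total = request["quantity"] * request["unit_price"]
    return json.dumps({"order_id": request["order_id"], "total": total})

# In production the message would arrive over HTTP or AMQP;
# the transformation itself is the same either way.
reply = price_quote_service(
    json.dumps({"order_id": 42, "quantity": 3, "unit_price": 9.5}))
```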

There’s nothing new or different, here. These are tools and technologies that you’ve been using for years, as a developer.

When it comes to “shared resources”, there are a number of different aspects to consider.

But once again, it’s usually easier than most people want to admit.

Do you have a database server that multiple apps use? That’s a shared resource, already.

Did you build a back-end service that handles all of your authentication needs, and it’s just an HTTP call away? That’s a shared resource.

The notion of shared resources goes well beyond the infrastructure and plumbing, though. And frankly, a database is a terrible integration layer. 

But when it comes down to it, sharing resources amongst services is once again, just passing messages around like we have always done.

Where it gets interesting / complicated, is the point where we start talking about scaling services and making them discoverable.

There are a lot of very fancy “secret management” and “service discovery” systems around.

Again, this isn’t a new problem. We’ve had “service discovery” in one form or another for many, many years now.


Remember WSDL? WCF-Discovery? Aspects of CORBA, DCOM, and countless other distributed systems architectures are all built around the idea of service discovery.

These tools make scalable microservices much easier – and there are a lot of new players in this realm, taking advantage of the current technology trends. 

Things like consul or Eureka – and plenty of others – will help your applications “discover” the services they need.

There’s no magic in this, other than the “discovery service” providing the correct connection string for you (basically – it’s a bit more complex than that).
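That “correct connection string” idea can be sketched in a few lines. This toy registry (service names and addresses are made up) is what tools like Consul or Eureka provide industrial-strength versions of:

```python
import random

# A toy service registry: name -> known-good connection strings.
# Real discovery services add health checks, TTLs, and replication.
registry = {
    "billing": ["amqp://10.0.0.5:5672", "amqp://10.0.0.6:5672"],
    "email": ["amqp://10.0.0.9:5672"],
}

def discover(service_name: str) -> str:
    # Pick any registered instance (naive client-side load balancing).
    return random.choice(registry[service_name])

conn = discover("email")   # only one instance registered for "email"
```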

There are also “secret stores” like Vault and Torus that centralize the storage and retrieval of authentication / authorization credentials for services.

Instead of using JSON documents or environment variables to store API keys for a service, you connect to the “secret store” and ask for the API keys that you need.

These are great tools to use when you need to centrally manage services and API keys.

Most of the time, you won’t need these things.


I run a few systems built with a microservice architecture, in a very large healthcare network. These are critical systems that run nightly batch processes for the financial data warehouse.

I don’t bother with these large scale system architectures and scalability problems because I don’t need to, at this point.

I point my web app, my individual services, my scheduling system, and everything else at a known RabbitMQ and MongoDB instance, and that’s it.

I use JSON configuration files to store connection strings, and I let RabbitMQ be the central hub of communication for me.

It may not be “web scale” and won’t win me any awards, but I sleep well at night, knowing my system works.

The examples I show in the Microservices w/ RabbitMQ Guide are what I use in my production apps.

I built a simple Express web app that sends a message across RabbitMQ.


I built a simple Node.js “back-end” service that receives the message from RabbitMQ, and uses the data in the message to run some other process.

Sometimes there is a return message, sometimes there’s an update to the database.

Sometimes the database in question is “shared” across multiple apps (though I try to avoid this).

And to run all of these services, I use simple tools like forever and pm2.

I don’t get into fancy “orchestration” and “service discovery” because I don’t need it. I just need to make sure my “node web/bin/www” command line is run when my server reboots, or when the node process crashes.

Building a system with a microservice architecture is not a quantum leap.

It’s just building small apps that do something useful, and coordinating them with HTTP, AMQP, etc.

The “trick” is figuring out which parts of your apps can and should be put into a separate node executable process.

That knowledge only comes from experience… a.k.a trial and error.

For example, stop sending email from your web app. Instead, push a message across RabbitMQ and have another node process read that message and send the actual email.

Find simple places where the web app can stop being responsible for “simple” things, like sending email, and you’ll be well on your way to building a proper microservices architecture.
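The email example can be sketched with an in-process queue standing in for RabbitMQ. In production the queue is a broker and the worker is a separate node process; the names here are hypothetical, but the shape of the decoupling is the point:

```python
import queue

# Stand-in for a RabbitMQ queue; in production this is a broker,
# and the worker below runs as its own process.
email_queue = queue.Queue()

def web_signup_handler(user_email: str) -> dict:
    # The web app's only job: publish a message and return immediately.
    email_queue.put({"type": "welcome_email", "to": user_email})
    return {"status": "ok"}

def email_worker(sent_log: list) -> None:
    # The back-end service drains messages and does the slow work.
    while not email_queue.empty():
        msg = email_queue.get()
        sent_log.append("sent {} to {}".format(msg["type"], msg["to"]))

sent = []
web_signup_handler("user@example.com")
email_worker(sent)
```

The web request returns as soon as the message is queued; whether the email takes 50 ms or 5 seconds to send no longer affects the user.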

Getting Started Is As Simple As Learning RabbitMQ

A part of WatchMeCode, the Guide to Building Microservices w/ Node.js and RabbitMQ will get you up and running fast.


You’ll start with an introduction to RabbitMQ, and how to install and configure it. You’ll learn the basics of messaging with RabbitMQ, and see the most common application design patterns for messaging.

You’ll learn how to integrate RabbitMQ into your own services, your web application, and more. And you’ll do it with simplicity and ease, reducing lines of code while increasing features and capabilities in your applications.

From there, this guide will show you how to build a simple, production-ready microservice using Express as a front-end API and RabbitMQ as the communication protocol to a back-end service.

It’s code that comes straight from my production needs, used by thousands of developers around the world as part of my WatchMeCode services infrastructure.

There’s no better way to get started, and no better resource than real-world code that exemplifies simplicity, running in a production environment.

The post Making the Quantum Leap from Node.js to Microservices appeared first on


The Simple Leader: Gratitude

Evolving Excellence - Sun, 12/11/2016 - 11:40

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen

Be thankful for what you have; you’ll end up having more.
If you concentrate on what you don’t have, you will never, ever have enough.
– Oprah Winfrey

Oftentimes we become so focused on fixing problems and resolving issues that our entire sense of reality shifts. We begin to live in a bubble that encompasses the negative and blocks the positive. Because they demand our attention, the negative aspects of work and life consume a disproportionate amount of our thinking, and eventually distort our perceived reality.

You can re-center your perspective by grounding yourself in thanks for what is good with you or your team. What are you thankful for? Think about your health, your relationships, your business success. There will be more to be thankful for than you realize. Use a few minutes in the shower each morning, the first few minutes of your meditation, or even the first few minutes of each staff meeting to identify specific people and situations to be thankful for. Try to say thanks to at least one person each day, meaningfully and mindfully. Even better, write someone a thank-you note by hand. Make it a self-sustaining habit, a routine.

I have much to be thankful for: my parents teaching me the joy of learning, which eventually led me to discover Lean and Zen; my wife teaching me how to be more compassionate, which has completely changed my perspective on life; and business partners and associates that have put up with some of my wild ideas.

Reflecting on gratitude at the beginning and end of each day creates calm bookends to what can be chaos for me. As problem solvers, we are naturally predisposed to focus on the negative, taking the positive for granted to the extent that we often become oblivious to just how much positive there is in our lives. Intentionally focusing on gratitude brings that perspective back to reality. Expressing gratitude in daily life, complimenting and helping others, or just smiling, reinforces the power of being thankful. Intentionally finding gratitude every day has changed my perspective on life more than any other personal or professional leadership habit. I’ve discovered I have a lot to be thankful for, which helps me be more generous, sympathetic, and empathetic.


Links for 2016-12-10

Zachariah Young - Sun, 12/11/2016 - 10:00

Announcement: We are Hiring a Training Sales Person

Learn more about transforming people, process and culture with the Real Agility Program

We are looking for a highly-motivated person to help us take our training business to the next level! This position is focused on sales, but includes other business development activities. The successful candidate for our training sales position will help us in several areas of growth including:

  • direct sales of our existing training offerings
  • expand our training loyalty program
  • launch and expand new training offerings
  • expand the market to new locations outside the GTA: Vancouver, Montreal, Ottawa, etc.
  • expand our partner/reseller network

Please check out the full job posting for a Training Sales Person here.  You can send that link to others who might be interested!


Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Announcement: We are Hiring a Training Sales Person appeared first on Agile Advice.


Cost Accounting is a Problem for Agile (and Knowledge Work)

Johanna Rothman - Fri, 12/09/2016 - 16:30

The more I work with project portfolio teams and program managers, the more I understand one thing: cost accounting makes little sense in the small for agile, and maybe for all knowledge work. I should say that I often see cost accounting in the form of activity-based costing, where each function contributes some of the cost of the thing you’re producing.

Here’s why. Cost accounting is about breaking down a thing into its component parts, costing all of them, and then adding up the costs as you build it. Cost accounting makes sense for manufacturing, where you build the same thing over and over. Taking costs out of components makes sense. Taking costs out of the systems you use to create the product makes sense, too.

Cost accounting makes some sense for construction because the value is in the finished road or building. Yes, there is certainly knowledge work in parts of construction, but the value is in the final finished deliverable.

Software is neither construction nor manufacturing. Software is learning, where we deliver learning as pieces of finished product. When we have finished learning enough for now, we stop working on the deliverable.

Agile allows us to deliver pieces of value as we proceed. Other life cycles, such as incremental life cycles or even releasing your product in short waterfalls, also allow us to see value as we proceed, before the whole darn thing is “done.”

We might need to know about costs as we proceed. That’s why we can calculate the run rate for a team and see how our feature throughput is working. If we start looking at feature throughput and the cost per feature, we can put pressure on ourselves to reduce the size of each feature. That would reduce the cost per feature. Smaller features allow us to learn faster and see value faster.
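The run-rate idea reduces to simple division; with made-up numbers:

```python
# Run rate: what the team costs per unit time (hypothetical figures).
team_cost_per_week = 20_000     # salaries + overhead
features_per_week = 4           # observed feature throughput

cost_per_feature = team_cost_per_week / features_per_week   # 5,000

# Halving feature size and doubling throughput halves the cost per
# feature (and, more importantly, halves the time until we learn something).
smaller_features_per_week = 8
new_cost_per_feature = team_cost_per_week / smaller_features_per_week  # 2,500
```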

Cost accounting has a big problem. It does not account for Cost of Delay, which is a huge cost in many organizations. (See Diving for Hidden Treasures: Uncovering the Cost of Delay in Your Project Portfolio to read all about Cost of Delay.)

This is one of the reasons I like feature sizes of one day or less. You count features for throughput and use run rate as a way to estimate cost.

I don’t know enough about accounting to say more. As I learn with my clients and especially learn about the pressures on them, I’ll post more. I do know this: the more we talk about the cost of a feature without talking about its value, the less we understand about how our software can help us. The more we talk about velocity instead of feature throughput, the less we know about what it really costs for us to develop our software.

Cost accounting is about surrogate measures (earned value, the cost of tasks, etc.) instead of valuing the whole. Agile gives us a way to see and use value on a daily basis. It’s opposed to the way cost accounting works. Cost accounting is a problem. Not insurmountable, but a problem nevertheless.

There are alternatives to cost accounting: throughput accounting and lean accounting. To use those ideas, you need to enlist the finance people in an agile transformation. As always, it depends on your goals.


Scrum Immersion workshop at GameStop - Case Study

Agile Complexification Inverter - Thu, 12/08/2016 - 21:10
Here’s an overview of a Scrum Immersion workshop done at GameStop this month. A case study example.

Normally these workshops start with the leadership (the stakeholders or shareholders) which have a vision for a product (or project). This time we skipped this activity.

The purpose of the Workshop is to ensure alignment between the leadership team and the Agile Coaches with regards to the upcoming scrum workshop for the team(s). Set expectations for a transition from current (ad-hoc) practices to Scrum. Explain and educate on the role of the Product Owner.

Expected Outcomes:
  • Create a transition plan/schedule
  • Set realistic expectations for transition and next release
  • Overview of Scrum & leadership in an Agile environment
  • Identify a Scrum Product Owner – review role expectations
  • Alignment on Project/Program purpose or vision
  • Release goal (within context of Project/Program & Scrum transition)

Once we have alignment on the Product Owner role and the Project Vision we typically do a second workshop for the PO to elaborate the Product Vision into a Backlog. This time we skipped this activity.

The purpose of the Workshop is to educate the Product Owner (one person) and prepare a product backlog for the scrum immersion workshop. Also include the various consultants, SMEs, BAs, developers, etc. in the backlog grooming process.

Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Overview of Scrum & Product Owner role (and how the team supports this role)
  • Set PO role responsibilities and expectations
  • Alignment of Release goal (within context of Project/Program & Scrum transition)
  • Product Backlog ordered (prioritized) for the first 2 sprints
  • Agreement to Scrum cadence for planning meetings and grooming backlog and sprint review meetings

Once we have a PO engaged and we have a Product Backlog it is time to launch the team with a workshop - this activity typically requires from 2 to 5 days. This is the activity we did at GameStop this week.
The primary purpose of the workshop is to teach just enough of the Scrum process framework and the Agile mindset to get the team functioning as a Scrum team and working on the product backlog immediately after the workshop ends (begin Sprint One).

Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Basic mechanics of Scrum process framework
  • Understanding of additional engineering practices required to be an effective Scrum team
  • A groomed / refined product backlog for 1 – 3 iterations
  • A backlog that is estimated for 1 – 3 iterations
  • A Release plan, and expectations of its fidelity – plans to re-plan
  • Ability to start the very next day with Sprint Planning

Images from the workshop

The team brainstormed and then prioritized the objectives and activities of the workshop.

Purpose and Objectives of the Workshop

The team then prioritized the Meta backlog (a list of both work items and learning items and activities) for the workshop.

Meta Backlog of workshop teams - ordered by participants
Possible PBI for Next Meta Sprint
Possible PBI for Later Sprints
Possible PBI for Some Day
Possible PBI for Another Month or Never
A few examples of work products (outcomes) from the workshop.

Affinity grouping of Persona for the user role in stories
Project Success Sliders activity
Team Roster (# of teams person is on)
A few team members working hard
Three stories written during elaboration activity
A few stories after Affinity Estimation
Release Planning: using the concept of deriving duration from the estimated effort. We made some assumptions about the business’s desired outcome: to finish the complete product backlog by a fixed date.

The 1st iteration of a Release Plan

That didn’t feel good to the team, so we tried a different approach: fix the scope and cost, but allow a variable timeframe.

The 2nd iteration of a Release Plan

That didn’t feel good to the PO, so we tried again. This time we fixed the cost and time, but varied the features, and broke the product backlog into milestones of releasable, valuable software.

The 3rd iteration of a Release Plan

This initial release plan feels better to both the team and the PO, so we start here. Ready for sprint planning tomorrow.
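The arithmetic behind these plans is simple; a sketch with hypothetical numbers:

```python
import math

# Plan 1: fix scope, derive duration from estimated effort.
backlog_points = 180    # total estimated backlog (hypothetical)
velocity = 25           # points per sprint (hypothetical)
sprints_needed = math.ceil(backlog_points / velocity)   # 8 sprints

# Plan 3: fix cost and time, vary scope; the first milestone is
# whatever releasable slice fits in the fixed number of sprints.
fixed_sprints = 5
milestone_points = fixed_sprints * velocity             # 125 of 180 points
```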


Announcement: Additional Writing Workshop

Johanna Rothman - Thu, 12/08/2016 - 19:47

I have enough people in the Writing Workshop 1: Write Non-Fiction to Enhance Your Business and Reputation to add a second section. You are right for this workshop if:

  • You are thinking about writing more
  • You want to improve your writing
  • You want to develop a regular habit of writing

If a blank piece of paper scares you, this is the right workshop for you.

If you are an experienced writer and want to bring your skills to the next level, you want Writing Workshop 2: Secrets of Successful Non-Fiction Writers.

If you’re not sure which workshop is right for you, email me. I can help you decide.

Oh, and if you’re a product owner, please do consider my Practical Product Owner workshop. I still have room in that workshop.

The early bird registration ends December 16, 2016. If I fill both sections earlier, I will stop registration then.


Why Visual Management Techniques are so Powerful

Agile Complexification Inverter - Wed, 12/07/2016 - 23:58

How does the brain process visual cues from the environment and synthesize meaning about an ever-changing landscape? Tom Wujec explains the creation of mental models and why Autodesk invests in visual management techniques to plan their strategic roadmaps.

Also, in one of Tom Wujec’s talks, How to Make Toast, he explains another important point of visual management: systems thinking and group work.

Don't worry... the mind will do all the work.  It will fill in the missing details, and abstract the patterns into the concept.  Here's an exercise, Squiggle Birds by David Gray, to experience this.

See Also:
Your Brain on Scrum - Michael de la Maza on InfoQ

Visual Management Blog

Visual Thinking - Wikipedia

David Gray on Visual Thinking

Ultimate Wallboard Challenge 2010 - time-lapse of Vodafone Web Team's board

iPad Interactive Whiteboard Remote

Multitasking: This is your brain on Media - Infographic by Column Five Media.

Want to Engage Managers in Agile? Stop Using Agile Jargon

Bruno Collet - Agility and Governance - Wed, 12/07/2016 - 23:15
If you want to engage managers and executives in Agile, avoid agile jargon from existing agile frameworks and models such as: sprint (Scrum), servant-leader, release train (SAFe), or made-up words such as anticipaction (Agile Profile).

Not only do managers and executives not share a common understanding of agile terminology, but more importantly they do not necessarily have any interest in "becoming agile" in the first place. At first this might seem like an intractable obstacle, but it's actually quite refreshing. Indeed, it helps to talk in terms of real business and management challenges instead of focusing on agile concepts and methodology. And guess what? Agile values and practices can be expressed quite well in common management language.
"Agile values and practices can be expressed quite well in common management language."
Actually, the broader and higher my interventions for agile transformation, the less I talk about agile.

In my opinion, promoters of agile methods, tools and frameworks have packaged agile commercially, creating a layer of opacity that backfires when we address management and organizational agility.

"The higher my level of intervention for agile transformation, the less I talk about agile".
Agile jargon originated from two broad sources that have shaped the general (mis)understanding of agile today. First, the software development grassroots, which became popular more than a decade ago with frameworks such as Scrum and Extreme Programming (XP) and is now going enterprise-level with SAFe and its many siblings. And second, strategy and management firms and business schools such as Gartner and Harvard, as well as leading management authors such as Steve Denning (Radical Management) and Jurgen Appelo (Management 3.0), who joined the trend a few years ago and developed their own agile terminology.

Let's have a look at a list of agile terms that have created confusion in many, many discussions (I've been there…) and find an equivalent for each in common management language.

  • VUCA = volatile (leave UCA for long-form explanation)
  • Bimodal = organized both for efficiency and innovation (or for predictability and exploration)
  • Velocity = speed of value delivery
  • User story, epic = product feature
  • Sprint = x-week iteration (where x is a number)
  • MVP, minimum viable product = first partial solution
  • Increment, release = new features delivered to users
  • Product owner = client representative
  • Scrum Master = team facilitator
  • Synchronize = fast decision-making across hierarchy and functions
  • Servant-leader = coach-style manager
  • Product backlog = prioritized list of features
  • Whole team, squad = end-to-end team
  • Budget-boxed = fixed budget
  • Time-boxed = fixed delivery date
  • Variable scope = dynamic prioritized list of features
  • Delivered value = must refer to context-specific metrics (that's a tricky one: left unexplained, many take it to mean ROI, which it is not)
Purists will find that these are not entirely equivalent. I can live with that. The goal is to get a message through, not to write a glossary.
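As a purely illustrative sketch (the terms and translations are from the list above; the code itself is hypothetical), the mapping is simple enough to keep as a lookup table, say for building a jargon-free version of a slide deck or status report:

```python
# Hypothetical lookup table pairing agile jargon with the plain
# management language suggested in the list above.
JARGON_TO_PLAIN = {
    "sprint": "x-week iteration",
    "velocity": "speed of value delivery",
    "user story": "product feature",
    "MVP": "first partial solution",
    "product owner": "client representative",
    "Scrum Master": "team facilitator",
    "servant-leader": "coach-style manager",
    "product backlog": "prioritized list of features",
    "time-boxed": "fixed delivery date",
}

def translate(text: str) -> str:
    """Replace known jargon terms with their plain-language equivalents."""
    for jargon, plain in JARGON_TO_PLAIN.items():
        text = text.replace(jargon, plain)
    return text

print(translate("The product owner reorders the product backlog every sprint."))
```

As with the list itself, the translations are lossy on purpose: the goal is to get a message through, not to write a glossary.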

Share your experience!


Bruno Collet helps organizations benefit from agility, mainly in the area of digital transformation. He develops the Metamorphose framework to accelerate transformation results. His career has led him to collaborate with organizations in Montreal, Belgium and elsewhere, including the Société de Transport de Montréal (STM), National Bank of Canada, Loto-Québec and Proximus (formerly Belgacom). Holder of MBA and MScIT degrees and of PMP and PMI-ACP certifications, Bruno Collet is also the author of a blog on agile transformation and a speaker at PMI, Agile Tour Montréal and Agile China.

Disclaimer: trademarks used in this article are used for illustrative purpose only. The article is not sponsored by, or associated with, the trademark owners.
Categories: Blogs

Stable Teams – Predictability Edition

Leading Agile - Mike Cottmeyer - Wed, 12/07/2016 - 15:00

Today I am addressing a key component to help teams succeed. Stable Teams.

Stable Teams Defined

I find that many people interpret those two words in many different ways. Here’s my definition.

A stable team is a team that stays together, ideally for multiple releases or quarters. Stable teams have a few characteristics:

  1. Individuals on the team only belong to one team.
  2. The Team has one backlog.
  3. The backlog is formed by a consistent entity.
  4. The team stays together for multiple releases or quarters barring life events like illness, HR issues, etc.

Stable teams can be applied at any level in an organization. Communities of practice, agile delivery teams, programs and portfolios all have a backlog.


Plenty of studies have been done on appropriate team size and on how productive teams can be when they stay together. Here are the key points:

  1. Teams that stay together have a higher throughput.
  2. Teams that stay together are more predictable.
  3. Teams that stay together are happier.
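One way to make the predictability claim concrete is to look at the variability of a team's throughput across iterations. A hypothetical back-of-the-envelope sketch (the numbers below are invented for illustration, not from any study):

```python
from statistics import mean, stdev

def coefficient_of_variation(throughputs):
    """Relative variability of per-iteration throughput: lower means more predictable."""
    return stdev(throughputs) / mean(throughputs)

# Hypothetical throughput samples (completed items per iteration).
stable_team = [21, 23, 22, 24, 22, 23]      # members stayed together
reshuffled_team = [10, 25, 14, 30, 9, 26]   # members swapped between projects

print(coefficient_of_variation(stable_team))
print(coefficient_of_variation(reshuffled_team))
```

A team with a low coefficient of variation can be forecast from its history; a team whose membership keeps churning produces numbers that are much harder to plan against.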
The Softer Side

Our community is multi-dimensional in how agile is implemented. I implement a structure-first approach. Some implement a cultural approach, aiming to change the mindset of the organization. Still others implement a practices approach, where practices are taught and the teams are responsible for the outcome.

The reason I choose a structure approach is fairly straightforward. It's not that I don't care about culture and practices, but it's out of respect for the longevity of the team, the culture, and the practices. I am intentionally creating a container that protects the team. Protects them from what, you might ask?


With a cultural approach, the worst thing I can do is teach accountability, self-organization, and how to be a generalist, and then have that subverted by team members being swapped off of teams. Teams that stay together figure out how to use each other's unique strengths over time. They can be responsible for their own outcome and held accountable for it too. They can figure out how to best do the job at hand. If I pull someone out of the team, that kills accountability, and their self-organization is blown to bits. Go ahead and try to hold them accountable. They will hate you for the long hours they pull trying to get the work done. Respect the structure, and seek to change culture after protecting the container.


With a practices approach, the container for the team to take those practices and run with them isn't created. The team may not be able to effectively create a working agreement, or retrospect to come up with new ways to improve or implement the practice. They can attempt to self-organize, but it's beyond their control much of the time.


So I turn to structure. If I structure the team, protect the team, and support them, they have the best chance for success. Not all teams will succeed. Some will improve slower than others. But they have a shot at changing their own culture. We can hold them accountable for their outcome because we have enabled them to do so. They can become generalists over time so when life events happen, the team can cover and prop up their team member. When attrition occurs, the team can absorb the change more readily.

Assuming that all made sense, what the heck is preventing stabilization of teams?

To list a few:

1.  Focus on individual utilization.

Some organizations focus on maximizing individual utilization; some even overbook. In large PMO organizations that focus on utilization of individuals, there is a sense that team members aren't always working. This just isn't accurate. While in the short term a team member or two can be underutilized, that is a maturity-of-practice issue that can be overcome. Utilization can indeed be improved. Not only that, but the people managing how much individuals are used across the organization can focus on something else, because utilization is stable too. Knowing how much capital investment you have versus opex is much more valuable than scheduling someone's percentage of work on a feature.

2. Side Projects

Kill them with fire. Actually no… some side projects make great candidates for the backlog of the organization. Providing a single source of truth gives clarity to teams. Tech debt can be prioritized alongside features. Teams need to be able to quantify the value. That's generally easy to teach them. In the end, they will find that they get much more done. On another note, if the team is receiving work on the side, under the radar, that needs to stop. It's an impediment to team stability. Find the source, figure out why it's happening, and eradicate it. That's not simple; people have side projects for a reason. They are trying to solve a problem. Figure out if it's an education issue, an alignment issue, or otherwise.

3. Dependencies on other teams

Dependencies are, in my opinion, an unavoidable cost of doing business. There are some really cool things we can do to get rid of them, but in the meantime, we need to shoot for the most capable team with the fewest dependencies on other teams that we can. Capability modeling can be a life raft here. Structuring around your existing capabilities and giving teams what they need to take care of those capabilities is critical to predictability. Dependencies still need to be managed, but far less so if we are smart about how we staff the team and figure out capability ownership.

Stable Teams are a non-negotiable part of a predictable system.  Sure, there are outliers, but by and large, stable teams are one of the biggest ways you can help yourself and your organization.  If you can’t figure out how to do it or are not empowered, get help.  And remember, this isn’t just about Scrum teams, it’s about teams.


The post Stable Teams – Predictability Edition appeared first on LeadingAgile.

Categories: Blogs

Darwin Perspective on Agile Architecture

TV Agile - Wed, 12/07/2016 - 11:59
Through a comparison of Darwin's theory of evolution to software development, this talk tries to answer how to build sustainable, agile and antifragile software systems, with architectures that are responsive and adaptable to the challenges of a volatile business environment.
Categories: Blogs

The Simple Leader: The Two Pillars

Evolving Excellence - Wed, 12/07/2016 - 11:38

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen


The majority of Lean transformations will fail. Sorry, that's just the sad truth. The reason for this failure rate is that Lean has two fundamental pillars; most organizations don't know about them, let alone understand the importance of the second. These two pillars are:

  • Create value from the customer’s perspective through continuous improvement
  • Respect for people

Lean actually differs slightly from the traditional Toyota Production System in the first pillar. Most organizations trying to become “Lean” focus on reducing waste while Toyota promotes creating flow. Although the approach is different, the tools are generally the same and the end goal is still to create value from the perspective of the customer. (One danger of the first approach is that focusing only on waste reduction can lead to an emphasis on cost-cutting instead of true improvement.)

In Lean thinking, there are seven primary forms of waste: unnecessary transport, unnecessary inventory, unnecessary motion, waiting, overproduction, overprocessing, and defects. Others add the waste of human potential, where employees are thought of as just a pair of hands instead of a brain with creativity, knowledge, and experience.

These forms of waste are present in manufacturing as well as office and administrative environments. In fact, you can even find them at home. Did you cook too much food for dinner last night? Did you have to wait in line to take a shower? Did you have to search for hours to find a tool in your cluttered garage? All these are types of waste as defined by Lean principles.

It is important to remember that something is only waste if it does not create value from the customer’s perspective. Identifying the customer and then looking for waste (and value) from the perspective of the customer is far harder than it sounds. Some activities may appear to be waste for one customer and not another. Is a long commute a waste of time? To some it is, to others it is a valuable time to relax and refocus. To add even more complexity, some forms of waste may even be necessary, such as regulatory paperwork. Another example is advertising, an expenditure that doesn’t generally add value for the customer but is necessary to help sustain the business.

Even if a company is good at eliminating waste, it still needs to implement the second pillar—respect for people—if it wants to be successful. Respect for people grew out of Toyota’s concept of “autonomation.” Autonomation means “automation with a human touch.” At Toyota and in TPS, machines aid humans, not vice versa. To this day, when you visit a Toyota factory you will see far more humans than at comparable factories of other automakers. Robots are primarily used in dangerous processes and to lift heavy assemblies.

I have come to believe that respect for people is the most important pillar of Lean. However, because it is the least understood (or accepted), it is often the primary reason why most Lean transformations fail. Companies focus on eliminating waste and do not emphasize having respect for people, which causes the whole system to collapse. People are the core value-creators of a Lean organization, something many companies do not understand. Toyota is known for saying “we develop people before we make cars.”

Respect for people takes many forms. First, it aims to create an environment for employees where ideas, knowledge, creativity, and experience are valued. Traditional accounting practices measure the cost of the pair of hands, but do not measure the value of experience and creativity in the brain attached to the pair of hands. The lack of a defined value offset is why traditional accounting drives decisions to move factories to countries with lower labor costs, even if hundreds or thousands of experienced, creative people are replaced by even more people with less knowledge.

Respect for people also applies to customers. Every customer is considered to be very important, and their problems are taken seriously. This is part of why Toyota failed with their series of recalls in 2009 and 2010. Instead of holding to a strong culture of respect for its customers, the company tried to play down the stuck accelerator problem for years before the negative perception and press became too great. Imagine how much different those years—and the resulting financial and reputational costs—would have been if Toyota had publicly treated each incident as being extremely serious.

Respect should also be promoted among a company’s suppliers and community, which is why a more accurate translation of Toyota’s “respect for people” is really “respect for humanity.” Engaging the entire value stream and business environment in continuous improvement efforts and knowledge development can pay huge rewards in terms of trust, ideas, and support.

Lean’s reputation is not always one of having respect for people. When Womack and Jones wrote their book in 1990, they could not have anticipated the problems associated with “Lean” rhyming with “mean.” Not a day goes by without some reference to a “Lean and mean” organization. This is a misperception. Real Lean is definitely not mean to the people implementing it.

Real Lean companies leverage productivity improvements to capture new business, which allows them to keep the people impacted by those improvements. Some Lean companies go so far as to pledge that there will be no layoffs due to Lean efforts. This is often necessary to get buy-in for what can appear to be job-threatening improvement programs. And Lean companies like Toyota are generally not unionized simply because the employees are already treated with respect and often paid better than at comparable organizations.

At its most fundamental level, Lean is about enabling people to create improvements that add value for the customer. Leadership is also about people, including ourselves. Together, this is the foundation for our exploration of how Lean can help transform personal and professional leadership.

Categories: Blogs

A3 Templates for Backbriefing and Experimenting

AvailAgility - Karl Scotland - Tue, 12/06/2016 - 13:59

I’ve been meaning to share a couple of A3 templates that I’ve developed over the last year or so while I’ve been using Strategy Deployment. To paraphrase what I said when I described my thoughts on Kanban Thinking, we need to create more templates, rather than reduce everything down to “common sense” or “good practice”. In other words, the more A3s and Canvases there are, the more variety there is for people to choose from, and hopefully, the more people will think about why they choose one over another. Further, if people can’t find one that’s quite right, I encourage them to develop their own, and then share it so there is even more variety and choice!

Having said that, the value of A3s is always in the conversations and collaborations that take part while populating them. They should be co-created as part of a Catchball process, and not filled in and handed down as instructions.

Here are the two I am making available. Both are used in the context of the X-Matrix Deployment Model. Click on the images to download the pdfs.

Backbriefing A3


This one is heavily inspired by Stephen Bungay’s Art of Action. I use it to charter a team working on a tactical improvement initiative. The sections are:

  • Context – why the team has been brought together
  • Intent – what the team hopes to achieve
  • Higher Intent – how the team’s work helps the business achieve its goals
  • Team – who is, or needs to be, on the team
  • Boundaries – what the team are or are not allowed to do in their work
  • Plan – what the team are going to do to meet their intent, and the higher intent

The idea here is to ensure a tactical team has understood their mission and mission parameters before they move into action. The A3 helps ensure that the team remain aligned to the original strategy that has been deployed to them.

The Plan section naturally leads into the Experiment A3.

Experiment A3


This is a more typical A3, but with a bias towards testing the hypotheses that are part of Strategy Deployment. I use this to help tactical teams in defining the experiments for their improvement initiative. The sections are:

  • Context – the problem the experiment is trying to solve
  • Hypothesis – the premise behind the experiment
  • Rationale – the reasons why the experiment is coherent
  • Actions – the steps required to run the experiment
  • Results – the indicators of whether the experiment has worked or not
  • Follow-up – the next steps based on what was learned from the experiment

Note that experiments can (and should) attempt to both prove and disprove a hypothesis to minimise the risk of confirmation bias. And the learning involved should be “safe to fail”.
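To show how the template's sections hang together, here is a hypothetical sketch of an Experiment A3 as a small record (the field names follow the list above; the class and example values are mine, not part of the downloadable PDFs):

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentA3:
    """Sections of the Experiment A3, per the list above."""
    context: str                 # the problem the experiment is trying to solve
    hypothesis: str              # the premise behind the experiment
    rationale: str               # the reasons why the experiment is coherent
    actions: list = field(default_factory=list)  # steps required to run it
    results: str = ""            # indicators of whether it worked or not
    follow_up: str = ""          # next steps based on what was learned

# Hypothetical example of a populated A3.
a3 = ExperimentA3(
    context="Handoffs between development and test delay releases",
    hypothesis="Pairing testers with developers will cut cycle time",
    rationale="Removes a queue between two specialist roles",
    actions=["Pair on three stories", "Measure cycle time before and after"],
)
print(a3.hypothesis)
```

The point of a structure like this is not the tooling; as noted above, the value of an A3 is in the conversations that happen while it is being co-created.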

Categories: Blogs

Agendashift, Cynefin and the Butterfly Stamped

AvailAgility - Karl Scotland - Mon, 12/05/2016 - 18:08

The butterfly who stamped

I’ve recently become an Agendashift partner and have enjoyed exploring how this inclusive, contextual, fulfilling, open approach fits with how I use Strategy Deployment.

Specifically, I find that the Agendashift values-based assessment can be a form of diagnosis of a team or organisation’s critical challenges, in order to agree guiding policy for change and focus coherent action. I use those italicised terms deliberately as they come from Richard Rumelt’s book Good Strategy/Bad Strategy, in which he defines a good strategy kernel as containing those key elements. I love this definition as it maps beautifully onto how I understand Strategy Deployment, and I intend to blog more about this soon.

In an early conversation with Mike when I was first experimenting with the assessment, we were exploring how Cynefin relates to the approach, and in particular the fact that not everything needs to be an experiment. This led to the idea of using the Agendashift assessment prompts as part of a Cynefin contextualisation exercise, which in turn led to the session we ran together at Lean Agile Scotland this year (also including elements of Clean Language).

My original thought had been to try something even more basic though, using the assessment prompts directly in a method that Dave Snowden calls “and the butterfly stamped”, and I got the chance to give that a go last week at Agile Northants.

The exercise – sometimes called simply Butterfly Stamping – is essentially a Four Points Contextualisation in which the items being contextualised are provided by the facilitator rather than generated by the participants. In this case those items were the prompts used in the Agendashift mini assessment, which you can see by completing the 2016 Agendashift global survey.

This meant that as well as learning about Cynefin and Sensemaking, participants were able to have rich conversations about their contexts and how well they were working, without getting stuck on what they were doing and what tools, techniques and practices they were using. Feedback was very positive, and you can see some of the output in this tweet:

Four of the #Agendashift #Cynefin results from tonight’s #AgileNorthants meetup.

— Karl Scotland (@kjscotland) November 29, 2016

I hope we can turn this into something that can be easily shared and reused. Let me know if you’re interested in running it at your event. And watch this space!

Categories: Blogs

The Break-Up

Leading Agile - Mike Cottmeyer - Mon, 12/05/2016 - 15:00

True Story.

It was a painful breakup. I thought things were going fine, but “it’s not you,” she said, “it’s me”… and just like that, our relationship was over.

I really didn’t see it coming. We have been working together for a long time now.

“Really? Really?” I said. “You no longer need QA?”

She went on to explain that as her team has gotten better and better at building software, the whole team has taken more ownership of quality. They are on a pretty consistent cadence now and have working tested software ready to ship every two weeks…which is just about the time it used to take to just get through a QA testing cycle. “We are just moving way too fast for that nowadays” she said.

At first I was incredulous. Everyone needs quality assurance! QA is an essential role! You can’t just pump out code and hope it is good enough. The customers would crucify us for that. What she was saying sounded ridiculous, but the more she explained it, the more it started to make sense. Her team had started test-driving development a couple of years ago and now they have really high levels of confidence in the entire codebase. “It’s a team quality thing” she said.

I knew this team was good. The code is both Clean and SOLID.

Their quality metrics are on par with any other team’s. They consistently produce 90% automated test code coverage, including code branches. Code reviews make sure that everything is covered, and automated jobs break whenever coverage decreases. Ultimately, they have virtually no defects that get into production; it is some of the best code there is. They also have automated acceptance tests, UI tests, and load and performance tests. They have pretty much automated the entire testing pyramid.
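The "jobs that break whenever coverage decreases" can be sketched as a simple ratchet check in the build pipeline. This is a hypothetical illustration (the story doesn't name the team's actual tooling): compare the coverage figure from the current run against the baseline from the last green build, and fail on any drop.

```python
def check_coverage(current: float, baseline: float) -> bool:
    """Ratchet gate: pass only if coverage has not dropped below the baseline."""
    return current >= baseline

# Hypothetical figures: baseline from the last green build, current from this run.
print(check_coverage(90.2, 90.0))  # coverage went up: build passes
print(check_coverage(88.5, 90.0))  # coverage dropped: build breaks
```

In a real pipeline the baseline would be stored alongside the build (and ratcheted upward on success), so coverage can only ever go one way.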

Testing Pyramid

Test-driven development (TDD) was just the start though. Soon after the programmers were test-driving, the QA folks started automating UI tests in Selenium. To learn how, they had been pairing with programmers, and their relationships in the team were the best of any of my QA teams. After that, they started using CucumberJVM to build behavior tests with the business analysts and the product owners. The pace of development with this team was better than any other team's, and I had pretty much let the team run on autopilot because they never seemed to have the same kinds of problems the other teams have.
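For readers who haven't seen the rhythm of TDD, here is a minimal sketch of the red-green cycle: the test is written first and fails, then just enough production code is written to make it pass. (This is a hypothetical example, not the team's code; their tooling was JVM-based, but the idea is language-agnostic.)

```python
import unittest

# Step 1 (red): the test is written before the production code exists,
# so it fails at first.
class TestOrderTotal(unittest.TestCase):
    def test_total_sums_line_items(self):
        line_items = [("widget", 2, 3.0), ("gadget", 1, 5.0)]
        self.assertEqual(order_total(line_items), 11.0)

# Step 2 (green): the simplest production code that makes the test pass.
def order_total(line_items):
    """Sum quantity * unit price over (name, quantity, price) tuples."""
    return sum(qty * price for _, qty, price in line_items)

# Run the suite programmatically (exit=False keeps the process alive).
unittest.main(argv=["tdd-sketch"], exit=False)
```

Repeating that small cycle for every change is what builds the codebase-wide confidence the team describes.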

“We no longer have a QA verified cycle” she said. “We have renamed it to ‘Team Verified’, because the whole team does testing before we demo it”. She went on. “We’ve also taken ownership of exploratory testing. The whole team does exploratory testing every week, and this has really been a place where the QA folks have been a big help.”

“So you do need us” I said. “Oh yes” she said, “your people are great, but let me explain. We still want all the quality experts we can get. As long as they are committed to the team, are committed to continuing their learning, and stay with the team every day”.
She looked right at me and said, “What we don’t need is all the management oversight we used to have”.

I was beginning to understand. I realized that she still wanted quality. In fact, she was more dedicated to quality than just about any other team in the organization and we are a huge organization. She was advocating for a level of quality the other teams could only hope to duplicate.

She went on to say, “we’d like to keep some of your team members if we can”. “What do you mean?” I said. She went on…”Most of your team members have embraced the changes we have introduced. They love test automation, learning and the collaborative nature of how we produce quality now. And those folks have a place on our team.” She went on, “The QA folks really took a leadership role with exploratory testing. We want to keep them because they see things differently. They look for anomalies and edge cases and they really help the team round out all the aspects of testing that we need.”

Granted, this team has really taken agile adoption and self-organization seriously. They really epitomize teamwork. They do not behave like an assembly line, passing work from one role to another. They’ve eliminated a lot of the limitations of specific roles in the team room through organized cross-training and pairing. For just about any task, most anyone can pair with someone if not complete the task themselves outright.

“But we move fast now, as a team” she said. “And I guess what I’m saying, is that we want to keep your folks on the team and…well, there’s still a lot more that I can go into, but the short answer is…We just don’t need a QA Lead anymore.”

I was crestfallen. The breakup wasn’t painful from her perspective. The pain was all in my head. It was just a matter of my realization that self-organizing teams don’t need as much management as a traditional team. The whole team owns quality now, and, I suppose, in the long run, that’s going to turn out to be a really good thing.

The post The Break-Up appeared first on LeadingAgile.

Categories: Blogs