
Feed aggregator

Dealing with Process Debt by Taking a First Agile Step

Ben Linders - Tue, 11/03/2015 - 14:39
When you're facing problems and feel that there's no room for improvement, the situation will only get worse. You're building up process debt, a debt that you will have to repay as soon as possible to prevent going bankrupt. Let's explore how you can deal with process debt by taking a first step on a journey of continuous improvement to increase your agility. Continue reading →
Categories: Blogs

Agile, but still really not Agile? What Pipeline Automation can do for you. Part 3.

Xebia Blog - Tue, 11/03/2015 - 14:07

Organizations are adopting Agile, and teams are delivering on a feature-by-feature basis, producing business value at the end of every sprint. Quite possibly this is also the case in your organization. But do these features actually reach your customer at the same pace and generate business value straight away? And while we are at it: are you able to actually use feedback from your customer and apply it in the very next sprint?

Possibly your answer is “No”, which I see very often. Many companies have adopted the Agile way of working in their lines of business, but for some reason ‘old problems’ just do not seem to go away...

Hence the question:

“Do you fully capitalize on the benefits provided by working in an Agile manner?”

Straightforward Software Delivery Pipeline Automation might help you with that.

In this post I hope to inspire you to think about how Software Development Pipeline automation can help your company to move forward and take the next steps towards becoming a truly Agile company. Not just a company adopting Agile principles, but a company that is really positioned to respond to the ever changing environment that is our marketplace today. To explain this, I take the Agile Manifesto as a starting point and work from there.

In my previous posts (post 1, post 2), I addressed Agile Principles 1 to 4 and 5 to 8. Below, I explain how automation can help you with Agile Principles 9 to 12.


Agile Principle 9: Continuous attention to technical excellence and good design enhances agility.

In Agile teams, technical excellence is achieved by complete openness of design, transparency on design implications, reacting to new realities, and using feedback loops to continuously enhance the product. However, many Agile teams still seem to operate in the blind when it comes to feedback on build, deployment, runtime and customer experience.

Automation makes it much easier to build feedback loops into the Software Delivery Process. Whatever is automated can be monitored and immediately provides insight into the actual state of your pipeline. Think of things like trend information, test results, statistics, and current health status at the press of a button.

Accumulating actual measurement data is an important step; pulling this data up to a level of abstraction that the complete team can understand in the blink of an eye is another. Go that extra mile and use dashboarding techniques to make the data visible. Not only is it fun to do, it is very helpful in making project status come alive.


Agile Principle 10: Simplicity--the art of maximizing the amount of work not done--is essential.

Many of us may know the quote: “Everything should be made as simple as possible, but no simpler”. For “Simplicity--the art of maximizing the amount of work not done”, wastes like ‘over-processing’ and ‘over-production’ are minimized by showing the product to the Product Owner as soon as possible and at frequent intervals, preventing gold plating and a build-up of features in the pipeline.

Of course, the Product Owner is important, but the most important stakeholder is the customer. To get feedback from the customer, you need to get new features not only to your Demo environment, but all the way to production. Automating the Build, Deploy, Test and provisioning processes helps organizations achieve that goal.

Full automation of your software delivery pipeline provides a great mechanism for minimizing waste and maximizing throughput all the way into production. It will help you to determine when you start gold plating and position you to start doing things that really matter to your customer.

Did you know that, according to a Standish report, more than 50% of functionality in software is rarely or never used? These aren’t just marginally valued features; many are no-value features. Imagine what could be achieved if we actually knew what is used and what is not.


Agile Principle 11: The best architectures, requirements, and designs emerge from self-organizing teams.

Traditionally, engineering projects were populated with specialists. This was based on the concept of dividing labor, pushing each specialist to focus on their own field of expertise. The interaction designer designs the UI, the architect creates the required architectural model, the database administrator sets up the database, the integrator works on integration logic, and so forth. Everyone was working on an assigned activity, but as soon as the components were put together, nothing seemed to work.

In Agile teams it is not about a person performing a specific task, it is about the team delivering fully functional slices of the product. When a slice fails, the team fails. Working together when designing components helps find an optimal overall solution instead of many optimized sub-solutions that need to be glued together at a later stage. For this to work you need an environment that emits immediate feedback on the complete solution, and this is where build, test and deployment automation come into play. Whenever a developer checks in code, the code is added to the complete codebase (not just the developer's laptop) and is tested, deployed and verified in the actual runtime environment as well. Working with fully functional slices gives the team the opportunity to experiment as a whole and start doing the right things.


Agile Principle 12: At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

To improve is to change, to be perfect is to change often. Self-learning teams and adjustments to new realities are key for Agile teams. However, in many organizations teams remain shielded from important transmitters of feedback like customer (usage), runtime (operational), test results (quality) and so on.

The concept of Continuous Delivery is largely based upon receiving reliable feedback and using that feedback to improve. It is all about enabling the team to do the right things and to do things right. An important aspect here is that information should not become biased. In order to steer correctly, actual measurement data needs to be accumulated and presented in an understandable way to the team. Automation of the Software Delivery process allows the organization to gather real data, perform real measurements on real events, and act accordingly. This is how one can start acting on reality instead of hypotheses.


Maybe this article is starting to sound a bit like Steve Ballmer's mantra for developers (oh my goshhhh), but then for "Automation". Just give it five minutes... You work Agile, but are your customers really seeing the difference? Do they now have a better product, delivered faster? If the answer is no, what could help you with this? Instantly?


Michiel Sens.

Categories: Companies

Device Research, the Agile Way

Agile Game Development - Tue, 11/03/2015 - 02:18
New device development doesn't always start out with a single clean vision of the final product.  Often the product is a bit fuzzy because the knowledge of all the capabilities is uncertain.  There can be different visions fragmented by the different domains.  For example, marketing's vision isn't the same as the vision from software engineering, which is different from electrical/mechanical engineering's vision.  There are unknown areas of overlap as well as areas of non-overlap.

Still, there are questions about the overall vision that have to guide the product's R&D.  At first they can be expressed as questions that research seeks to answer:
  • Can we put the capabilities into a small enough package to be marketable?
  • Will the necessary processing power keep our cost and heat dissipation below, and our battery duration above, an acceptable level?
  • Etc...
There are many mutually dependent questions that need to be answered, and some are critical.  How many products have you seen fail because, although they may have done most things right, they did a few crucial things wrong (like battery life)?

Unfortunately for hardware-based products, we usually can't iterate rapidly on the entire product from the start, at least not well enough to discover these issues.  We still want a cross-discipline approach to our vision, even if  development doesn't support it.

Consider a simple scenario:  We have a set of new technology that we want to leverage into a
new product, for example the first generation iPod.  A key technical development that allowed the iPod was the famous "click wheel".  The click wheel allowed for a tactile intuitive user interface.  It was a big part of the iPod's success, but not the only part.  The design aesthetic, storage space and battery life were all part of the device's success.

(note: although Apple has been a client of mine, I did not work with the iPod team or know anything about the iPod development.  This example is speculative or based on published descriptions from employees).

Before the iPod, the market for mp3 players was saturated by hard-to-use, cheap players.  The vision for the iPod started by addressing what the current market lacked.  So the team explored design aesthetics, batteries, small, high-capacity storage devices and interfaces.  All of these areas of exploration overlapped with the vision of a small, easy-to-use player that could store many songs and which the user would be proud to own.

There was a certain amount of research that went into exploring each area of the iPod.  1.8-inch hard drives had been out for a while, and newer 1-inch drives showed eventual promise.  Cost and capacity factors led to the 1.8-inch drives being chosen.  This had an impact on all the other areas.

So how do we work with separate groups researching separate areas of a new device, when it's too early to precisely define the device and impractical to iterate on a nearly-shippable version?

Can Agile/Lean Be Used?

Agile and lean practices are designed to explore emerging new products.  They aren't restricted to software products.  Their benefits can be applied to research as well.  However, implementations of agile for software development focus on a few areas that might not be available to most new device developers:
  • We can't have "potentially shippable" versions of the device every 1-3 weeks.
  • We often don't have a clear vision of the device we want to build until we do some research.
  • Stakeholders can be very nervous that research is open-ended and want detailed plans.
  • Researchers have trouble fitting their efforts into 1-3 week time-boxes that produce something that meets a "definition of done".
The concerns raised about applying agile can come from both the stakeholders and researchers as well:
  • Stakeholders: We don't want to have open-ended research with no end in sight.  We want to use more traditional project management techniques to put limits on the cost and time spent.  We need more control on a day-to-day basis.
  • Researchers: We can't estimate iterations.  They are too short to produce any "valuable" result that meets any definition of done.
To overcome these limits and concerns, I list some proven tips for using agile for R&D work:

Align your vision with research goals

Research has to align with the ultimate product's vision.  But sometimes a single product's vision depends on the results of research.   How do we reconcile these mutually dependent things?

The vision for a new device can start with a set of capabilities in a concept that we assume will change as we learn more.  It's critical that the people in R&D have a shared broad vision of the product they are researching.  This is where chartering techniques can help.  These techniques help create a shared vision far more effectively than passing around a large document:
  • Building "look-and-feel" mock-up devices
  • Creating a hypothetical demo video of the future device
  • Short customer-oriented presentations
Check out some of the pitch videos made for crowd-sourcing campaigns.  Many of these show what their devices might look like and how they would be used, so they generate enough excitement to draw millions of dollars of funding.  Isn't that level of excitement just as valuable for the people making your product?

Use Spikes

Spikes are user stories that limit the amount of time spent working on them.  They also are meant to produce knowledge that can't be estimated in scope up front.

An effective way of using spikes is called "Hypothesis-Driven Development" (HDD). One template for HDD spikes is:

We believe that [this capability] will result in [this outcome]. We will know we succeeded when [we see this measurable signal]. We will give up when [this limit is reached].

An example of this is:

We believe that implementing algorithm X will result in sufficient accuracy. We will know we succeeded when we get 95% accuracy from our test data in the lab. We will give up when we spend a full sprint without seeing this accuracy improve beyond 50%.

Set-Based Design

Set-based design is a design approach for considering a wide range of separate domains of research that have overlapping and non-overlapping areas:

The approach is to explore an emerging product vision by exploring the range of separate domains and the areas where they overlap (the orange area).  The idea is for research activities to refine the entire domain and converge on the best shared solution.  This is fundamentally different from "point-based design", where the solution is chosen up front and the domains are forced to implement that point.  Knowledge of what works and what doesn't usually emerges as deviations from the point-based plan and is treated as a negative impact on cost or schedule.

For example, suppose the iPod team had decided that the first iPod would have a touch-screen driven interface with solid state memory.  That's a potentially better product than the first generation iPod (in fact it's what the iPod eventually became), but in 2001, due to the existing technology, the memory may have been limited and the touch screen too battery draining.  Having gone down the long path of designing this device, Apple might have released a compromised or much-delayed product.

The Cost of Set-based Design

Set-based design can cost more in the short term, but it can save your product in the long term.  To illustrate: if we had several contenders for a technical solution, each with various risks and costs associated, how would we work?  If each took a month to evaluate, we could be pushing out the ship date by many more months.

The answer is to research the solutions in parallel and to focus on failing each fast.  For the example of a touch-screen vs. click wheel on the first iPod, we'd focus on the areas of risk first.  How does each feel?  What is the cost of implementing each?  What is the power consumption?  We'd try to get these answers addressed  before making any further decisions (an iPod example is the creation of dozens of prototype cases, which Steve Jobs would choose from).

This tactic of avoiding decisions made without sufficient knowledge is referred to as "deferring solutions until the last responsible moment".  We make better decisions when we know more, but we don't want to be in manufacturing when we decide to change the case.

These days, with on-demand rapid prototyping, 3D printing, emulation, etc., we can shorten the experimental cycle on hardware dramatically, allowing us to do set-based design far more effectively.

Other Useful Practices

Stage Gates
A project to create and ship a new device will change states and practices as it progresses.  Work will transition from researching foundational forms and technologies, to prototyping the whole device, to designing the production flow and moving into production.  These stages can't be combined into each iteration the way software-only products can, but the traditional problems that stage-gate development encounters can be mitigated with lean practices (this will be addressed in future articles).

Critical Path Management
The emerging vision of the device and the knowledge of what's possible will lead to the identification of a series of dependent activities or goals that need to occur before the device is ready for the prototype or production stage.  Identifying these paths and focusing efforts on improving flow through them starts early.

Many advances in hardware development practices (such as 3D printing, field-programmable gate arrays, etc.) have allowed teams developing hardware to benefit from the practices that software developers have been exploring for over a decade.  While iterating on an electrical or mechanical feature isn't as rapid as recompiling code, it's allowing more and more iteration and exploration into making better products.

Categories: Blogs

Bringing Agile to the Next Level

Xebia Blog - Mon, 11/02/2015 - 23:22

I finished my last post with the statement that Agile will be applied on a much wider scale in the near future: within governmental organizations, industry, startups, on a personal level, you name it.  But how?  In my next posts I will deep-dive into this exciting story lying in front of us in five steps:

Blogpost/Step I: Creating Awareness & Distributing Agile Knowledge
Change is a chance, not a threat.  Understanding and applying the Agile mindset and toolsets will help everyone ride the wave of change with more pleasure and success.  This is the main reason why I’ve joined initiatives like Nederland Kantelt, EduScrum, Wikispeed and Delft University’s D.R.E.A.M. Hall.

Blogpost/Step II: Fit Agile for Purpose
The Agile Manifesto was originally written for software.  Lots of variants of the manifesto have emerged over the last couple of years, serving different sectors and products.  This is a good thing, as long as the core values of the agile manifesto are respected.

However, agile is not applicable to everything.  For example, Boeing will never apply Scrum directly for producing critical systems.  They’re applying Scrum for less critical parts and R&D processes.  For determining the right approach they use the Cynefin framework.  In this post I will explain this framework, making it a lot easier to see where you can apply Agile and where you should be careful.

Blogpost/Step III: Creating a Credible Purpose or “Why”
You can implement a new framework or organization, hire the brightest minds and have loads of capital; in the end it all boils down to real passion and belief.  Every purpose should be spot on in hitting the center of the Golden Circle.  But how do you create this?

Blogpost/Step IV: Breaking the Status Quo and Igniting Entrepreneurship
Many corporate organizations are busy implementing, or have implemented, existing frameworks like SAFe or successful Agile models from companies like Netflix and Spotify.  But the culture change that goes with it is the most important step.  How do you spark a startup mentality in your organization?  How do you create real autonomy?

Blogpost/Step V: Creating Organic Organizations
Many Agile implementations do not transform organizations into being intrinsically Agile.  To enable this, organizations should evolve organically, as in Holacracy.  They will grow stronger and stronger through setbacks and uncertain circumstances.  Organic organizations will be more resilient and anti-fragile.  In fact, it’s exactly how nature works.  But how can you work towards this ideal situation?

Categories: Companies

Workshop on Agile Retrospectives in Tel Aviv

Ben Linders - Mon, 11/02/2015 - 22:51

I will give the full-day tutorial Make Your Agile Retrospectives More Valuable at the Agile Practitioners conference in Tel Aviv on January 26. Ticket sales have started; click here to register.

In this workshop I will explain the “what” and “why” of retrospectives and the business value and benefits that they can bring. Attendees will practice one or more retrospective exercises and learn how to facilitate retrospectives. I … Continue reading →

Categories: Blogs

3 Fights Each Day

J.D. Meier's Blog - Mon, 11/02/2015 - 18:55

“Sometimes the prize is not worth the costs. The means by which we achieve victory are as important as the victory itself.”
― Brandon Sanderson

Every day presents us with new challenges.  Whether it’s a personal struggle, or a challenge at work, or something that requires you to stand and deliver.

To find your strength.

To summon your courage, or find your motivation, or to dig deep and give it your all.

Adapt, Adjust, Or Avoid Situations

Sometimes you wonder whether the struggle is worth it.  Then other times you breakthrough.  And, other times you wonder why it was even a struggle at all.

The struggle is your growth.  And every struggle is a chance for personal growth and self-actualization.  It’s also a chance to really build your self-awareness.

For example, how well can you read a situation and anticipate how well you will do?  In every situation, you can either Adapt, Adjust, or Avoid the situation.  Adapt means you change yourself for the situation.  Adjust means you change the situation to better suit you.  And Avoid means stay away from situations where you would be like a fish out of water.  If you don’t like roller coasters, then don’t get on them.

So every situation is a great opportunity to gain insight into yourself as well as to learn how to read situations, and people, much better.  And the faster you adapt, the more fit you will be to survive, and ultimately thrive.

Nature favors the flexible.

The 3 Fights We Fight Each Day

But aside from Adapting, Adjusting, and Avoiding situations, it also helps to have a simple mental model to frame your challenges each day.  A former Navy Seal frames it for us really well.  He says we fight 3 fights each day:

  1. Inside you
  2. The enemy
  3. The “system”

Maybe you can relate?  Each day you wake up, your first fight is with yourself.  Can you summon your best energy?  Can you get in your most resourceful state?  Can you find your motivation?   Can you drop a bad habit, or add a good one?   Can you get into your best frame of mind to tackle the challenges before you?

Winning this fight sets the stage for the rest.

The second fight is what most people would consider the actual fight.  It’s the challenge you are up against.  Maybe it’s winning a deal.  Maybe it’s doing your workout.  Maybe it’s completing an assignment or task at work.  Either way, if you lost your first fight, this one is going to be even tougher.

The third fight is with the “system.”  Everybody operates within a system.  It might be your politics, policies, or procedures.  You might be in a school or a corporation or an institution, or on a team, or within an organization.  Either way, there are rules and expectations.  There are ways for things to be done.  Sometimes they work with you.  Sometimes they work against you.   And herein lies the fight.

In my latest post, I share some simple ways from our Navy Seal friends how you can survive and thrive against these 3 fights:

3 Fights We Fight Each Day

You can read it quickly.  But use the tools inside to actually practice and prepare so you can respond better to your most challenging situations.  If you practice the breathing techniques and the techniques for visualization, you will be using the same tools that the world’s best athletes, the Navy Seals, the best execs, and the highest achievers use … to do more, be more, and achieve more … in work and life.

Categories: Blogs

People: Resilience Creators, Not Resources

Johanna Rothman - Mon, 11/02/2015 - 14:38

I’ve been traveling, teaching, speaking and consulting all over the world. I keep encountering managers who talk about the “resources.” They mean people, and they say “resources.”

That makes me nuts. I blogged about that in People Are Not Resources. (I have other posts about this, too, but that’s a good one.)

I finally determined what we might call people. People are “resilience creators.” They are able to recognize challenges, concerns, or problems, and adjust their behavior.

People solve problems so the project can continue and deliver the product.

People fix problems so that customer support or sales can make the user experience useful.

People deliver products and/or services (or support the people who do) so that the company can continue to employ other people, deliver the company’s work, and acquire/retain customers.

We want resilient companies (and projects and environments). When we encounter a hiccup (or worse) we want the work to continue. Maybe not in the way it did before. Maybe we need to change something about what we do or how we do it. That’s fine. You hired great people, right?

People can solve problems so that the company can be resilient. To me, that means that the people are resilience creators, not “resources.”

People create resilience when they have the ability to solve problems because you asked them for results.

People create resilience when they understand the goals of the work.

People create resilience when they have the ability to work together, in a holistic way, not in competition with each other.

What would you rather have in your organization: resources or resilience creators?

Categories: Blogs

Handling Request/Reply Timeouts With RabbitMQ, Node and Promises

Derick Bailey - new ThoughtStream - Mon, 11/02/2015 - 14:30

In my RabbitMQ Patterns For Applications (email course / ebook), I talk about how you should limit the amount of time allowed for a Request/Reply scenario:

The typical use case for a request/response scenario is to retrieve data that a user needs to see, from some external system. When the request is made, a reasonable timeout can be set. When the timeout elapses, the requesting code should be notified that the response is missing so it can move on.

If you have a scenario where you need an extensive amount of time to process a message / request, you should look at long running processes and status updates (also covered in the course / book). 


But, what does it look like to elapse a timeout for a request/response scenario? Is that done in code, in RabbitMQ, in both, or … ???

The Request Timeout

Set up a request using Express to handle an HTTP request, and Rabbus (my little micro-servicebus, built on top of wascally) to handle the RabbitMQ Request, as this example shows.

There isn’t much to this example (that’s part of the point of Rabbus), but it does show how a simple request can be made from within an Express router. 

Assuming a response comes from the other end of RabbitMQ, everything will be good to go. Unfortunately, that expectation will fail time and time again, as the network hiccups, applications go down, and other problems interfere with the request or response. And if that happens, the request will sit there forever, never returning and never rendering a response to the HTTP request!

Timeout The Request

Because of the possibility of an endless wait-state and never responding to the HTTP request, the first place you need a timeout is the RabbitMQ request.

You need to gracefully handle a scenario where a response is not received – for any reason – and move on. Perhaps “move on” means showing some default set of information – or a small warning message to the user. Whatever it means, you need to have code that handles this scenario. 

To do that, Promises can be used effectively.

Start by extracting the Rabbus request into a separate method. This will be useful to keep the Express handler clean:

Next up, wrap the implementation of this extracted method with a new Promise (assuming ES6 promises, or use RSVP or another library).

Within this promise, you’ll need to set up a timer using setTimeout, to be responsible for “cancelling” the request. The actual “cancel” will come from resolving the promise with a “completed” flag set to false. If the request receives a response within the specified time, you can clearTimeout using the timer id received from setTimeout, and then resolve the promise with “completed” set to true, passing along the returned data as well.

Below the wrapping promise, you’ll want to immediately consume the promise using a callback function for both the resolution and rejection. If it resolves with “completed” set to true, fire the callback method with the data. Otherwise, call it with no parameters. If rejected, due to an error, fire the callback with the error.
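Putting those steps together, a minimal sketch could look like the code below. Note that `requestWithTimeout` and `getData` are hypothetical names, and `sendRequest` stands in for the extracted Rabbus request method; any function that takes a message and a node-style callback will do:

```javascript
// Wrap a request in a promise that "cancels" itself via a timeout.
// sendRequest is a hypothetical stand-in for the extracted Rabbus
// request method: any function(message, callback) works here.
function requestWithTimeout(sendRequest, message, timeoutMs) {
  return new Promise(function (resolve, reject) {
    // The timer "cancels" the request by resolving with completed: false.
    var timer = setTimeout(function () {
      resolve({ completed: false });
    }, timeoutMs);

    sendRequest(message, function (err, response) {
      // A late response after the timeout is harmless: a promise
      // can only settle once, so this second resolve is a no-op.
      clearTimeout(timer);
      if (err) {
        return reject(err);
      }
      resolve({ completed: true, data: response });
    });
  });
}

// Consume the promise immediately: call back with the data on success,
// with nothing on timeout, and with the error on rejection.
function getData(sendRequest, message, timeoutMs, done) {
  requestWithTimeout(sendRequest, message, timeoutMs)
    .then(function (result) {
      if (result.completed) {
        done(null, result.data);
      } else {
        done(); // timed out: caller shows a default or a warning
      }
    })
    .catch(function (err) {
      done(err);
    });
}
```

The Express handler then only deals with the `done` callback, deciding what to render when the data is missing.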

With this done, your request will wait for the specified amount of time before cancelling and moving on.

If the back-end code on the other side of RabbitMQ goes down, or is taking too long, or whatever, your users will still get the page loaded within a reasonable amount of time – they’ll just be short a little bit of information. You can handle that in a number of ways – showing a “could not load…” message, showing some default information, or ignoring it entirely, for example. 

But, if the back-end request handler goes down, what happens to all of the request messages that are being sent and not processed?

Timing Out The Request Messages

If your request handler goes down for a while, and a lot of requests come in, it’s not too much of a problem, right? You’re timing out the request and moving on. But if there are a lot of requests being made while the request handler is down, you’ll end up with a lot of messages piled up in your RabbitMQ queue. Normally this is a good thing – one of the many reasons you want to use a queue.

In the case of a cancelled request, however, this won’t be quite so good. 

Say you have a request handler go down … for an hour. What happens when the request handler comes back up and it finds 1,000+ messages sitting in its queue? Hopefully it will start processing them – that’s what good queue handling code should do! Except in this case, why process them at all? The code that made the request is no longer interested in the response – it moved on, long ago.

Don’t waste the valuable computing resources with these dead requests that no longer need to be processed. Instead, put a TTL (time to live) on the queue and drop the messages!

This is typically done in the queue declaration, which can be handled with Rabbus, again:
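As a sketch, a wascally-style topology object with a TTL on the queue might look like this (the queue name and TTL value here are illustrative assumptions, not the original configuration):

```javascript
// Wascally-style topology sketch: the queue name and the values are
// illustrative assumptions. The key piece is messageTtl, which tells
// RabbitMQ to drop any message that sits unconsumed for longer than
// the given number of milliseconds.
var settings = {
  queues: [
    {
      name: "request.queue",
      subscribe: true,
      // same order of magnitude as the requester-side timeout, so
      // requests nobody is waiting for anymore are never processed
      messageTtl: 3000
    }
  ]
};

module.exports = settings;
```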

Notice that the timeout (“messageTtl”) has been set to the same amount of time as the timeout above, for the requester. Having them synchronized, or set closely to each other, will help keep the system healthy by not processing messages whose responses no longer have any code waiting for them.

With that in place, you have an effective way to manage request timeout on both the requester side, and the messaging / request handler side.

On Temporal Messages

Messages and queues are there to provide reliability and stability in systems. They allow code to be run at some point in the future, make crashes easier to deal with, facilitate inter-process communication in a reliable manner, and more.

Often, messages need to be persistent and have some guarantee of being handled for the system to work properly, too. But, with request timeouts, there is real value for temporal messages – messages that exist for a short period of time, to facilitate some feature, but are not absolutely required to live and be handled. 

The ability to have temporal messaging is important for distributed systems, and opens another world of opportunity for good messaging architecture. 



Get Up To Speed w/ RabbitMQ and Node

If you’re looking at RabbitMQ and Node, you need to look at the RabbitMQ For Developers bundle. This series of screencasts, ebooks, interviews with messaging professionals and more, will get you up and running with RabbitMQ and Node, faster than any other resource set. You’ll learn from real-world experience and see a working implementation of the most common messaging patterns.

Get the RabbitMQ For Developers bundle, and get your messages flowing!

Categories: Blogs

XP Days Benelux, Mechelen, Belgium, December 3-4 2015

Scrum Expert - Mon, 11/02/2015 - 14:00
XP Days Benelux is a two-day conference dedicated to eXtreme Programming (XP) and all Agile software development approaches like Scrum or Kanban. It brings together Scrum practitioners from the Netherlands, Belgium and Luxembourg. Local and international Agile experts provide the presentations. In the agenda of the XP Days Benelux conference you can find topics like “Functional Programming and Test Driven Development”, “Agile transition and formal change management”, ...
Categories: Communities

Dancing with GetKanban (Using POLCA)

Xebia Blog - Mon, 11/02/2015 - 13:59

Very recently, POLCA got some attention on Twitter. I explained the potential and application of POLCA to knowledge work in my blog 'Squeeze more out of kanban with POLCA!' [Rij11] 4 years ago.

In this blog the GetKanban [GetKanban] game is played first by following the initial 'standard' rules for handling Work in Progress (WiP) limits and then by changing the rules of the game, inspired by POLCA (see [POLCA]).

The results show equal throughput between POLCA and non-overlapping WiP limits, with a smaller inventory size when applying WiP limits the POLCA way.

First, a short introduction to the GetKanban game is given, together with a description of the set-up and the basic results.

Second, a brief introduction to POLCA is given and the change of rules in the game is explained. Third, the set-up of the game using POLCA and the results are discussed.

Finally, a few words are spent on the team's utilization.

Simulation: GetKanban Game

The set-up with standard WiP limits is shown below. The focus is on a basic simulation of the complete 24 project days of only the regular work items. The expedite, fixed delivery date, and intangible items are left out. In addition, the events described on the event cards are ignored. The reason is to get a 'clean' simulation showing the effect of applying the WiP limits in a different manner.


Other policies taken from the game: a billing cycle of 3 days, replenishment of the 'Ready' column is allowed only at the end of project days 9, 12, 15, 18, and 21.

The result of running the game at the end of day 24 is shown below.

The picture shows the state of the board, lead time distribution diagram, control chart, and cumulative flow diagram.

From these it can be inferred that (a) 10 items are in progress, (b) throughput is 25 items in 24 days, (c) median of 9 days for the lead time of items from Ready to Deployed.


Interestingly, the control chart (middle chart) shows the average lead time dropping to 5-6 days in the last three days of the simulation. Since the game starts at day 9, this shows that it takes 12 days before the system settles into a new stable state with an average lead time of 5-6 days, compared to the 9 days at the beginning.

POLCA: New Rules

In POLCA (see [POLCA]) the essence is to make the WiP limit overlap. The 'O' of POLCA stands for 'Overlapping':

POLCA - Paired Overlapping Loops of Cards with Authorization

One of the characteristics differentiating POLCA from e.g. kanban based systems is that it is a combination of push & pull: 'Push when you know you can pull'.

WiP limits are set so as to support pushing work when you know it can subsequently be pulled. The set-up in the game for POLCA is as follows:


For clarity the loops are indicated in the 'expedite' row.

How do the limits work? Two additional rules are introduced:

Rule 1)
In the columns associated with each loop a limit on the number of work items is set. E.g. the columns 'Development - Done' and 'Test' together can only accommodate a maximum of 3 cards. Likewise, the columns underneath the blue and red loops have limits of 4 and 4 respectively.

Rule 2)
Work can only be pulled if a) the loop has capacity, i.e. it has fewer cards than the limit, and b) the next (overlapping) loop also has capacity.

These are illustrated with a number of examples that typically occur during the game:

Example 1: Four items in the 'Development - Done' column
No items are allowed to be pulled into 'Ready' because there is no capacity available in the blue loop (Rule 2).

Example 2: Two items in 'Test' & two items in 'Analysis - Done'
One item can be pulled into 'Development - In Progress' (Rule 1 and Rule 2).
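The two rules can be sketched as a small predicate. This is a sketch only: the loop names and limits are taken from the set-up described above, and the column-to-loop mapping is simplified for illustration.

```javascript
// Loop limits from the game set-up: the green loop ('Development - Done'
// + 'Test') holds at most 3 cards; blue and red hold at most 4 each.
const loopLimits = { green: 3, blue: 4, red: 4 };

// Rule 1 + Rule 2: work may be pulled into a loop only if that loop is
// under its limit AND the next (overlapping) loop is under its limit too.
function canPull(loop, nextLoop, cardsInLoop) {
  const hasCapacity = l => cardsInLoop[l] < loopLimits[l];
  return hasCapacity(loop) && (nextLoop == null || hasCapacity(nextLoop));
}

// Example 1 above: the blue loop is full (4 cards), so nothing may be
// pulled into 'Ready' even though the red loop itself still has room.
canPull("red", "blue", { red: 2, blue: 4, green: 2 }); // → false
```

The "push when you know you can pull" idea shows up in the second clause: capacity downstream is checked before work is released upstream.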

Results for POLCA

The main results of running the game for 24 days with the above rules are shown in the charts below.

The Control Chart shows that it takes roughly 6 days for all existing items to flow out of the system. The effect of the new rules (POLCA-style WiP limits) is seen starting from day 15.

Lead Time
On average the charts show a lead time of 3 days, starting from day 18. This is also clearly visible in the lead time distribution chart and in the narrow cumulative flow diagram.

The number of items produced by the project is 24 items in 24 days. This is comparable to the throughput as measured using the standard set of rules.

Work In Progress
The total number of items in progress is only 3 to 4 items. This is less than half of the items seen in the simulation run using the standard set of rules.

polca-new-3 polca-new-1 polca-new-2

Note: The cumulative flow diagram clearly shows the 3-day billing cycle and replenishment (step-wise increments of the black and purple lines).


As described in my blog 'One change at a time' [Rij15], getting feedback from improvements/changes that affect the flow of the system takes some time before the system settles into the new stable state.

With POLCA it is expected that this learning cycle can be shortened. In running the game, the control charts show that with the standard rules it takes approximately 12 days before the system reaches the stable state, whereas with the POLCA set of rules this is reached in half the time.

Results for POLCA - Continuous Replenishment

As described above, until now we have adhered to the billing cycle of 3 days, which also allows for replenishment every 3 days.

What happens if replenishment is allowed whenever possible? The results are shown in the charts below.

polca2-1 polca2-2 polca2-3

The cumulative flow diagram shows the same throughput, namely 24 items over a period of 24 days. Work in progress is larger because work is pulled in earlier instead of at the end of every third day.

What is interesting is that the Control Chart shows a large variation in lead time: from 3 to 6 days. What I noted while playing the game is that at regular times 3 to 4 items are allowed to be pulled into 'Ready'. These would sit for some time in 'Ready' and then suddenly be completed all the way to 'Ready for Deployment'. Then another batch of 3 to 4 items would be pulled into 'Ready'.
This behavior is corroborated by the Control Chart (staircase pattern). The larger variation is shown in the Lead Time Distribution Chart.

What is the reason for this? My guess is that the limit of 4 on the red loop is too large. When replenishment was only allowed at days 9, 12, 15, ..., this effectively meant a lower limit on the red loop.
Tuning the limits is important for establishing a certain cadence. Luckily this behavior can be seen in the Control Chart.


In the GetKanban game, specialists in the team are represented by colored dice: green for testers, blue for developers, and red for analysts. Effort spent is simulated by throwing the dice. Besides spending the available effort in their own speciality, it can also be spent on other specialities, in which case the effort to spend is reduced.

During the game it may happen that utilization is less than 100%:

  1. Not spending effort in the speciality, e.g. assigning developers to do test work.
  2. No work item to spend the effort on because of WiP limits (not allowed to pull work).

The picture below depicts the utilization as happened during the game: on average a utilization of 80%.



In this blog I have shown how the POLCA style of setting WiP limits works, how overlapping loops of limits help in pulling work quickly through the system, and its positive effect on the team's learning cycle.

In summary, POLCA allows for

  • Shorter lead times
  • Lower work in progress, enabling a faster learning cycle

Tuning of the loop limits seems to be important for establishing a regular cadence. A 'staircase' pattern in the Control Chart is a strong indication that loop limits are not optimal.


[GetKanban] GetKanban Game:

[Rij11] Blog: Squeeze more out of kanban with POLCA!

[Rij15] Blog: One change at a time



Categories: Companies

How to Quantify Cost of Delay

Find out how quantifying Cost of Delay helps organizations improve prioritization, make trade-off decisions and create a...

The post How to Quantify Cost of Delay appeared first on Blog | LeanKit.

Categories: Companies

Overview of Agile and Scrum Certifications

Ben Linders - Sun, 11/01/2015 - 23:37

The agile and lean tool Agile/Scrum Certification provides an overview of the most important agile and Scrum certifications that are being offered by different institutes.

There are many different certificates for agile, which makes it difficult to choose. In addition, an objective assessment of the whole choice of training institutes is difficult as well. This tool provides information to help you decide which exam would be most suitable for you.

The overview … Continue reading →

Categories: Blogs

Why the especially terrible "Agile" you hear about is misled and wrong

There have been claims of the terribleness of Agile from the birth of the Agile Manifesto in 2001 and before then, when it was more the terribleness of Extreme Programming.

Because of this, most of the "new" arguments you hear are not actually that new but typically reflect the same misunderstanding (whether accidental or deliberate) of what XP / Agile is about.  This is not to say that all new arguments have been made before nor that every argument whether old or new is completely devoid of merit.  However, it's rather tiresome and perhaps not too useful to respond to the exact same objections that have already been answered.

To that end, let's try to capture some of the patterns of misunderstanding and explain why they are misled and wrong.
Patterns of Agile misunderstanding

Categories: Blogs

The power of map and flatMap of Swift optionals

Xebia Blog - Sun, 11/01/2015 - 02:22

Until recently, I always felt like I was missing something in Swift: something that would make working with optionals a lot easier. Just a short while ago I found out that the thing I was missing already exists. I'm talking about the map and flatMap functions of Swift optionals (not the Array map function). Perhaps it's because they're not mentioned in the optionals section of the Swift guide, and because I haven't seen them in any other samples or tutorials. After asking around, I found out that some of my fellow Swift programmers also didn't know about them. Since I find them an amazing Swift feature that often makes your Swift code a lot more elegant, I'd like to share my experiences with them.

If you didn't know about the map and flatMap functions either, you should keep on reading. If you did already know about them, I hope to show some good, real and useful samples of their usage that perhaps you didn't think of yet.

What do map and flatMap do?

Let me first give you a brief example of what the functions do. If you're already familiar with this, feel free to skip ahead to the examples.

The map function transforms an optional into another type in case it's not nil, and otherwise it just returns nil. It does this by taking a closure as parameter. Here is a very basic example that you can try in a Swift Playground:

var value: Int? = 2
var newValue = value.map { $0 * 2 }
// newValue is now 4

value = nil
newValue = value.map { $0 * 2 }
// newValue is now nil

At first, this might look odd because we're calling a function on an optional. And don't we always have to unwrap it first? In this case not. That's because the map function is a function of the Optional type and not of the type that is wrapped by the Optional.

The flatMap is pretty much the same as map, except that the closure in map is not allowed to return nil, while the closure of flatMap can return nil. Let's see another basic example:

var value: Double? = 10
var newValue: Double? = value.flatMap { v in
    if v < 5.0 {
        return nil
    }
    return v / 5.0
}
// newValue is now 2

newValue = newValue.flatMap { v in
    if v < 5.0 {
        return nil
    }
    return v / 5.0
}
// now it's nil

If we would try to use map instead of flatMap in this case, it would not compile.

When to use it?

In many cases where you use a ternary operator to check if an optional is not nil, and then return some value if it's not nil and otherwise return nil, it's probably better to use one of the map functions. If you recognise the following pattern, you might want to go through your code and make some changes:

var value: Int? = 10
var newValue = value != nil ? value! + 10 : nil 
// or the other way around:
var otherValue = value == nil ? nil : value! + 10

The force unwrapping should already indicate that something is not quite right. So instead use the map function shown previously.

To avoid the force unwrapping, you might have used a simple if let or guard statement instead:

func addTen(value: Int?) -> Int? {
  if let value = value {
    return value + 10
  }
  return nil
}

func addTwenty(value: Int?) -> Int? {
  guard let value = value else {
    return nil
  }
  return value + 20
}

This still does exactly the same as the ternary operator and thus is better written with a map function.

Useful real examples of using the map functions

Now let's see some real examples of when you can use the map functions in a smart way that you might not immediately think of. You get the most out of them when you can immediately pass in an existing function that takes the type wrapped by the optional as its only parameter. In all of the examples below I will first show it without a map function and then again rewritten with a map function.

Date formatting

Without map:

var date: NSDate? = ...
var formatted: String? = date == nil ? nil : NSDateFormatter().stringFromDate(date!)

With map:

var date: NSDate? = ...
var formatted: String? = date.map(NSDateFormatter().stringFromDate)
Segue from cell in UITableView

Without map:

func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if let cell = sender as? UITableViewCell, let indexPath = tableView.indexPathForCell(cell) {
    (segue.destinationViewController as! MyViewController).item = items[indexPath.row]
  }
}

With map:

func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if let indexPath = (sender as? UITableViewCell).flatMap(tableView.indexPathForCell) {
    (segue.destinationViewController as! MyViewController).item = items[indexPath.row]
  }
}
Values in String literals

Without map:

func ageToString(age: Int?) -> String {
    return age == nil ? "Unknown age" : "She is \(age!) years old"
}

With map:

func ageToString(age: Int?) -> String {
    return age.map { "She is \($0) years old" } ?? "Unknown age"
}

Localized Strings

Without map:

let label = UILabel()
func updateLabel(value: String?) {
  if let value = value {
    label.text = String.localizedStringWithFormat(
      NSLocalizedString("value %@", comment: ""), value)
  } else {
    label.text = nil
  }
}

With map:

let label = UILabel()
func updateLabel(value: String?) {
  label.text = value.map { 
    String.localizedStringWithFormat(NSLocalizedString("value %@", comment: ""), $0) 
  }
}
Enum with rawValue from optional with default

Without map:

enum State: String {
    case Default = ""
    case Cancelled = "CANCELLED"

    static func parseState(state: String?) -> State {
        guard let state = state else {
            return .Default
        }
        return State(rawValue: state) ?? .Default
    }
}

With map:

enum State: String {
    case Default = ""
    case Cancelled = "CANCELLED"

    static func parseState(state: String?) -> State {
        return state.flatMap(State.init) ?? .Default
    }
}
Find item in Array

With Item like:

struct Item {
    let identifier: String
    let value: String
}

let items: [Item]

Without map:

func find(identifier: String) -> Item? {
    if let index = items.indexOf({$0.identifier == identifier}) {
        return items[index]
    }
    return nil
}

With map:

func find(identifier: String) -> Item? {
    return items.indexOf({$0.identifier == identifier}).map({items[$0]})
}
Constructing objects with json like dictionaries

With a struct (or class) like:

struct Person {
    let firstName: String
    let lastName: String

    init?(json: [String: AnyObject]) {
        if let firstName = json["firstName"] as? String, let lastName = json["lastName"] as? String {
            self.firstName = firstName
            self.lastName = lastName
        } else {
            return nil
        }
    }
}

Without map:

func createPerson(json: [String: AnyObject]) -> Person? {
    if let personJson = json["person"] as? [String: AnyObject] {
        return Person(json: personJson)
    }
    return nil
}

With map:

func createPerson(json: [String: AnyObject]) -> Person? {
    return (json["person"] as? [String: AnyObject]).flatMap(Person.init)
}

The map and flatMap functions can be incredibly powerful and make your code more elegant. Hopefully these examples will help you spot the situations where using them will really benefit your code.

Please let me know in the comments if you have similar smart examples of map and flatMap usages and I will add them to the list.

Categories: Companies

Hadoop: HDFS – java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.<init>(Ljava/util/zip/Checksum;II)V

Mark Needham - Sun, 11/01/2015 - 01:58

I wanted to write a little program to check that one machine could communicate with an HDFS server running on the other, and adapted some code from the Hadoop wiki as follows:

package org.playground;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

public class HadoopDFSFileReadWrite {

    static void printAndExit(String str) {
        System.err.println( str );
        System.exit(1);
    }

    public static void main (String[] argv) throws IOException {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/Users/markneedham/Downloads/core-site.xml"));
        FileSystem fs = FileSystem.get(conf);

        Path inFile = new Path("hdfs://");
        Path outFile = new Path("hdfs://" + System.currentTimeMillis());

        // Check if input/output are valid
        if (!fs.exists(inFile))
            printAndExit("Input file not found");
        if (!fs.isFile(inFile))
            printAndExit("Input should be a file");
        if (fs.exists(outFile))
            printAndExit("Output already exists");

        // Read from and write to new file
        byte buffer[] = new byte[256];
        try ( FSDataInputStream in = fs.open( inFile );
              FSDataOutputStream out = fs.create( outFile ) ) {
            int bytesRead = 0;
            while ( (bytesRead = in.read( buffer )) > 0 ) {
                out.write( buffer, 0, bytesRead );
            }
        } catch ( IOException e ) {
            System.out.println( "Error while copying file" );
        }
    }
}

I initially thought I only had the following in my POM file:
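The POM snippet did not survive syndication. A plausible reconstruction is sketched below; the artifact name and version here are assumptions for illustration, not the author's original file:

```xml
<!-- Hypothetical reconstruction: version is an assumption -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.0</version>
</dependency>
```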


But when I ran the script I got the following exception:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.<init>(Ljava/util/zip/Checksum;II)V
	at org.apache.hadoop.hdfs.DFSOutputStream.<init>(
	at org.apache.hadoop.hdfs.DFSOutputStream.<init>(
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(
	at org.apache.hadoop.hdfs.DFSClient.create(
	at org.apache.hadoop.hdfs.DFSClient.create(
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(
	at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.apache.hadoop.fs.FileSystem.create(
	at org.playground.HadoopDFSFileReadWrite.main(
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at com.intellij.rt.execution.application.AppMain.main(

From following the stack trace I realised I’d made a mistake and had accidentally pulled in a dependency on hadoop-hdfs 2.4.1. If we didn’t have the hadoop-hdfs dependency at all, we’d see this error instead:

Exception in thread "main" No FileSystem for scheme: hdfs
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
	at org.apache.hadoop.fs.FileSystem.createFileSystem(
	at org.apache.hadoop.fs.FileSystem.access$200(
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
	at org.apache.hadoop.fs.FileSystem$Cache.get(
	at org.apache.hadoop.fs.FileSystem.get(
	at org.apache.hadoop.fs.FileSystem.get(
	at org.playground.HadoopDFSFileReadWrite.main(
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at com.intellij.rt.execution.application.AppMain.main(

Now let’s add the correct version of the dependency and make sure it all works as expected:
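The corrected POM snippet was also lost in syndication. The shape of the fix, with the version number being an assumption, is to align hadoop-hdfs with the hadoop-common version on the classpath:

```xml
<!-- Hypothetical reconstruction: version is an assumption; the point
     is that hadoop-hdfs must match the hadoop-common version. -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.0</version>
</dependency>
```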


When we run that a new file is created in HDFS on the other machine with the current timestamp:

$ date +%s000
$ hdfs dfs -ls
-rw-r--r--   3 markneedham supergroup       9249 2015-11-01 00:13 output-1446337098257
Categories: Blogs

Android Resource Configuration override and Large Text Mode

Xebia Blog - Sat, 10/31/2015 - 14:15

In Android, the resource Configuration dictates what resources and assets are selected. The system populates a default configuration to match your device and settings (screen size, device orientation, language). Sometimes, you need to deviate from these defaults. Since API 17 you can use applyOverrideConfiguration(Configuration) to specify an alternative resource config. The normal place to do so is in the attachBaseContext(Context) method of your Activity.

public class MainActivity extends Activity {

    @Override
    protected void attachBaseContext(Context newBase) {
        super.attachBaseContext(newBase);

        final Configuration override = new Configuration();
        override.locale = new Locale("nl", "NL");
        applyOverrideConfiguration(override);
    }
}


Here's what that looks like:

Screenshot Screenshot

Unfortunately, there's a catch.

Android has a "Large Text" setting in its accessibility options (and in some cases a different text size setting in the display options). If you use the overrideConfiguration method to set your own resource configuration, you will wipe out the Large Text preference, hurting your accessibility support. This problem is easily overlooked, and luckily, easily fixed.

Screenshot Screenshot

The large fonts setting works by changing the Configuration.fontScale attribute, which is a public float. This works with the scaled density-independent pixels (sp's) that you use to define fontSize attributes. All sp dimensions have this fontScale multiplier applied. My Nexus 5 has two font size settings, normal at 1.0 and large at 1.3. The Nexus 5 emulator image has four, and many Samsung devices have seven different font sizes you can choose from.

When you set the override configuration, the new Configuration object has its fontScale set to 1.0f, thereby breaking the large fonts mode. To fix this problem, you simply have to copy the current fontScale value from the base context. This is best done using the copy constructor, which will also account for any other properties that come with the same issue.

public class MainActivity extends Activity {

    @Override
    protected void attachBaseContext(Context newBase) {
        super.attachBaseContext(newBase);

        final Configuration override = new Configuration(
                // Copy the original configuration so it isn't lost.
                newBase.getResources().getConfiguration());
        override.locale = new Locale("nl", "NL");
        applyOverrideConfiguration(override);

        // BTW: You can also access the fontScale value using Settings.System:
        // Settings.System.getFloat(getContentResolver(), Settings.System.FONT_SCALE, 1.0f);
    }
}


The app now works as intended, with accessibility support intact.


Long story short: when you use applyOverrideConfiguration, always test your app with the Large Fonts accessibility setting enabled. Be sure to copy the original Configuration in your new Configuration's copy constructor, or use the Settings.System.FONT_SCALE property to retrieve the font scale separately.

Categories: Companies

Be The Change To Create Change

NetObjectives - Sat, 10/31/2015 - 08:57
“Be the change that you wish to see in the world.” ― Mahatma Gandhi I have heard many versions of this quote and have always thought I knew its meaning.  I thought it meant that one must look to oneself for action, that one must take responsibility for change and not expect or wait for others to do it.  While I still believe this is part of the message, there is another one that I have been...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Gatineau-Ottawa Agile Tour 2015 schedule confirmed!

Agile Ottawa - Fri, 10/30/2015 - 20:26
(voir la version française plus bas) Hi all! The Gatineau-Ottawa Agile Tour 2015 program has been confirmed! We’ll have 2 great keynotes from Richard Sheridan and Gil Broza, as well as 4 tracks of 4 sessions each, with a variety … Continue reading →
Categories: Communities

It's a Trap! Agile Lessons from Star Wars

Rally Agile Blog - Fri, 10/30/2015 - 16:00

Recently I experienced what was one of the saddest moments of my career when, while re-watching “Return of the Jedi” for the 116th time, I realized I was coaching a company that was basically the Empire.

Surely you remember the movie's very first scene? Darth Vader arrives on the Death Star to help put the long-delayed project back on schedule.

Here’s the conversation that ensues between the commander, our de facto Death Star project manager, and Vader, our business stakeholder. Swap out the titles and you can almost imagine it taking place in a conference room in a company not so far, far away.

Commander: “I assure you Lord Vader, my men are working as fast as they can.”

Vader: “Perhaps I can find new ways to motivate them.”

Commander: “I tell you that this station will be operational as planned.”

Vader: “The Emperor does not share your optimistic appraisal of the situation.”

Commander: “But he asks the impossible … I need more men.”

Vader: “Then perhaps you can tell him when he arrives.”

Commander: “The Emperor’s coming here?”

Vader: "That is correct, Commander. And he is most displeased with your apparent lack of progress.”

Commander: “We shall double our efforts!”

Vader: “I hope so, Commander, for your sake. The Emperor is not as forgiving as I am.”

You know you’re in trouble when Lord Vader suggests he’s the more forgiving of your bosses!

If you’re the commander, then, at this point, you are very worried. Because you already know you don’t have the people to succeed. (You may think throwing “more men” at the problem will make it go away, but Brooks’ Mythical Man Month tells us that adding manpower to a late complex engineering project only makes it later.)

And you know that if you fail, you’ll get force choked.

Fail with a Culture of Fear

This is what you call a culture of fear. It pains me to see this in my coaching, because the problem with a culture of fear isn’t just that it doesn’t work. Nothing shuts down innovation, motivation and collaboration faster than a fear-based culture.

Fear can be persuasive in the short term, but with this tactic you’re going to lose a lot of people over the long term. People will make promises they can’t keep and say things are going well, even when they aren't, to avoid punishment. Fear will exacerbate work delays as people obscure the true status of the work, and, more importantly, it will result in a loss of your workers as they get force choked out of the organization.

Dan Pink has pointed out that extrinsic carrot-and-stick motivation was designed for 20th-century work. In the 21st century, people feel trapped by organizations that use this approach: people often know the best thing to do to get the work done, but they don’t feel empowered to do it because they’re stuck in this systematized culture of fear. 

Empower Your Teams

In a culture of fear, teams don’t feel empowered. Slap their hands a few times and they’ll stop trying. Eventually you’ll end up with a team that waits for directions, that waits for someone else to make decisions for them rather than moving forward. As the president of 3M said,

If you put fences around people, you get sheep.

From a manager’s point of view, this can actually seem more frustrating than it does compliant. I’ve heard managers complain that “my team won’t do anything unless I tell them to do it,” without acknowledging (ironically) that they’ve created this culture by not trusting their teams, or by punishing them for straying outside the fences.

We often think of the military as the epitome of command-and-control management, but in reality even the military recognizes the importance of decentralized decision-making. A military commander might radio his troops and tell them to “take the hill,” but he doesn’t need to tell them HOW to take the hill: he lets THEM decide the best way to do it, based on local information and the situation on the ground. In a fear-based culture, if you tell your team to take the hill, they may very well stand still and say, “How?”

Adapt to Take Out the Death Star

Think about a boss in your life with whom you worked really well. I bet that person had trust and faith in you to get your job done, and had your back if you needed help.

Now think about the Battle of Endor, toward the end of “Return of the Jedi.” Admiral Ackbar has assembled the entire rebel alliance fleet and they’re approaching the Death Star, intending to take it out and save the galaxy. But as they get closer they realize that the Death Star’s shield hasn’t yet been deactivated by the team on the ground with Han Solo and Princess Leia. Then they see the fleet of imperial star destroyers.

At first Ackbar wants to retreat, but Lando Calrissian lobbies him to change tactics—engage the star destroyers in battle, despite being outnumbered, so that Han and Leia have more time to work on the shield. [SPOILER ALERT] Ackbar assents: the fleet spends a little time picking off TIE fighters and Star Destroyers, Solo and Leia get the shield down, and Lando and Wedge Antilles successfully destroy the Death Star. Yee ha!

To accomplish an epic feat like this you’ve got to collaborate, communicate, respond to feedback and trust your colleagues to do their jobs.

Would Vader have listened to advice, like Ackbar did? Would the Emperor have trusted his team on the ground to pull through, even though they were late? Heck, would you dare to give either one of them feedback on their plan, in the heat of battle? I didn’t think so.

But let’s be honest: not every project is the Death Star. In fact, to act like every project is mission-critical is disingenuous—not to mention demoralizing. And when you do have that project that feels like life or death, everyone already knows: you should all be in alignment already, with everyone on board, eager to work together to get the job done.

Mind the Trap

Ackbar was right to call out the trap. Your trap is thinking that you can have control over everything, and thinking that having control would improve your circumstances. In truth, especially at large organizations, you can’t possibly see everything that’s coming—much less control it.

Agile organizations avoid the trap by reflexively responding to change, delegating decision-making closer to the ground, improving their visibility into the work and making adjustments based on feedback.

To become an agile organization, you have to start by changing your leadership style. Managers: you can coach all you want at the team level, but if your executives still go around acting like Emperor Palpatine, you’ll end up with no change to the business and no improvement in your results.

You also have to change your system. Sure, everyone may have a little dark side in them, but most people aren’t genuinely evil—they’re just a product of the system they’re in. In the same way that carrot-and-stick systems aren’t good motivators for today’s style of work, fear-based systems don’t produce positive long-term results. Change the system, however, and you’ll change the behaviors of the people in that system. Take it from a scoundrel like Han Solo, who, adopted into a system of people collaborating for a better universe, turned himself into a hero.


Use the Force

We know that fear is the path to the dark side. Yoda tells us that fear leads to anger, anger leads to hate, and hate leads to … suffering. Let’s put an end to cultures of fear and suffering! It’s no way to work, and besides, it’s not going to get your organization where it wants to go. (Unless you want your competition to fly into your core reactor and blow you to smithereens.)

Do not give in to the dark side! You have the power at your disposal to change. Before “The Force Awakens” later this year, I hope you’ll join me on a journey towards greater business agility.

Todd Sheridan
