
Feed aggregator

The Grumpy Scrum Master

Agile Tools - Thu, 09/18/2014 - 06:54


“Going against men, I have heard at times a deep harmony
thrumming in the mixture, and when they ask me what
I say I don’t know. It is not the only or the easiest
way to come to the truth. It is one way.” – Wendell Berry

I looked in the mirror the other day and guess what I saw? The grumpy scrum master. He comes by sometimes and pays me a visit. Old grumpy looked at me and I looked at him and together we agreed that perhaps, just this one time, he just might be right.

We sat down and had a talk. It turns out he’s tired and cranky and seen this all before. I told him I can relate. We agreed that we’ve both done enough stupid to last a couple of lifetimes. No arguments there. He knows what he doesn’t like – me too! After a little debate, we both agreed we don’t give a damn what you think.

So we decided it was time to write a manifesto.

We grumps have come to value:

Speaking our mind over listening to whiners

Working hard over talking about it

Getting shit done over following a plan

Disagreeing with you over getting along

That is, while the items on the right are a total waste of time, the stuff on the left is much more gratifying.


Filed under: Coaching, Humor, Scrum Tagged: bad attitude, grumpy, Humor, Scrum, Scrum Master
Categories: Blogs

Become high performing. By being happy.  

Xebia Blog - Thu, 09/18/2014 - 04:59

The summer holidays are over. Fall is coming. Like the start of every new year, it's a good moment for new inspiration.

Recently, I went to the Boston area twice for a client of Xebia. There I met (I dislike the word "assessments"...) a number of experienced Scrum teams. They had an excellent understanding of Scrum, but were not able to convert this into excellent performance. In fact, they were somewhat frustrated, and their performance was slowly declining.

So: great teams, great team members, agile processes running smoothly, and still not a single winning team. In my opinion, that left only one option: a lack of spirit. Spirit is the fertilizer of Scrum, and in fact of every framework, methodology and innovation. But how do you boost spirit?

Until a few years ago, I would "just" organize team-building sessions to boost this, in parallel with fixing or escalating the root causes. Noble, but far from effective. It's much more about mindset and happiness, and about taking your own responsibility there. Let me explain this a little further.

These are definitely awkward times: terrible wars and epidemics we can no longer turn our backs on, an economic system that barely survives, an ever-accelerating and highly demanding society. In all of this we have to find "time" for our friends, family, ourselves, and our job or study. Those last ones are essential to regain balance in an increasingly depressing world. But how?

One of the most important building blocks of the agile mindset, and of life, is happiness. Happiness is the fuel of acceleration and success. But what is happiness? Happiness is that ultimate moment when you're not thinking but enjoying the moment, forgetting the world around you. Craftsmanship, for example, will do this to you: losing track of time while exercising the craft you love.

But too often we prevent ourselves from being happy. Why should I be happy in this crazy world? With this mentality you're held hostage by your worrying mind and ignore the ultimate state you were born in: pure, happy, ready to explore the world (and make mistakes!). It's not a bad thing to be egocentric sometimes and to switch off your dominant mind now and then. Every human being has the state of mind and the ability to do this. But we do it too rarely.

On the other hand, it's also not a bad thing to be angry, frightened or sad sometimes. These emotions will help you enjoy your moments of happiness more. But often your mind will resist them. They are perceived as a sign of weakness or as something negative you should avoid. A wrong assumption: the harder you try to resist these emotions, the longer they will stay in your system and prevent you from being happy.

Aware of the mechanisms explained above, you'll be happier, more productive and better company for your family, friends and colleagues. Parties will no longer be a forced way of trying to create happiness, but a celebration of personal responsibility, success and happiness.

Categories: Companies

How to create an Agile Burn-Up Graph in Google Docs - Kane Mar - Wed, 09/17/2014 - 22:01

A Burn-Up graph is simply a stacked graph showing the total amount of work the team has in their product backlog over a number of Sprints. I've used a variety of different Agile Burn-Up graphs over the years. Here's one of my favourites:


Agile Burn-Up Graph


I created this with Excel while working with an insurance company based in Mayfield, Ohio. In this article I'll show you how to create something similar using Google Docs.

Understanding the Burn-up Graph

This graph (above) shows the total amount of work in the product backlog (top line of the graph), the amount of work completed (yellow) and the amount of work remaining (red and blue). The amount of work remaining is divided into estimated work (red) and un-estimated work (blue), which we guessed at using a very coarse scale. At the start you can see the total amount of work on the backlog increase until the fourth Sprint, as indicated by the rising top line of the graph.

After the fourth Sprint the team decided that they needed to start breaking down the un-estimated work into small User Stories and so you can see an increase in the red area of the graph and a decline in the blue. We continued to complete work, so the yellow area continued to grow.

By Sprint 12 we had completely broken down all the large bodies of work and had a well refined backlog.
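The bookkeeping behind such a graph is just running totals. Here is a minimal sketch (Python, with hypothetical numbers; the variable names are illustrative, not part of the original spreadsheet) of how the stacked series could be derived from per-Sprint data:

```python
# Illustrative burn-up bookkeeping with hypothetical numbers.
# Per Sprint we record: points completed that Sprint, plus a snapshot of the
# remaining backlog split into estimated and un-estimated (coarsely guessed) work.
completed_per_sprint  = [10, 12, 8, 15]
estimated_remaining   = [40, 45, 50, 70]  # red area: refined User Stories left
unestimated_remaining = [60, 60, 55, 30]  # blue area: large, coarse guesses left

# Yellow area: cumulative completed work, which only ever grows.
completed = []
total_done = 0
for done in completed_per_sprint:
    total_done += done
    completed.append(total_done)

# Top line of the graph: everything known, done + estimated + un-estimated.
total_scope = [c + e + u for c, e, u in
               zip(completed, estimated_remaining, unestimated_remaining)]

print(completed)    # [10, 22, 30, 45]
print(total_scope)  # [110, 127, 135, 145] -- scope rises while work is added
```

Plotting `completed`, `estimated_remaining` and `unestimated_remaining` as a stacked area chart reproduces the shape described above: a rising top line early on, then red growing at blue's expense as large items are broken down.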

Creating the Graph in Google Spreadsheets

The Google graph that I've created is a little bit simpler than the graph above. It shows the total amount of work in the product backlog, the total amount of work added to the product backlog, and the total amount of work completed. You can get the Google Spreadsheet document to create this graph here.

This is what it looks like:


Agile Product Burn-up Graph


The spreadsheet contains two tabs. The first tab contains the data necessary for the graph, and the second tab contains the graph. To start using this graph,

  1. Make a copy of the Google Spreadsheet.
  2. Enter the total of the team's estimates in the product backlog into the first column of Series A.
  3. Thereafter, all you need to record is the total of the team's estimates completed at the end of each Sprint, and
  4. The total of the team's estimates added to the Product Backlog (by the Product Owner) during the Sprint.
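The arithmetic behind those three series is simple running addition; a minimal sketch (Python, hypothetical numbers, illustrative names) might look like this:

```python
# Hypothetical inputs matching the three series of the Google Sheets version.
initial_backlog = 100                    # team's total estimate up front
added_per_sprint = [0, 10, 5, 0]         # points the Product Owner adds per Sprint
completed_per_sprint = [12, 10, 14, 11]  # points the team completes per Sprint

total_in_backlog, work_added, work_completed = [], [], []
running_added = running_done = 0
for added, done in zip(added_per_sprint, completed_per_sprint):
    running_added += added
    running_done += done
    total_in_backlog.append(initial_backlog + running_added)  # top line
    work_added.append(running_added)
    work_completed.append(running_done)

print(total_in_backlog)  # [100, 110, 115, 115]
print(work_completed)    # [12, 22, 36, 47]
```

The gap between `total_in_backlog` and `work_completed` is the remaining work; when the two lines meet, the backlog is done.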


Product Burn-up Graph Google Spreadsheet


You can get the Google Spreadsheet document to create this graph here.

Categories: Blogs

Container Usage Guidelines

Jimmy Bogard - Wed, 09/17/2014 - 21:25

Over the years, I’ve used and abused IoC containers. While the different tools have come and gone, I’ve settled on a set of guidelines on using containers effectively. As a big fan of the Framework Design Guidelines book and its style of “DO/CONSIDER/AVOID/DON’T”, I tried to capture what has made me successful with containers over the years in a series of guidelines below.

Container configuration

Container configuration typically occurs once at the beginning of the lifecycle of an AppDomain, creating an instance of a container as the composition root of the application and configuring any framework-specific service locators. StructureMap combines scanning for convention-based registration with Registries for component-specific configuration.

X AVOID scanning an assembly more than once.

Scanning is somewhat expensive, as scanning involves passing each type in an assembly through each convention. A typical use of scanning is to target one or more assemblies, find all custom Registries, and apply conventions. Conventions include generics rules, matching common naming conventions (IFoo to Foo) and applying custom conventions. A typical root configuration would be:

var container = new Container(cfg =>
    cfg.Scan(scan => {
        scan.TheCallingAssembly();     // target assemblies to scan, once
        scan.LookForRegistries();      // find all custom Registries
        scan.WithDefaultConventions(); // match IFoo to Foo
    }));

Component-specific configuration is then separated out into individual Registry objects, instead of mixed with scanning. Although it is possible to perform both scanning and component configuration in one step, separating component-specific registration in individual registries provides a better separation of conventions and configuration.

√ DO separate configuration concerning different components or concerns into different Registry classes.

Individual Registry classes contain component-specific registration. Prefer smaller, targeted Registries, organized around function, scope, or component. Organizing all container configuration for a single 3rd-party component into one Registry makes it easy to view and modify all configuration for that component:

public class NHibernateRegistry : Registry {
    public NHibernateRegistry() {
        For<Configuration>().Singleton().Use(c => new ConfigurationFactory().CreateConfiguration());
        For<ISessionFactory>().Singleton().Use(c => c.GetInstance<Configuration>().BuildSessionFactory());
        For<ISession>().Use(c => {
            var sessionFactory = c.GetInstance<ISessionFactory>();
            var orgInterceptor = new OrganizationInterceptor(c.GetInstance<IUserContext>());
            return sessionFactory.OpenSession(orgInterceptor);
        });
    }
}

X DO NOT use the static API for configuring or resolving.

Although StructureMap exposes a static API in the ObjectFactory class, it is considered obsolete. If a static instance of a composition root is needed for 3rd-party libraries, create a static instance of the composition root Container in application code.

√ DO use the instance-based API for configuring.

Instead of using ObjectFactory.Initialize and exposing ObjectFactory.Instance, create a Container instance directly. The consuming application is responsible for determining the lifecycle/configuration timing and exposing container creation/configuration as an explicit function allows the consuming runtime to determine these (for example, in a web application vs. integration tests).

X DO NOT create a separate project solely for dependency resolution and configuration.

Container configuration belongs in applications requiring those dependencies. Avoid convoluted project reference hierarchies (i.e., a “DependencyResolution” project). Instead, organize container configuration inside the projects needing them, and defer additional project creation until multiple deployed applications need shared, common configuration.

√ DO include a Registry in each assembly that needs dependencies configured.

In the case where multiple deployed applications share a common project, include inside that project container configuration for components specific to that project. If the shared project requires convention scanning, then a single Registry local to that project should perform the scanning of itself and any dependent assemblies.

X AVOID loading assemblies by name to configure.

Scanning allows adding assemblies by name, e.g. scan.Assembly("MyAssembly"). Since assembly names can change, reference a specific type in the assembly to be registered instead (for example with scan.AssemblyContainingType<SomeType>()).
Lifecycle configuration

Most containers allow defining the lifecycle of components, and StructureMap is no exception. Lifecycles determine how StructureMap manages instances of components. By default, instances for a single request are shared. Ideally, only Singleton instances and per-request instances should be needed. There are cases where a custom lifecycle is necessary, to scope a component to a given HTTP request (HttpContext).

√ DO use the container to configure component lifecycle.

Avoid creating custom factories or builder methods for component lifecycles. Your custom factory for building a singleton component is probably broken, and lifecycles in containers have undergone extensive testing and usage over many years. Additionally, building factories solely for controlling lifecycles leaks implementation and environment concerns to services consuming lifecycle-controlled components. In the case where instantiation needs to be deferred or lifecycle needs to be explicitly managed (for example, instantiating in a using block), depending on a Func<IService> or an abstract factory is appropriate.

√ CONSIDER using child containers for per-request instances instead of HttpContext or similar scopes.

Child/nested containers inherit configuration from a root container, and many modern application frameworks include the concept of creating scopes for requests. Web API in particular creates a dependency scope for each request. Instead of using a lifecycle, individual components can be configured for an individual instance of a child container:

public IDependencyScope BeginScope() {
    IContainer child = this.Container.GetNestedContainer();
    var session = new ApiContext(child.GetInstance<IDomainEventDispatcher>());
    var resolver = new StructureMapDependencyResolver(child);
    var provider = new ServiceLocatorProvider(() => resolver);
    child.Configure(cfg => {
        cfg.For<ApiContext>().Use(session);              // plausible registrations;
        cfg.For<ServiceLocatorProvider>().Use(provider); // the original elides this body
    });
    return resolver;
}

Since components configured for a child container are transient for that container, child containers provide a mechanism to create explicit lifecycle scopes configured for that one child container instance. Common applications include creating child containers per integration test, MVVM command handler, web request etc.

√ DO dispose of child containers.

Containers contain a Dispose method, so if the underlying service locator extensions do not dispose directly, dispose of the container yourself. Containers, when disposed, will call Dispose on any non-singleton component that implements IDisposable. This will ensure that any resources potentially consumed by components are disposed properly.
Component design and naming

Much of the negativity around DI containers arises from their encapsulation of building object graphs. A large, complicated object graph is resolved with a single line of code, hiding potentially dozens of disparate underlying services. Common among those new to Domain-Driven Design is the habit of creating interfaces for every small behavior, leading to overly complex designs. These design smells are easy to spot without a container, since building complex object graphs by hand is tedious. DI containers hide this pain, so it is up to the developer to recognize these design smells up front, or avoid them entirely.

X AVOID deeply nested object graphs.

Large object graphs are difficult to understand, but easy to create with DI containers. Instead of a strict top-down design, identify cross-cutting concerns and build generic abstractions around them. Procedural code is perfectly acceptable, and many design patterns and refactoring techniques exist to address complicated procedural code. The behavioral design patterns, combined with refactorings for long or complicated code, can be especially helpful. Starting with the Transaction Script pattern keeps the number of structures low until the code exhibits enough design smells to warrant refactoring.

√ CONSIDER building generic abstractions around concepts, such as IRequestHandler<T>, IValidator<T>.

When designs do become unwieldy, breaking components down into multiple services often leads to service-itis, where a system contains numerous services, each used in only one context or execution path. Instead, behavioral patterns such as Mediator, Command, Chain of Responsibility and Strategy are especially helpful in creating abstractions around concepts. Common concepts include:

  • Queries
  • Commands
  • Validators
  • Notifications
  • Model binders
  • Filters
  • Search providers
  • PDF document generators
  • REST document readers/writers

Each of these patterns begins with a common interface:

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse> {
    TResponse Handle(TRequest request);
}

public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}

public interface ISearcher {
    bool IsMatch(string query);
    IEnumerable<Person> Search(string query);
}

Registration for these components involves adding all implementations of an interface, and code using these components requests an instance based on a generic parameter, or all instances in the case of the Chain of Responsibility pattern.

One exception to this rule is for third-party components and external, volatile dependencies.

√ CONSIDER encapsulating 3rd-party libraries behind adapters or facades.

While using a 3rd-party dependency does not necessitate building an abstraction for that component, if the component is difficult or impossible to fake/mock in a test, then it is appropriate to create a facade around that component. File system, web services, email, queues and anything else that touches the file system or network are prime targets for abstraction.

The database layer is a little more subtle, as requests to the database often need to be optimized in isolation from any other request. Switching database/ORM strategies is fairly straightforward, since most ORMs use a common language already (LINQ), but they have subtle differences when it comes to optimizing calls. Large projects can switch between major ORMs with relative ease, so any abstraction would limit the use of any one ORM to the least common denominator.

X DO NOT create interfaces for every service.

Another common misconception of SOLID design is that every component deserves an interface. DI containers can resolve concrete types without an issue, so there is no technical limitation to depending directly on a concrete type. In the book Growing Object-Oriented Software, Guided by Tests, these components are referred to as Peers, and in Hexagonal Architecture terms, interfaces are reserved for Ports.

√ DO depend on concrete types when those dependencies are in the same logical layer/tier.

A side effect of depending directly on concrete types is that it becomes very difficult to over-specify tests. Interfaces are appropriate when there is truly an abstraction to a concept, but if there is no abstraction, no interface is needed.

X AVOID implementation names that are just the implemented interface name without the “I”.

StructureMap’s default conventions do match up IFoo with Foo, and this can be a convenient default behavior, but when you have implementations whose name is the same as their interface without the “I”, that is a symptom that you are arbitrarily creating an interface for every service, when just resolving the concrete service type would be sufficient instead.  In other words, the mere ability to resolve a service type by an interface is not sufficient justification for introducing an interface.

√ DO name implementation classes based on details of the implementation (AspNetUserContext : IUserContext).

An easy way to detect excessive abstraction is when class names are directly the interface name without the prefix “I”. An implementation of an interface should describe the implementation. For concept-based interfaces, class names describe the representation of the concept (ChangeNameValidator, NameSearcher etc.) Environment/context-specific implementations are named after that context (WebApiUserContext : IUserContext).
Dynamic resolution

While most component resolution occurs at the very top level of a request (controller/presenter), there are occasions when dynamic resolution of a component is necessary. For example, model binding in MVC occurs after a controller is created, making it slightly more difficult to know at controller construction time what the model type is, unless it is assumed using the action parameters. For many extension points in MVC, it is impossible to avoid service location.

X AVOID using the container for service location directly.

Ideally, component resolution occurs once in a request, but in the cases where this is not possible, use a framework’s built-in resolution capabilities. In Web API for example, dynamically resolved dependencies should be resolved from the current dependency scope:

var validationProvider = actionContext
    .Request
    .GetDependencyScope()
    .GetService(typeof(IValidationProvider)); // IValidationProvider is illustrative

Web API creates a child container per request and caches this scoped container within the request message. If the framework does not provide a scoped instance, store the current container in an appropriately scoped object, such as HttpContext.Items for web requests. Occasionally, you might need to depend on a service but need to explicitly decouple or control its lifecycle. In those cases, containers support depending directly on a Func.

√ CONSIDER depending on a Func<IService> for late-bound services.

For cases where known types need to be resolved dynamically, instead of trying to build special caching/resolution services, you can instead depend on a constructor function in the form of a Func. This separates wiring of dependencies from instantiation, allowing client code to have explicit construction without depending directly on a container.

public EmailController(Func<IEmailService> emailServiceProvider) {
    _emailServiceProvider = emailServiceProvider;
}

public ActionResult SendEmail(string to, string subject, string body) {
    using (var emailService = _emailServiceProvider()) {
        emailService.Send(to, subject, body);
    }
    return View();
}

In cases where this becomes complicated, or reflection code is needed, a factory method or delegate type explicitly captures this intent.

√ DO encapsulate container usage with factory classes when invoking a container is required.

The Patterns and Practices Common Service Locator defines a delegate type representing the creation of a service locator instance:

public delegate IServiceLocator ServiceLocatorProvider();

For code needing dynamic instantiation of a service locator, configuration code creates a dependency definition for this delegate type:

public IDependencyScope BeginScope() {
    IContainer child = this.Container.GetNestedContainer();
    var resolver = new StructureMapWebApiDependencyResolver(child);
    var provider = new ServiceLocatorProvider(() => resolver);
    child.Configure(cfg =>
        cfg.For<ServiceLocatorProvider>().Use(provider));
    return new StructureMapWebApiDependencyResolver(child);
}

This pattern is especially useful if an outer dependency has a longer configured lifecycle (static/singleton) but you need a window of shorter lifecycles. For simple instances of reflection-based component resolution, some containers include automatic facilities for creating factories.

√ CONSIDER using auto-factory capabilities of the container, if available.

Auto-factories in StructureMap are available as a separate package, and allow you to create an interface with an automatic implementation:

public interface IPluginFactory {
    IList<IPlugin> GetPlugins();
}

The AutoFactories feature will dynamically create an implementation that defers to the container for instantiating the list of plugins.


Categories: Blogs

How to Enable Estimate-Free Development

Practical Agility - Dave Rooney - Wed, 09/17/2014 - 20:06
Most of us have been there... the release or sprint planning meeting that goes on and on and on and on. There is constant discussion over what a story means and endless debate over whether it's 3, 5 or 8 points. You're eventually bludgeoned into agreement, or simply too numb to disagree. Any way you look at it, you'll never get those 2, 4 or even 6 hours back - they're gone forever! And to what
Categories: Blogs

Continuous Delivery is about removing waste from the Software Delivery Pipeline

Xebia Blog - Wed, 09/17/2014 - 16:44

On October the 22nd I will be speaking at the Continuous Delivery and DevOps Conference in Copenhagen, where I will share experiences from a very successful implementation of a new website serving about 20 million page views a month.

Components and content for this site were developed by five(!) different vendors. For this project, the customer took the initiative to work according to DevOps principles and implement a fully automated Software Delivery Process as they went along. This was a big win for the project, as development teams could now focus on delivering new software instead of fixing issues within the delivery process itself, and I was the lucky one who got to implement this.

This blog post is about visualizing the 'waste' we addressed within the project; you might find the diagrams handy when communicating Continuous Delivery principles within your own organization.

To enable yourself to work according to Continuous Delivery principles, an effective starting point is to remove waste from the Software Delivery Process. If you look at a traditional Software Delivery Process, you'll find there are probably many areas in your existing process that do not add any value for the customer at all.

These areas should be seen as pure waste: adding no value for your customer, and costing you time or money (or both) over and over again. Each time new features are developed and pushed to production, many people perform a lot of costly manual work and run into the same issues again and again. The diagram below provides an example of common areas where you might find waste in your existing Software Delivery Pipeline. Imagine this process repeating every time a development team delivers new software. In your own conversations, you might want to use a similar diagram to explain the pain points within your current Software Delivery Process.

a traditional software delivery process

Automating the Software Delivery Process within this project was all about eliminating known waste as much as possible. This resulted in setting up an Agile project structure and starting to work according to DevOps principles, enabling the team to deliver software on a frequent basis. Next to that, we automated the central build with Jenkins CI, which checks out code from a Git version management system, compiles code using Maven, stores components in Apache Archiva, kicks off static, unit and functional tests covering both the JEE and PHP codebases, and creates Deployment Units for further processing down the line. Deployment automation itself was implemented by introducing XL Deploy. By doing so, every time a developer pushed new JEE or PHP code into Git, freshly baked deployment units were instantly deployed to the target landscape, which in turn was managed by Puppet. An abstract diagram of this approach and the chosen tooling is provided below.

overview of tooling for automating the software delivery process

When paving the way for Continuous Delivery, I often like to refer to this as working on the six A's: setting up Agile (product-focused) delivery teams, Automating the build, Automating tests, Automating deployments, Automating the provisioning layer, and clean, easy-to-handle software Architectures. The A for Architecture is about making sure that the software being delivered actually supports automation of the Software Delivery Process itself and puts the customer in the position to work according to Continuous Delivery principles. This A is not visible in the diagram.

After automation of the Software Delivery Process, the customer's software development process behaved like the optimized process below, giving the team the opportunity to push out a constant, fluent flow of new features to the end user. In your own conversations, you might want to use this diagram to explain the advantages to your organization.

an optimized software delivery process

As we automated the Software Delivery Pipeline for the customer, we positioned them to go live at the press of a button. And on the go-live date, it was just that: a press of the button, and 5 minutes later the site was completely live, making this the most boring go-live event I've ever experienced. The project itself was really good fun, though! :)

Needless to say, subsequent updates are now moved into the live state in a matter of minutes, as the whole process has become very reliable. Deploying code has become a non-event. More details on how we made this project a complete success (the project setting, the chosen tooling, and the technical details of how we implemented this environment) I will happily share at the Continuous Delivery and DevOps Conference in Copenhagen. But of course you can also contact me directly. For now, I just hope to meet you there.

Michiel Sens.

Categories: Companies

Lean Kanban Central Europe 2014

AvailAgility - Karl Scotland - Wed, 09/17/2014 - 12:49

Despite my best efforts, I couldn't help but get involved in Lean Kanban Central Europe again this year, and have even taken on more of a role by helping co-chair the Kanban track. It's always a great event (one of my favourites) because the people and content are such high calibre. This year looks to be no different.

This year I’m also running a workshop (as previously announced). I hope to see you there!


Categories: Blogs

The Lego Story, or the Secret of Successful Scaling

Scrum 4 You - Wed, 09/17/2014 - 07:30

When we talk about scaling, we often think of upheavals, reorganizations and transformations. We have grand schemas and org charts in mind. A fine object lesson in how such scaling efforts can go mightily wrong is given by David Robertson in his book “Brick by Brick”. It tells the story of the Danish Lego Group.

Two moments in the company's history are particularly fascinating. The first is the early post-war period, when Lego transformed itself from a maker of wooden toys into a manufacturer of modular plastic bricks. Back then, in 1946, Lego invested more than twice its annual profit in the purchase of a single plastic injection-molding machine. We all know how that decision turned out.

A Lost Focus

Less well known is that around the turn of the millennium, Lego stood on the brink of insolvency. How did it come to that? In the 1990s, Lego watched the play habits of children and teenagers change. In the age of the Game Boy and the Xbox, stacking bricks to build worlds of your own suddenly seemed outdated. Lego reacted by trying its hand at a series of new business fields, such as computer games, lifestyle products, theme parks and learning concepts. In the process, the thing that had always made Lego strong slipped out of focus. The clear, simple design that let children rebuild their own worlds of experience gave way to abstract action and sci-fi figures. The Duplo line, popular with toddlers, was replaced by a system called Lego Explore (long since discontinued again) that took its cues from the toys of competitor Fisher-Price. Lego had grown in many directions, but this diversification almost broke the company's neck.

An internal study in 2004 found that 94% of Lego sets were unprofitable. Besides the failed diversification, this was also due to a failed system architecture. Between 1997 and 2004, the number of distinct brick elements had exploded from 6,000 to over 14,000. The color variants of the bricks rose from the original six to over 50. Any buyer or logistics manager can imagine how much extra effort it means for production, warehousing and shipping when new elements can only be used in a single set. The integrity of the brand suffered too: suddenly there was not one police minifigure but eight, with minimal differences in configuration. Product lines got their own minifigures, some modeled on the faces of their designers. The tradition had, in the most literal sense (tra-dere: to hand over), been lost. Adults no longer recognized the Lego of their childhood, and children could do little with the shiny new worlds.

Refocusing on Strengths

The way back to profitability was as simple as it was brilliant. Robertson recounts how the penny dropped for the newly appointed CEO Jørgen Vig Knudstorp, then in his mid-30s:

“For children and their parents, the benefits of a play system were obvious: combining bricks in almost any way they wanted fired kid’s creativity and imagination and delivered a singularly unique building experience. But for Knudstorp, his eureka moment came when he realized the Lego System is not just a play system, it’s also a business system. (…) Instead of following the industry norm of striving to come up with one-hit wonders, LEGO should create a coherent, expandable universe of toys. A Lego system of toys (…) would build familiarity and a sense of community around Lego.”

This insight led, for example, to a rule that at least 70% of all parts in future sets had to come from the existing inventory. It also meant that the most experienced designers and developers regained the authority to shape the development process from the start, and that in-house software development, lacking the necessary experience, was abandoned in favor of partnerships.

The Lego story is a lesson in scaling. Shortly before its collapse, Lego recognized that real growth can happen only in fields where the company is strong. In the 1990s, Lego chased trends that had little to do with its own identity, and its moves were correspondingly helpless. Eventually it recognized the strengths it had always had and began to focus on them consistently. There, in the closed ecosystem of its brick worlds, growth succeeded. The results now speak for themselves financially as well: last year the Lego Group reported a return on sales of 34% on revenue of 3.1 billion euros.


Vitra-Haus in Weil am Rhein. The architects Herzog & de Meuron describe the interlocking of twelve houses across five floors as “domestic scale”. The archetypal house (Ur-Haus) serves as the basic form. Source:

Related posts:

  1. 1 Euro wer zu spät kommt | Daily Scrum | Bärenherz
  2. Die Kraft der Begeisterung
  3. Mehr wissen! Moderationstraining

Categories: Blogs

Role != Job

Agile Tools - Wed, 09/17/2014 - 07:16


When I talk to folks about Scrum, one of the points I make sure to cover is the holy trinity, the three basic roles in Scrum: Product Owner, Scrum Master, and Team. I’m starting to think I must be doing it wrong because when I talk about roles, somehow that role manifests itself as a job. Let me back up a step and see if I can explain what I mean. To me, a role is a transitory responsibility that anyone can take on. It’s akin to what actors do. Actors take different roles all the time. But when an actor takes a role, say as a teacher, they act in every way like a teacher, without actually being a teacher. They do it and then leave it behind and move on to the next role. They may perform the role so well that you can’t tell the difference between the actor and the teacher, but to the actor teaching is still just a role.

Now there are people for whom teaching is a job. A job is very different from a role. You are hired for a job. A job is something that you identify with and are assigned to. A job, at least for some, becomes something that they identify strongly with (i.e. “I am a teacher.” or “Teaching is what I do.”). A job is a very different thing than a role. A job comes with identity, some feeling of authenticity and permanence. Typically we hire people to perform jobs.

According to this definition, jobs and roles are very different beasts. However, people have a hard time keeping this distinction in mind. We tend to take roles and turn them into jobs. That’s unfortunate, because a role is meant to be something transitory, something that is filled temporarily. It is meant to be worn like a costume and then passed on to the next wearer. When you turn a role into a job, you risk perverting its purpose. When you turn a role into a job, you make it very difficult for others to share it – it’s hard to swap back and forth. When you make a role into a job, people get surprisingly defensive about it. It becomes something that they identify with very closely. If you try and tell them that anybody can do it, they tend to get all fussy and upset. They start to try and protect their job with clever artifacts like certifications – they’ll do anything to make themselves unique enough to keep that job. It’s an identity trap.

Here is how I see this problem manifest itself with Scrum teams: You sell them on scrum and teach them how it works. Every team has a Scrum Master and a Product Owner. So what do they do? They run out and hire themselves some people to fill the jobs of Scrum Masters and Product Owners. They get their teams sprinting and start delivering quickly – hey, now they’re agile! Only they’re not really. You see, as you face the challenge and complexity of modern day business, the team often needs to change. That person you hired as the Scrum Master? You may be best served to swap that role with somebody else. Maybe a developer or QA on the team. The ability to move that role around to different actors could be very useful. But you can’t do that now because it’s no longer a role, it’s somebody’s job. And you can’t mess with their job without seriously upsetting somebody. The end result is that your organization effectively can’t change. You limit your agility.

The bottom line is that I believe that the roles in Scrum were never intended to be jobs. To make those roles into jobs risks limiting your agility.

Filed under: Agile, Coaching, Scrum, Teams Tagged: Agile, agility, jobs, roles, Scrum
Categories: Blogs

Reliable Scrum

TV Agile - Tue, 09/16/2014 - 22:04
This presentation shows an approach based on Critical Chain Project Management (CCPM) for making Agile approaches reliable (Reliable Scrum) while, of course, protecting the Agile part. The next step is to make them even faster (and more Agile) with ideas from Lean and the Theory of Constraints (TOC). In the third step you […]
Categories: Blogs

XProgNew – today’s problem - Tue, 09/16/2014 - 20:43

With Bill Tozier, I’m working on re-basing from WordPress to Ruby/Sinatra etc. We tweeted a bit about an issue we ran into today. It goes like this:

The image we had of the new site is that each article would be stored in a folder named to match the WordPress “slug” for the article, such as “…/articles/xprognew-todays-problem/”. We’re planning to keep the article, its metadata, and any other assets, such as images, right in that folder.

Writing will be in Markdown. So I’d expect to write an article that looks like this:

Some Title

Here's a paragraph of text. Assume it runs on and on ...

![picture](picture7.png)

Markdown assumes that the construct above specifies a picture in the same folder as the article. It will generate, roughly, <img src="picture7.png" />.

This would clearly be the easiest way to specify the picture, and it seems to us to make sense to keep the pictures and metadata with the article. And the folder name matching the slug makes converting the existing site much easier. (Existing images will not have that kind of link, but are all in a WordPress images folder together somewhere. We’ve not looked yet at what to do about that. We’re trying to make new articles easier and if the conversion is tricky so be it.)

We are using Sinatra and kramdown. The Markdown above does generate the image statement shown. However, somewhere inside or above Sinatra, there is a built-in assumption that assets like images will be in a folder named “public”, and the article is rendered as if it, itself, were in public, so that the image call looks into public.

We tried navigating out of public with things like “../articles/myslug/” but this just doesn’t work. I’m told by Z Spencer that Rackup or whatever its name is, keeps you in public as a security measure. Good for it.

The result, though, is that we see few options:

  • Put all images in public. This will work, and is not unlike WordPress, but we don’t like it.
  • Put images in a subfolder of public, and type the folder name into the image call. This ties the article contents to the name of its slug and we don’t like that.
  • Put images in a subfolder of public, and do an on-the-fly substitution of the file name as we render the file. This requires a search and replace on the article and we don’t like that either.

There are probably other options but we see none that we like. And we see, today, no way to override where public points, except somewhere else under public.

Anyway, that’s the problem I’ve been tweeting about.
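A possible fourth option, sketched below, would be to bypass the static handler and serve article assets through an explicit route. This is only a sketch, not our actual code: `ARTICLES_ROOT`, `article_asset_path`, and the route shape are illustrative names, and the Sinatra wiring is shown in a comment since it needs a running app.

```ruby
# Sketch of a possible fourth option: serve article assets through an
# explicit route instead of Sinatra's static file handler. The names
# here (ARTICLES_ROOT, article_asset_path) are illustrative.

ARTICLES_ROOT = File.expand_path('articles', __dir__)

# Build the on-disk path for an asset inside an article's folder,
# refusing anything that looks like path traversal.
def article_asset_path(root, slug, asset)
  return nil if [slug, asset].any? { |part| part.include?('..') || part.include?('/') }
  File.join(root, slug, asset)
end

# In the Sinatra app the wiring would look roughly like:
#
#   get '/articles/:slug/:asset' do |slug, asset|
#     path = article_asset_path(ARTICLES_ROOT, slug, asset)
#     halt 404 unless path && File.file?(path)
#     send_file path
#   end
```

Since the generated `src` is relative, a page served from its slug URL such as /articles/myslug/ should request /articles/myslug/picture7.png and hit a route like this, which would let the pictures stay next to the article.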

Categories: Blogs

R: ggplot – Plotting multiple variables on a line chart

Mark Needham - Tue, 09/16/2014 - 18:59

In my continued playing around with meetup data I wanted to plot the number of members who join the Neo4j group over time.

I started off with the variable ‘byWeek’ which shows how many members joined the group each week:

> head(byWeek)
Source: local data frame [6 x 2]
        week n
1 2011-06-02 8
2 2011-06-09 4
3 2011-06-30 2
4 2011-07-14 1
5 2011-07-21 1
6 2011-08-18 1

I wanted to plot the actual count alongside a rolling average for which I created the following data frame:

joinsByWeek = data.frame(actual = byWeek$n, 
                         week = byWeek$week,
                         rolling = rollmean(byWeek$n, 4, fill = NA, align=c("right")))
> head(joinsByWeek, 10)
   actual       week rolling
1       8 2011-06-02      NA
2       4 2011-06-09      NA
3       2 2011-06-30      NA
4       1 2011-07-14    3.75
5       1 2011-07-21    2.00
6       1 2011-08-18    1.25
7       1 2011-10-13    1.00
8       2 2011-11-24    1.25
9       1 2012-01-05    1.25
10      3 2012-01-12    1.75
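For readers curious about the mechanics behind the `rollmean` call, a right-aligned rolling mean is only a few lines of code. The helper below is an illustration in plain Ruby (a made-up `rolling_mean`, mirroring `rollmean(x, 4, fill = NA, align = "right")`), not part of the original analysis:

```ruby
# Illustrative helper: a right-aligned rolling mean. Positions without
# a full window get nil, matching the NAs in the data frame above.
def rolling_mean(values, window)
  values.each_index.map do |i|
    next nil if i < window - 1
    values[(i - window + 1)..i].sum / window.to_f
  end
end

rolling_mean([8, 4, 2, 1, 1], 4)
# => [nil, nil, nil, 3.75, 2.0]
```

Feeding in the first five weekly counts reproduces the first two rolling values shown above (3.75 and 2.00).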

The next step was to work out how to plot both ‘rolling’ and ‘actual’ on the same line chart. The easiest way is to make two calls to ‘geom_line’, like so:

ggplot(joinsByWeek, aes(x = week)) + 
  geom_line(aes(y = rolling), colour="blue") + 
  geom_line(aes(y = actual), colour = "grey") + 
  ylab(label="Number of new members")

Alternatively we can make use of the ‘melt’ function from the reshape library…

meltedJoinsByWeek = melt(joinsByWeek, id = 'week')
> head(meltedJoinsByWeek, 20)
   week variable value
1     1   actual     8
2     2   actual     4
3     3   actual     2
4     4   actual     1
5     5   actual     1
6     6   actual     1
7     7   actual     1
8     8   actual     2
9     9   actual     1
10   10   actual     3
11   11   actual     1
12   12   actual     2
13   13   actual     4
14   14   actual     2
15   15   actual     3
16   16   actual     5
17   17   actual     1
18   18   actual     2
19   19   actual     1
20   20   actual     2

…which then means we can plot the chart with a single call to geom_line:

ggplot(meltedJoinsByWeek, aes(x = week, y = value, colour = variable)) + 
  geom_line() + 
  ylab(label="Number of new members") + 
  xlab("Week Number") + 
  scale_colour_manual(values=c("grey", "blue"))


Categories: Blogs

What is Card Size and Why Does it Matter?

Many teams find it useful to identify and track the size — or amount of work — associated with each work item. LeanKit lets you easily assign a size value to each card,  manage your WIP based on card size, and run reports using size as a variable. In LeanKit, size is an optional field that you […]

The post What is Card Size and Why Does it Matter? appeared first on Blog | LeanKit.

Categories: Companies

How To Think Like a Microsoft Executive

J.D. Meier's Blog - Tue, 09/16/2014 - 18:06

One of the things I do, as a patterns and practices kind of guy, is research and share success patterns. 

One of my more interesting bodies of work is my set of patterns and practices for successful executive thinking.

A while back, I interviewed several Microsoft executives to get their take on how to think like an effective executive.

While the styles vary, what I enjoyed is the different mindset that each executive uses as they approach the challenge of how to change the world in a meaningful way.

5 Key Questions to Share Proven Practices for Executive Thinking

My approach was pretty simple.   I tried to think of a simple way to capture and distill the essence. I originally went the path of identifying key thinking scenarios (changing perspective, creating ideas, evaluating ideas, making decisions, making meaning, prioritizing ideas, and solving problems) ... and the path of identifying key thinking techniques (blue ocean/strategic profile, PMI, Six Thinking Hats, PQ/PA, BusinessThink, Five Whys, ... etc.) -- but I think just a simple set of 5 key questions was more effective.

These are the five questions I ended up using:

  1. What frame do you mostly use to evaluate ideas? (for example, one frame is: who's the customer? what's the problem? what's the competition doing? what does success look like?)
  2. How do you think differently, than other people might, that helps you get a better perspective on the problem?
  3. How do you think differently, than other people might, that helps you make a better decision?
  4. What are the top 3 questions you ask yourself the most each day that make the most difference?
  5. How do you get in your best state of mind or frame of mind for your best thinking?

The insights and lessons learned could fill books, but I thought I would share three of the responses that I tend to use and draw from on a regular basis …

Microsoft Executive #1

1) The dominant framework I like to use for decisions is: how can we best help the customer? Prioritizing the customer is nearly always the right way to make good decisions for the long term. While one has to have awareness of the competition and the like, it is usually a mistake to “follow taillights” excessively. The best lens through which to view the competition is, “how are they helping their customers, and is there anything we can learn from them about how to help our own customers?”

2) I don’t think that there is anything magical about executive thinking. The one thing we hopefully have is a greater breadth and depth of experience on key decisions. We use this experience to discern patterns, and those patterns often help us make good decisions on relatively little data.

3) Same answer as #2.

4) How can we help our customers more? Are we being realistic in our assessments of ourselves, our offerings and the needs of our customers? How can we best execute on delivering customer value?

5) It is key to keep some discretionary time for connecting with customers, studying the competition and the marketplace and “white space thinking.” It is too easy to get caught up on being reactionary to lots of short-term details and therefore lose the time to think about the long term.

Microsoft Executive #2

There are three things that I think about as it relates to leading organizations: Vision, People and Results. Some of the principles in each of these components will apply to any organization, whether the organization's goal is to make profit, achieve strategic objectives, or make non-profit social impact.


In setting the vision and top level objectives, it is very important to pick the right priorities. I like to focus on the big rocks instead of small rocks at the vision-setting stage. In today's world of information overload, it is really easy to get bombarded with too many things needing attention. This can dilute your focus across too many objectives. The negative effect of not having a clear concentrated focus multiplies rapidly across many people when you are running a large organization. So, you need to first ask yourself what are the few ultimate results that are the objectives of your organization and then stay disciplined to focus on those objectives. The ultimate goal might be a single objective or a few, but should not be a laundry list. It is alright to have multiple metrics that are aligned to drive each objective, but the overall objectives themselves should be crisp and focused.


The next step in running an organization is to make sure you have the right people in the right jobs. This starts with first identifying the needs of the business to achieve the vision set out above. Then, I try to figure out what types of roles are needed to meet those needs. What will the organization structure look like? What kind of competencies, that is, attributes, skills, and behaviors, are needed in those roles to meet expected results? If there is a mismatch between the role and the person, it can set up both the employee and the business for failure. So, this is a crucial step in making sure you've a well running organization.

Once you have the right people in the right jobs, I try to make sure that the work environment encourages people to do their best. Selfless leadership, where the leaders have a sense of humility and are committed to the success of the business over their own self, is essential. An inclusive environment where everyone is encouraged to contribute is also a must. People's experience with the organization is for the most part shaped by their interaction with their immediate manager. Therefore, it is very important that a lot of care goes into selecting, encouraging and rewarding people managers who can create a positive environment for their employees.


Finally, the organization needs to produce results towards achieving the vision and the objectives you set out. Do not confuse results with actions. You need to make sure you reward people based on performance towards producing results instead of actions. When setting commitments for people, you need to be thoughtful about what metrics you choose so that you incent the right behavior. This again helps build an environment that encourages people to do their best. Producing results also requires that you've a compelling strategy for the organization. Thus, you need to stay on top of where the market and customers are. This will help you focus your organization's efforts on anticipating customer needs, and proactively taking steps to delight customers. This is necessary to ensure that organization's resources are prioritized towards those efforts that will produce the highest return on investment.

Microsoft Executive #3
  1. Different situations call for different pivots.  That said, I most often start with the customer, as technology is just a tool; ultimately, people are trying to solve problems.  I should note, however, that “customer” does not always mean the person who licenses or uses our products and/or services.  While they may be the focus, my true “customer” is sometimes the business itself (and its management), a business group, or a government (addressing a policy issue).  Often, the problem presented has to be solved in a multi-disciplinary way (e.g., a mixture of policy changes, education, technological innovation, and business process refinements).  Think, for example, about protecting children on-line.  While technology may help, any comprehensive solution may also involve government laws, parental and child education, a change in website business practices, etc.
  2. As noted above, the key is thinking in a multi-disciplinary way. People gravitate to what they know; thus the old adage that “if you have a hammer, everything you see is a nail.” Think more broadly about an issue, and a more interesting solution to the customer’s problem may present itself. (Scenario focused engineering works this way too.)
  3. It is partially about thinking differently (as discussed above), but also about seeking the right counsel.  There is an interesting truth about hard Presidential decisions.  The more sensitive an issue, the fewer the number of people consulted (because of the sensitivity) and the less informed the decision.  Obtaining good counsel – while avoiding the pitfall of paralysis (either because you have yet to speak to everyone on the planet or because there was not universal consensus on what to do next) is the key.
  4. (1) What is the right thing to do? (This may be harder than it looks because the different customers described above may have different interests.  For example, a costly solution may be good for customers but bad for shareholders.  A regulatory solution might be convenient for governments but stifle technological innovation.)  (2) What unintended consequences might occur? (The best laid plans….).  (3) Will the solution be achievable?
  5. I need quiet time; time to think deeply.

The big things that really stand out for me are using the customer as the North Star, balancing with multi-disciplinary perspectives, evaluating multiple, cascading ramifications, and leading with vision.

You Might Also Like

100 Articles to Sharpen Your Mind

Rituals for Results

Thinking About Career Paths

Categories: Blogs

Cuttable Scope

J.D. Meier's Blog - Tue, 09/16/2014 - 17:22

Early on in my Program Management career, I ran into challenges around cutting scope.

The schedule said the project was done by next week, but scope said the project would be done a few months from now.

On the Microsoft patterns & practices team, we optimized around “fix time, flex scope.”   This ensured we were on time, on budget.  This helped constrain risk.  Plus, as soon as you start chasing scope, you become a victim of scope creep, and create a runaway train.  It’s better to get smart people shipping on a cadence, and focus on creating incremental value.  If the trains leave the station on time, then if you miss a train, you know you can count on the next train.  Plus, this builds a reputation for shipping and execution excellence.

And so I would have to cut scope, and feel the pains of impact ripple across multiple dependencies.

Without a simple chunking mechanism, it was a game of trying to cut features and trying to figure out which requirements could be completed and still be useful within a given time frame.

This is where User Stories and System Stories helped.  

Stories created a simple way to chunk up value.   Stories help us put requirements into a context and a testable outcome, share what good looks like, and estimate our work.  So paring stories down is fine, and a good thing, as long as we can still achieve those basic goals.

Stories help us create Cuttable Scope.  

They make it easier to deliver value in incremental chunks.

A healthy project start includes a baseline set of stories that help define a Minimum Credible Release, and additional stories that would add additional, incremental value.

It helps create a lot of confidence in your project when there is a clear vision for what your solution will do, along with a healthy path of execution that includes a baseline release, along with a healthy pipeline of additional value, chunked up in the form of user stories that your stakeholders and user community can relate to.

You Might Also Like

Continuous Value Delivery the Agile Way

Experience-Driven Development

Kanban: The Secret of High-Performing Teams at Microsoft

Minimum Credible Release (MCR) and Minimum Viable Product (MVP)

Portfolios, Programs, and Projects

Categories: Blogs

Agile Jeopardy–Not as Easy as You’d Think

DFW Scrum User Group - Tue, 09/16/2014 - 16:39
Last month we played Agile Jeopardy, and the questions and answers related directly to the Agile Manifesto. Four values and twelve principles—how bad could it be? Turns out most people don’t have them memorized, and our teams had a rough … Continue reading →
Categories: Communities

The ScrumMaster is Responsible for What Artifacts?

Learn more about our Scrum and Agile training sessions on

Organizations like to have clear role definitions, clear processes outlined and clear documentation templates.  It’s just in the nature of bureaucracy to want to know every detail, to capture every dotted “i” and crossed “t”, and to use all that information to control, monitor, predict and protect.  ScrumMasters should be anti-bureaucracy.  Not anti-process, not anti-documentation, but constantly on the lookout for process and documentation creep.

To help aspiring ScrumMasters, particularly those who come from a formal Project Management background, I have here a short list of exactly which artifacts the ScrumMaster is responsible for.

- None – the ScrumMaster is a facilitator and change agent and is not directly responsible for any of the Scrum artifacts (e.g. Product Backlog) or traditional artifacts (e.g. Gantt Chart).

- Obstacles or impediments “backlog” - a list of all the problems, obstacles, impediments and challenges that the Scrum Team is facing.  These obstacles can be identified by Team Members at any time, but particularly during the Daily Scrum or the Retrospective.
- Definition of “Done” gap report, every Sprint – a comparison of how “done” the Team’s work is during Sprint Review vs. the corporate standards required to actually ship an increment of the Team’s work (e.g. unit testing done every Sprint, but not system testing).
- Sharable retrospective outcomes report, every Sprint – an optional report from the Scrum Team to outside stakeholders including other Scrum Teams.  Current best practice is that the retrospective is a private meeting for the members of the Scrum Team and that in order to create a safe environment, the Scrum Team only shares items from the retrospective if they are unanimously agreed.  Outsiders are not welcome to the retrospective.
- Sprint burndown chart every Sprint – a chart that tracks the amount of work remaining at the end of every day of the Sprint, usually measured in number of tasks.  This chart simply helps a team to see if their progress so far during a Sprint is reasonable for them to complete their work.
- State of Scrum report, every Sprint – possibly using a checklist or tool such as the “Scrum Team Assessment” (shameless plug alert!).

- minutes of Scrum meetings
- process compliance audit reports
- project administrative documents (e.g. status reports, time sheets)

- project charter (often recommended for the Product Owner, however)
- project plans (this is done by the Product Owner and the Scrum Team with the Product Backlog)
- any sort of up-front technical or design documents

The ScrumMaster is not a project manager, not a technical lead, not a functional manager, and not even a team coach.  There are aspects of all of those roles in the ScrumMaster role, but it is best to think of the role as completely new and focused on two things:
- improving how the team uses Scrum
- helping the team to remove obstacles and impediments to getting their work done.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

Telling Executive Stories

Leading Agile - Mike Cottmeyer - Tue, 09/16/2014 - 15:31

Delivery teams manage and deliver value supported by a simple tool: user stories. These teams tell stories about who, what, why, and acceptability using the standard form, “As a <persona>, I want <capability> so that <delivered value> occurs,” and the behavior acceptance form, “Given <context>, when <action occurs>, then <consequence>.” These stories form the foundation of repeatable delivery and management of value.

While these forms support delivery team conversations well, they are inadequate to support the richer conversation needed by executives to manage investment and value. What forms the basis of these stories? How do we tell stories about delivering product value to our customers and delivering investment value to our organization?

Developing contextual story-telling focuses on the kinds of conversations the product managers, product owners, business owners, and executives have when they meet to operate and run the business. We listen to these stories and then use existing canvas templates to develop contextually relevant canvas designs. These canvases become the fabric used when beginning new stories, and continuing old stories regarding business strategy and tactics.

To develop the canvases, we need to listen to these conversations and stories and develop a sense for topics and content of the strategic and tactical conversations. These questions form the thinking needed to create a first draft tool that can be used to bookmark a conversation.

  • What is the focus of each conversation?
  • What are the conversational topics?
  • What is the airtime of each topic?
  • What is the passion level of each topic?
  • In what order are the topics discussed?

These conversations may cover the following topics:

Product Focus Areas

  • Vision
  • Problem Space
  • Solution Space
  • Metrics
  • Costs
  • Alignment
  • Value
Market Focus Areas

  • Revenue
  • Customers
  • Delivery Channels
  • Strengths
  • Vision
  • Value

We also discover that there are quite a few topics of conversations that don’t quite fit into the strategic bucket and sound more like high-level tactics. This turns out to be the work of the executive and product teams. Those topics of conversation may cover the following:

  • Naming
  • Goal
  • Metrics
  • Leader or Owner
  • Customers
  • Stakeholders
  • Overview
  • Big Picture
  • Alignment
  • Solution Details

The key is identifying the key conversations in meetings and formulating a canvas around those topics. A common mistake is to use an existing template and force the conversations to conform to that template. Although tempting, this mistake leads to disengagement and abandonment of the canvas and tools. The tools are there to support the way the team works, not to force conformity to industry luminary ideals. These canvas designs will evolve as the organization improves its prowess at portfolio management.

The following is an example of the conversations important to an executive strategy canvas we recently developed:

  • Vision: Why pursue the strategy?
  • Customers: Who wanted it achieved?
  • Problem Space: What problems were they facing?
  • Solution Space: What solutions would work?
  • Value to the Customer: What epic stories would the customers tell?
  • Metrics:  What dials would move in the near future?
  • Value to the Organization: How does the organization benefit?

These topics of conversation are arranged canvas style so that new conversations can take place to create a new strategy, or so that existing conversations can be continued to check in on an existing one. You may notice that, of the range of topics available, this group focused on the customer. This is made clear by the absence of cost and revenue as a significant part of their strategic conversation.

The executives also had conversations about investments they would make in the strategy. The topics of that conversation included the following areas:

  • Goal: What is the desired outcome?
  • Metrics: What are the measures of success?
  • Customers: Who wants this?
  • Big Picture: What is the big picture?
  • Solution Details: What are possible solutions?
  • Alignment: How does this align with the strategy canvas?

We also listened for how the executive team intended to use the tools to support their work. They decided to use the canvases in the following ways:

Strategic Canvas

  • 90 Day True North
  • Investment Decision Filter
  • Organizational Strategic Alignment
  • Organizational Transparency

Investment Canvas

  • Tactical deliverable investment designed to experiment with some part of the strategy
  • Flow in a work system
  • Regular discussions about discovery, validation, delivery, and evaluation

We have created a glimpse of a strategic and work alignment system focused on portfolio management. The system and artifacts were developed based on the context of the organization and the thought leadership in the industry. The key point to take away is that context matters when developing the artifacts executives use to manage their strategic and tactical portfolio work.

The post Telling Executive Stories appeared first on LeadingAgile.

Categories: Blogs

Letting Go of Agile (Culture)

Agilitrix - Michael Sahota - Tue, 09/16/2014 - 15:09

“If you want something very, very badly, let it go free.  If it comes back to you, it’s yours forever.  If it doesn’t, it was never yours to begin with.” - Harry Kronman

I have discovered the truth of this with Agile. The one time in my whole life I truly surrendered my attachment to Agile, it resulted in a beautiful transformation starting. But most of the time I was too attached to Agile to let it go.

This post is about how we may accidentally harm organizations with Agile and how we can let go so that we may succeed.

Accidentally Harming Organizations

Here is the basic thinking:

  1. Agile is a good thing.
  2. We can help companies if they use Agile.
  3. Let’s do it!
Trap #1: Accidentally introduce cultural conflict

Agile for me is basic common sense – this is how to get stuff done. BUT Agile does not work in most organizations due to culture. Sure there are some small pockets where Agile just works but this seems to be relatively rare – especially now that Agile has crossed the chasm.

Agile is a different culture from most companies, so the first trap is to accidentally introduce organizational conflict. That’s why I wrote “An Agile Adoption and Transformation Survival Guide: Working with Organizational Culture” – to help people notice this trap and avoid it.

My suggestion was to look at two options:

  1. Adopt elements of Agile that fit with the culture.
  2. Transform the organizational culture.

For many, option 1 feels like giving up on Agile, since a key part of it is missing, so many Agile folks don’t like that option.

Increasingly, Agile experts go for option 2 instead: transforming the organizational culture. I sure did. I set out to learn how to change organizational culture. And I figured it out. But there was a problem. A big one.

Trap #2: Attempt to Transform to Agile Culture

The core of the problem is that Agile is not an end in itself. It is a means to an end. Some common goals (ends) are a quality product, time to market, or engaged staff. The problem is not that Agile doesn’t help with these goals (it certainly does); the problem is that people confuse Agile with the goal and often act in ways that undermine the real goal. We see Agile being used as a Whip or a Shield. That is why it’s a good idea to Stop Agile Initiatives. A better alternative to an Agile initiative is to have an initiative around the real goals. One way to get at the real goals is to run a workshop to clarify why people want Agile.

It is a good thing to change culture in service to what organizations really want for themselves. A specific culture is not a goal in itself, but a means to accomplishing something. We may seek a culture of engagement and innovation not for itself, but because we want our organization to thrive in a competitive landscape.

There are many, many beautiful, productive organizational cultures all over the world that have nothing to do with Agile. The implication is that there are many ways to get to a place where people love what they do. If we really want to help people, then the best move is to work with them to evolve a wonderful culture that is right for them. And for sure it will not be exactly “Agile Culture” (especially since that is not a precise notion). If it is a progressive culture, it will likely be Agile-compatible, and using Agile to get benefits will be very natural. It’s a win-win.

Agile Culture should never be a goal. If it is, we will likely just cause harm.

Let Go of the Outcome to Find Success

Here is my secret to success: Let go of the outcome.

I wrote a couple of years ago about how leaders have a choice between the red pill (deeper reality) and the blue pill (surface reality). I stated it like I gave people a choice. But I didn’t. The only choice I wanted was the red pill. I wanted so much to help the people in organizations that I pushed for the red pill. The truth is I cared so much about the outcome, which I assumed was best, that I didn’t really give an open choice. In subtle and more obvious ways I was attempting to coerce leaders into taking the red pill. Oops! Coercion is not any part of Agile, but here I was wanting my outcome for others. And it is not just me. I have talked to dozens of professional coaches, and this is pandemic in the Agile community.

The solution is obvious. If we really want to stay true to Agile values, we can’t coerce. We have to let the people (especially management teams) make their own decisions and their own mistakes. We have to help them find and walk the path that they choose. This means letting go of the outcome. This means letting go of Agile.

This business of learning to let go is not new. In fact, letting go of attachment is a central message of Buddhism.

To close, the one time I fully let go of Agile, it came back in such a beautiful, sustainable, and lasting way. Time to rinse and repeat.

“If you want Agile very, very badly, let it go free.  If it comes back to you, it’s there forever.  If it doesn’t, it was never meant to be.”

(Stay tuned for a follow-up post on Agile as a means of creating freedom by Olaf Lewitz.)

My Apology

I helped a lot of people see Agile as a culture system and learn how to stop causing accidental conflict.

Unfortunately, I also energized a lot of people to seek culture change with the goal of growing Agile. As clarified in this blog post, this was a mistake. I am sorry.

What’s the alternative? For those who want real change, let’s help them meet their organizational goals with culture transformation and let Agile come willingly.

The post Letting Go of Agile (Culture) appeared first on Catalyst - Agile & Culture.

Related posts:

  1. Agile is NOT the Goal (Workshop) Here is how to run a one hour workshop turn...
  2. Guy Laurence – Culture Change Through Renovation Guy Lawrence – former CEO at Vodafone – tells of...
  3. Stop Agile Initiatives! I am sick to death of Agile Initiatives because they...


Categories: Blogs

One week sprint, 45 minute Retrospective plan

Growing Agile - Tue, 09/16/2014 - 14:19

We are currently helping out at a client, and I found myself in the ScrumMaster role :) It’s been a while! They run one week sprints, and their teams are 1 or 2 people and change every month or so. This seems to work for them for the most part. I needed to run a retrospective for two teams at the same time: one team of 2 people and one team of one. As I haven’t been able to clone myself as yet, this needed to happen together, but still allow each team to reach its own improvement point.

The Plan:

(Image: the one week sprint, 45 minute retrospective plan)



How did this work in reality?

The person on the team of one was off sick, so the retro ended up being just for one team of two – this made it a bit easier to facilitate and interact with the team.

The feedback below came from the Plus/Delta exercise. It shows how important it is as a facilitator to get feedback from your team.

This team usually does the same retro every week (same activities), so the different activities felt a bit weird to them. I’m a bit surprised I fell into this trap; I do know better :) If your team is used to doing something, only change one activity at a time; changing everything makes them feel a bit unsettled and uncomfortable.

They enjoyed the first exercise (Check In) but felt it didn’t lead anywhere – so perhaps a Diverge section that flows from it will be better.

They enjoyed the questions, as they made them really think. They did miss having a timeline activity to remind them of what happened over the week of their sprint.

They liked getting to one solid action that they know they will do. They are used to having a list of goals they want to achieve during the sprint.

Thank you Concetta and Wilhelm!





Categories: Companies

Knowledge Sharing

SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP, and Kanban.