
Killing 7 Impediments in One Blow

Agile Tools - Fri, 09/19/2014 - 08:13

Have you heard the story of the Brave Little Tailor? Here’s a refresher:

So one day this little guy kills 7 flies with one mighty blow. He crafts for himself a belt with “7 in One Blow” sewn into it. He then proceeds, through various feats of cleverness, to intimidate or subdue giants, soldiers, kings and princesses, each one, in their own ignorance, misinterpreting what “7 in One Blow” actually refers to. It’s a classic for a number of reasons:

  1. It’s a story about miscommunication: not one single adversary has the wit to ask just what he means by killing “7 in one blow.”
  2. It’s also a story about using one’s cleverness to achieve great things. You have to love the ingenuity of the little guy as he makes his way adroitly past each obstacle.
  3. It’s a story about blowing things way out of proportion. Each of the tailor’s adversaries manages to magnify the capabilities of the tailor to extraordinary, even supernatural levels.

I’m thinking I might have to get a belt like that and wear it around the office. A nice pair of khakis, a button-down shirt, and a big belt with the words “7 in One Blow”. Given how prone we all tend to be to each of the foibles above, I’m sure it would be a riot.
A QA guy might see my belt and say, “Wow! He killed 7 bugs in one blow!”
Maybe a project manager might see it and think, “This guy is so good he finished 7 projects all at once!” Or maybe the HR rep says, “Did he really fire 7 people in one day?” Or the Scrum Master who thinks, “That’s a lot of impediments to clear out at once!”
The point is that we make up these stories all the time. We have stories in our heads about our teammates (“Did you hear about Joe?”), our managers, and their managers. Sometimes it seems as though we all have these distorted visions of each other. And perhaps we do. We need to get better at questioning those stories. We need to cultivate more of a sense of curiosity about the incomplete knowledge that we have of each other. That belt would be my reminder. I might have to buy one for each member of my team.
Of course, the other thing the belt can remind us of is to use our own innate cleverness to get what we need. When we are wrestling with corporate challenges, we all too often try to brute-force our problems and obstacles. We need to be a bit more like the Little Tailor and manipulate the world around us with some cleverness. We all have it to one degree or another, and Lord knows we need all the cleverness we can get. Good work is full of challenges, and you don’t want to take them all head on or you will end up like an NFL linebacker – brain damaged. Instead, we need to approach some things with subtlety. There is just as much value in not being in the path of a problem as there is in tackling things head on. Like the Tailor, we need to recruit others to achieve our objectives.
Finally, we really must stop blowing things out of proportion. Nobody cares about our methodology. You want to know what my favorite kind of pairing is? Lunch! We need to lighten up a bit. Working your way through the dark corporate forest, you can either play with whatever it brings and gracefully dodge the risks, or… you can get stepped on.

Filed under: Agile, Coaching, impediment, Process, Teams Tagged: cleverness, fool, Process, Teams, wit
Categories: Blogs

The Agile Reader – Weekend Edition: 09/19/2014 - Kane Mar - Fri, 09/19/2014 - 06:16

You can get the Weekend Edition delivered directly to you via email by signing up

The Weekend Edition is a list of some interesting links found on the web to catch up with over the weekend. It is generated automatically, so I can’t vouch for any particular link but I’ve found the results are generally interesting and useful.

  • #RT #AUTHORRT #scrum that works ** ** #agile #project #productowner
  • Even better – communicating while drawing! #Scrum #Agile
  • #Agile – How does Planning Poker work in Agile? –
  • Scrum Expert: Increasing Velocity in a Regulated Environment #agile #scrum
  • #Meetings! Huh! What are they good for? #scrum
  • @Dell is hiring #SDET, #Austin, TX #Iwork4Dell #ecommerce #agile #Scrum
    #dotnet #Automation #testing @CareersAtDell
  • Medical Mutual #IT #Job: Agile Scrum Master – Project Manager 14-266 (#Strongsville, OH) #Jobs
  • Has #Scrum Killed the Business Analyst? #scrumrocks #agile #yrustilldoingwaterfall
  • Scrum Master at Agile (Atlanta, GA): : Our client is focused on building a platform and related… #ATL
  • How to Plan an Agile Sprint Meeting? –
  • Agile Scrum isn’t a silver bullet solution for s/w development, but it can be a big help. #AppsTrans #HP
  • Now hiring for: Scrum Master in Gainesville, FL #job #agile #mindtree
  • Scaling Agile Your Way: SAFe vs. MAXOS (Part 2 of 4) #agile #scrum
  • Agile Scrum #Master needed in #SanFrancisco, apply now at #Accenture! #job
  • RT @SpitFire_: How to Attract #Agile Development Talent @appvance #Tech #Scrum @lgoncalves1979 @kevinsurace #TechJo…
  • FREE SCRUM EBOOK based on the AMAZON BESTSELLER: #scrum #agile inspired by #kschwaber
  • You get more value from periodic “lessons learned” events rather than a big one at the end #agile #scrum #PMI
  • A Quick, Effective Swarming Exercise for Scrum Development Teams #agile #projectmanagement
  • RT @yochum: Agile Tools: The Grumpy Scrum Master #agile #scrum
  • +1 The #agile mindset: It’s time to change our thinking, not #Scrum #agile #scrum (via @sdtimes)
  • Agile by McKnight, Scrum by Day is out! Stories via @dinwal @StratacticalCo
  • SCRUM EBOOK #Scrum #Agile inspired by #Ken Schwaber
  • RT @MRGottschalk: “Think Scrum is Only for Developers? Think Again.” by @MRGottschalk on @LinkedIn #Scrum #Agile
  • Want to know more about Agile? Sign up to our free workshop #Scrum #agile
  • Does your agile process look like this? via @joescii
  • #jobs4u #jobs Scrum Master / Agile Coach #RVA #richmond #VA
  • Why do we “think” we “need” estimates? Its worth thinking about. #agile #scrum #kanban #NoEstimates
  • Check it out – FREE SCRUM EBOOK: #Scrum #Agile inspired by #KenSchwaber
Categories: Blogs

    iOS 8: The biggest iOS release ever

    Derick Bailey - new ThoughtStream - Thu, 09/18/2014 - 20:22

    I updated my iPad to iOS 8 yesterday.

    IOS8 big release

    (and, yes… I drew this on my iPad)

    Categories: Blogs

    Management Innovation is at the Top of the Innovation Stack

    J.D. Meier's Blog - Thu, 09/18/2014 - 18:13

    Management Innovation is at the top of the Innovation Stack.  

    The Innovation Stack includes the following layers:

    1. Management Innovation
    2. Strategic Innovation
    3. Product Innovation
    4. Operational Innovation

    While there is value in all of the layers, some layers of the Innovation Stack are more valuable than others in terms of overall impact.  I wrote a post that walks through each of the layers in the Innovation Stack.

    I think it’s often a surprise for people that Product or Service Innovation is not at the top of the stack.   Many people assume that if you figure out the ultimate product, then victory is yours.

    History shows that’s not the case, and that Management Innovation is actually where you create a breeding ground for ideas and people to flourish.

    Management Innovation is all about new ways of mobilizing talent, allocating resources, and building strategies.

    If you want to build an extreme competitive advantage, then build a Management Innovation advantage.  Management Innovation advantages are tough to copy or replicate.

    If you’ve followed my blog, you know that I’m a fan of extreme effectiveness.   When it comes to innovation, I’ve had the privilege and pleasure of playing a role in lots of types of innovation over the years at Microsoft.   If I look back, the most significant impact has always been in the area of Management Innovation.

    It’s the trump card.

    Categories: Blogs

    Agile Advice Book Update


    Well, last spring I announced that I was going to be publishing a collection of the best Agile Advice articles in a book.  I managed to get an ISBN number, got a great cover page design, and so it is almost done.  I’m still trying to figure out how to build an index… any suggestions would be welcome!!!  But… I’m hoping to get it published on iBooks and Amazon in the next month or two.  Let me know if you have any feedback on “must-have” Agile Advice articles – there’s still time to add / edit the contents.

    There are six major sections to the book:

    1. Basics and Foundations
    2. Applications and Variations
    3. Agile and Other Systems
    4. For Managers and Executives
    5. Bonus Chapters
    6. Agile Methods Quick Reference and Selection Guide

    The book will also have a small collection of 3 in-depth articles that have never been published here on Agile Advice (and never will be).  The three special articles are:

    1. Agile Mining at a Large Canadian Oil Sands Company
    2. Crossing the Knowing-Doing Gap
    3. Becoming a Professional Software Developer

    Again, any feedback on tools or techniques for creating a quick index section on a book would be great.  I’m using LibreOffice for my word processor on a Mac.  I’m cool with command-line tools if there’s something good!


    Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
    Categories: Blogs

    Compromises in setting up teams in a scaled framework

    Scrum 4 You - Thu, 09/18/2014 - 07:16

    Taking the first steps in setting up a scaled framework with agile teams in a complex environment does not always allow you to set up teams by the book. Rather than dragging people away from their current activities and having everybody dislike agile before we have even really started using and living it, I try to make compromises.

    By working in very short sprints of one week, I allow team members to switch between teams at the start of each new sprint. For that one sprint, however, they commit to being fully engaged in the sprint backlog and team activities of one single team. Product owners can plan with the team members’ knowledge for some sprints. Every other sprint or so, the team can experience what it is capable of on its own, build confidence, and see where it still needs more knowledge transfer and help. Multi-skilled team members can learn, step by step, to let go of their „baby“ and still stay engaged in that part of the product.

    An alternative in the same situation might be:
    If there is a huge overlap in the skills needed for two teams or two parts of the product, try merging them into a single team. Build rather large teams in the beginning, covering two backlogs (it should not be more). Creating a team backlog rather than a product backlog lets everyone learn about the other parts of the product simply by being on the team, and does not force team members to leave one part of the product without their experience and knowledge.
    Forcing the teams into a decision for one part or the other might lead to the opposite: double work because only one part is made transparent, or frustrated „left alone“ teams and multi-skilled team members with feelings of guilt.

    What else have you tried?

    Related posts:

    1. Sprint Planning with geographically dispersed teams located in different timezones
    2. Scrum Teams – No Part Time!
    3. How internationally distributed Teams can improve their Sprint Planning 2

    Categories: Blogs

    The Grumpy Scrum Master

    Agile Tools - Thu, 09/18/2014 - 06:54

    grumpy dwarf

    “Going against men, I have heard at times a deep harmony
    thrumming in the mixture, and when they ask me what
    I say I don’t know. It is not the only or the easiest
    way to come to the truth. It is one way.” – Wendell Berry

    I looked in the mirror the other day and guess what I saw? The grumpy scrum master. He comes by sometimes and pays me a visit. Old grumpy looked at me and I looked at him and together we agreed that perhaps, just this one time, he just might be right.

    We sat down and had a talk. It turns out he’s tired and cranky and seen this all before. I told him I can relate. We agreed that we’ve both done enough stupid to last a couple of lifetimes. No arguments there. He knows what he doesn’t like – me too! After a little debate, we both agreed we don’t give a damn what you think.

    So we decided it was time to write a manifesto.

    We grumps have come to value:

    Speaking our mind over listening to whiners

    Working hard over talking about it

    Getting shit done over following a plan

    Disagreeing with you over getting along

    That is, while the items on the right are a total waste of time, the stuff on the left is much more gratifying.


    Filed under: Coaching, Humor, Scrum Tagged: bad attitude, grumpy, Humor, Scrum, Scrum Master
    Categories: Blogs

    How to create an Agile Burn-Up Graph in Google Docs - Kane Mar - Wed, 09/17/2014 - 22:01

    A Burn-Up graph is simply a stacked graph showing the total amount of work the team has in their product backlog over a number of Sprints. I’ve used a variety of different Agile Burn-Up graphs over the years. Here’s one of my favourites:


    Agile Burn-Up Graph


    I created this with Excel while working with an insurance company based in Mayfield, Ohio. In this article I’ll show you how to create something similar using Google docs.

    Understanding the Burn-up Graph

    This graph (above) shows the total amount of work in the product backlog (top line of the graph), the amount of work completed (yellow) and the amount of work remaining (red and blue). The amount of work remaining is divided into estimated work (red) and un-estimated work (blue), which we guessed at using a very coarse scale. At the start you can see the total amount of work on the backlog increase until the fourth Sprint, as indicated by the rising top line of the graph.

    After the fourth Sprint the team decided that they needed to start breaking down the un-estimated work into small User Stories and so you can see an increase in the red area of the graph and a decline in the blue. We continued to complete work, so the yellow area continued to grow.

    By Sprint 12 we had completely broken down all the large bodies of work and had a well refined backlog.
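The bookkeeping behind the graph is simple arithmetic. As an illustration (with made-up numbers, not the data from the graph above), here is a minimal Python sketch of the three stacked series:

```python
# Each Sprint we track three stacked series: work completed, estimated
# remaining work, and un-estimated remaining work (guessed on a coarse scale).
# The top line of the burn-up graph is the sum of all three.

sprints = [
    # (completed, estimated_remaining, unestimated_remaining) -- hypothetical data
    (5, 20, 60),   # early on, most of the backlog is un-estimated
    (12, 35, 40),  # large items are broken down: estimated grows, un-estimated shrinks
    (20, 45, 20),
    (30, 55, 0),   # backlog fully refined: nothing un-estimated remains
]

for n, (done, estimated, unestimated) in enumerate(sprints, start=1):
    total = done + estimated + unestimated  # the graph's top line
    print(f"Sprint {n}: total={total}, done={done}, "
          f"estimated={estimated}, un-estimated={unestimated}")
```

The top line rises whenever new work is added faster than work is completed, which is exactly the effect described in the first four Sprints above.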

    Creating the Graph in Google Spreadsheets

    The Google graph that I’ve created is a little bit simpler than the graph above. It shows the total amount of work in the product, the total amount of work added to the product backlog, and the total amount of work completed. You can get the Google Spreadsheet document to create this graph here.

    This is what it looks like:


    Agile Product Burn-up Graph


    The spreadsheet contains two tabs. The first tab contains the data necessary for the graph, and the second tab contains the graph. To start using this graph,

    1. Make a copy of the Google Spreadsheet.
    2. Enter the total of the team’s estimates in the product backlog into the first column of Series A.
    3. Thereafter, all you need to record is the total of the team’s estimates completed at the end of each Sprint, and
    4. the total of the team’s estimates added to the Product Backlog (by the Product Owner) during the Sprint.


    Product Burn-up Graph Google Spreadsheet


    You can get the Google Spreadsheet document to create this graph here.

    Categories: Blogs

    Container Usage Guidelines

    Jimmy Bogard - Wed, 09/17/2014 - 21:25

    Over the years, I’ve used and abused IoC containers. While the different tools have come and gone, I’ve settled on a set of guidelines on using containers effectively. As a big fan of the Framework Design Guidelines book and its style of “DO/CONSIDER/AVOID/DON’T”, I tried to capture what has made me successful with containers over the years in a series of guidelines below.

    Container configuration

    Container configuration typically occurs once at the beginning of the lifecycle of an AppDomain, creating an instance of a container as the composition root of the application, and configuring any framework-specific service locators. StructureMap combines scanning for convention-based registration with Registries for component-specific configuration.

    X AVOID scanning an assembly more than once.

    Scanning is somewhat expensive, as scanning involves passing each type in an assembly through each convention. A typical use of scanning is to target one or more assemblies, find all custom Registries, and apply conventions. Conventions include generics rules, matching common naming conventions (IFoo to Foo) and applying custom conventions. A typical root configuration would be:

    var container = new Container(cfg =>
        cfg.Scan(scan => {
            scan.AssemblyContainingType<NHibernateRegistry>(); // target assemblies by a contained type
            scan.LookForRegistries();                          // find all custom Registries
            scan.WithDefaultConventions();                     // match IFoo to Foo
        }));
    Component-specific configuration is then separated out into individual Registry objects, instead of mixed with scanning. Although it is possible to perform both scanning and component configuration in one step, separating component-specific registration in individual registries provides a better separation of conventions and configuration.

    √ DO separate configuration concerning different components or concerns into different Registry classes.

    Individual Registry classes contain component-specific registration. Prefer smaller, targeted Registries, organized around function, scope, component etc. All container configuration for a single 3rd-party component organized into a single Registry makes it easy to view and modify all configuration for that one component:

    public class NHibernateRegistry : Registry {
        public NHibernateRegistry() {
            For<Configuration>().Singleton().Use(c => new ConfigurationFactory().CreateConfiguration());
            For<ISessionFactory>().Singleton().Use(c => c.GetInstance<Configuration>().BuildSessionFactory());
            For<ISession>().Use(c => {
                var sessionFactory = c.GetInstance<ISessionFactory>();
                var orgInterceptor = new OrganizationInterceptor(c.GetInstance<IUserContext>());
                return sessionFactory.OpenSession(orgInterceptor);
            });
        }
    }
    X DO NOT use the static API for configuring or resolving.

    Although StructureMap exposes a static API in the ObjectFactory class, it is considered obsolete. If a static instance of a composition root is needed for 3rd-party libraries, create a static instance of the composition root Container in application code.

    √ DO use the instance-based API for configuring.

    Instead of using ObjectFactory.Initialize and exposing ObjectFactory.Instance, create a Container instance directly. The consuming application is responsible for determining the lifecycle/configuration timing and exposing container creation/configuration as an explicit function allows the consuming runtime to determine these (for example, in a web application vs. integration tests).

    X DO NOT create a separate project solely for dependency resolution and configuration.

    Container configuration belongs in applications requiring those dependencies. Avoid convoluted project reference hierarchies (i.e., a “DependencyResolution” project). Instead, organize container configuration inside the projects needing them, and defer additional project creation until multiple deployed applications need shared, common configuration.

    √ DO include a Registry in each assembly that needs dependencies configured.

    In the case where multiple deployed applications share a common project, include inside that project container configuration for components specific to that project. If the shared project requires convention scanning, then a single Registry local to that project should perform the scanning of itself and any dependent assemblies.

    X AVOID loading assemblies by name to configure.

    Scanning allows adding assemblies by name, as in scan.Assembly("MyAssembly"). Since assembly names can change, reference a specific type in that assembly to be registered instead.
    Lifecycle configuration

    Most containers allow defining the lifecycle of components, and StructureMap is no exception. Lifecycles determine how StructureMap manages instances of components. By default, instances for a single request are shared. Ideally, only Singleton instances and per-request instances should be needed. There are cases where a custom lifecycle is necessary, to scope a component to a given HTTP request (HttpContext).

    √ DO use the container to configure component lifecycle.

    Avoid creating custom factories or builder methods for component lifecycles. Your custom factory for building a singleton component is probably broken, and lifecycles in containers have undergone extensive testing and usage over many years. Additionally, building factories solely for controlling lifecycles leaks implementation and environment concerns to services consuming lifecycle-controlled components. In the case where instantiation needs to be deferred or lifecycle needs to be explicitly managed (for example, instantiating in a using block), depending on a Func<IService> or an abstract factory is appropriate.

    √ CONSIDER using child containers for per-request instances instead of HttpContext or similar scopes.

    Child/nested containers inherit configuration from a root container, and many modern application frameworks include the concept of creating scopes for requests. Web API in particular creates a dependency scope for each request. Instead of using a lifecycle, individual components can be configured for an individual instance of a child container:

    public IDependencyScope BeginScope() {
        IContainer child = this.Container.GetNestedContainer();
        var session = new ApiContext(child.GetInstance<IDomainEventDispatcher>());
        var resolver = new StructureMapDependencyResolver(child);
        var provider = new ServiceLocatorProvider(() => resolver);
        child.Configure(cfg => {
            cfg.For<ApiContext>().Use(session);
            cfg.For<ServiceLocatorProvider>().Use(provider);
        });
        return resolver;
    }
    Since components configured for a child container are transient for that container, child containers provide a mechanism to create explicit lifecycle scopes configured for that one child container instance. Common applications include creating child containers per integration test, MVVM command handler, web request etc.

    √ DO dispose of child containers.

    Containers contain a Dispose method, so if the underlying service locator extensions do not dispose directly, dispose of the container yourself. Containers, when disposed, will call Dispose on any non-singleton component that implements IDisposable. This will ensure that any resources potentially consumed by components are disposed properly.
    Component design and naming

    Much of the negativity around DI containers arises from their encapsulation of building object graphs. A large, complicated object graph is resolved with a single line of code, hiding potentially dozens of disparate underlying services. Common to those new to Domain-Driven Design is the habit of creating interfaces for every small behavior, leading to overly complex designs. These design smells are easy to spot without a container, since building complex object graphs by hand is tedious. DI containers hide this pain, so it is up to the developer to recognize these design smells up front, or avoid them entirely.

    X AVOID deeply nested object graphs.

    Large object graphs are difficult to understand, but easy to create with DI containers. Instead of a strict top-down design, identify cross-cutting concerns and build generic abstractions around them. Procedural code is perfectly acceptable, and many design patterns and refactoring techniques exist to address complicated procedural code. The behavioral design patterns, combined with refactorings for long or complicated code, can be especially helpful. Starting with the Transaction Script pattern keeps the number of structures low until the code exhibits enough design smells to warrant refactoring.

    √ CONSIDER building generic abstractions around concepts, such as IRequestHandler<T>, IValidator<T>.

    When designs do become unwieldy, breaking down components into multiple services often leads to service-itis, where a system contains numerous services, each used in only one context or execution path. Instead, behavioral patterns such as the Mediator, Command, Chain of Responsibility and Strategy are especially helpful in creating abstractions around concepts. Common concepts include:

    • Queries
    • Commands
    • Validators
    • Notifications
    • Model binders
    • Filters
    • Search providers
    • PDF document generators
    • REST document readers/writers

    Each of these patterns begins with a common interface:

    public interface IRequestHandler<in TRequest, out TResponse>
        where TRequest : IRequest<TResponse> {
        TResponse Handle(TRequest request);
    }

    public interface IValidator<in T> {
        ValidationResult Validate(T instance);
    }

    public interface ISearcher {
        bool IsMatch(string query);
        IEnumerable<Person> Search(string query);
    }
    Registration for these components involves adding all implementations of an interface, and code using these components requests an instance based on a generic parameter, or all instances in the case of the Chain of Responsibility pattern.
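The guidelines' examples are C#/StructureMap; as a language-neutral illustration, here is a hypothetical Python sketch (all names invented) of registering every implementation of a concept and dispatching to them as a Chain of Responsibility:

```python
# Hypothetical sketch: register all implementations of a "searcher" concept,
# then let the first matching one handle the query (Chain of Responsibility).

class Searcher:
    def is_match(self, query): raise NotImplementedError
    def search(self, query): raise NotImplementedError

class NameSearcher(Searcher):
    def is_match(self, query): return query.startswith("name:")
    def search(self, query): return ["Alice", "Bob"]

class IdSearcher(Searcher):
    def is_match(self, query): return query.isdigit()
    def search(self, query): return [f"person-{query}"]

# "Registration": collect every implementation of the concept,
# analogous to a container resolving all instances of ISearcher.
searchers = [NameSearcher(), IdSearcher()]

def search(query):
    # Walk the chain; the first searcher that matches handles the query.
    for s in searchers:
        if s.is_match(query):
            return s.search(query)
    return []

print(search("42"))        # handled by IdSearcher
print(search("name:ann"))  # handled by NameSearcher
```

A container does the "collect every implementation" step automatically; the consuming code only depends on the concept, never on the individual implementations.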

    One exception to this rule is for third-party components and external, volatile dependencies.

    √ CONSIDER encapsulating 3rd-party libraries behind adapters or facades.

    While using a 3rd-party dependency does not necessitate building an abstraction for that component, if the component is difficult or impossible to fake/mock in a test, then it is appropriate to create a facade around that component. File system, web services, email, queues and anything else that touches the file system or network are prime targets for abstraction.
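As a language-neutral sketch of the facade idea (the surrounding guidelines use C#; all names below are hypothetical), a thin adapter over a hard-to-test dependency such as email lets tests substitute a fake that records calls instead of touching the network:

```python
# Hypothetical facade over email infrastructure: production code depends on
# the abstract gateway, tests swap in a fake that records what was sent.

class EmailGateway:
    """Facade over the real email infrastructure."""
    def send(self, to, subject, body):
        raise NotImplementedError

class FakeEmailGateway(EmailGateway):
    """Test double: records sends instead of touching the network."""
    def __init__(self):
        self.sent = []
    def send(self, to, subject, body):
        self.sent.append((to, subject, body))

def notify_user(gateway: EmailGateway, user_email: str):
    # Application code sees only the facade, never SMTP details.
    gateway.send(user_email, "Welcome", "Thanks for signing up!")

gateway = FakeEmailGateway()
notify_user(gateway, "a@example.com")
print(gateway.sent)  # tests assert on recorded calls, no network needed
```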

    The database layer is a little more subtle, as requests to the database often need to be optimized in isolation from any other request. Switching database/ORM strategies is fairly straightforward, since most ORMs use a common language already (LINQ), but there are subtle differences when it comes to optimizing calls. Large projects can switch between major ORMs with relative ease, so any abstraction would limit the use of any one ORM to the least common denominator.

    X DO NOT create interfaces for every service.

    Another common misconception of SOLID design is that every component deserves an interface. DI containers can resolve concrete types without an issue, so there is no technical limitation to depending directly on a concrete type. In the book Growing Object-Oriented Software, Guided by Tests, these components are referred to as Peers, and in Hexagonal Architecture terms, interfaces are reserved for Ports.

    √ DO depend on concrete types when those dependencies are in the same logical layer/tier.

    A side effect of depending directly on concrete types is that it becomes very difficult to over-specify tests. Interfaces are appropriate when there is truly an abstraction to a concept, but if there is no abstraction, no interface is needed.

    X AVOID implementation class names that are simply the interface name without the “I”.

    StructureMap’s default conventions do match up IFoo with Foo, and this can be a convenient default behavior. But when an implementation’s name is the same as its interface without the “I”, that is a symptom that you are arbitrarily creating an interface for every service when resolving the concrete service type would be sufficient instead. In other words, the mere ability to resolve a service type by an interface is not sufficient justification for introducing an interface.

    √ DO name implementation classes based on details of the implementation (AspNetUserContext : IUserContext).

    An easy way to detect excessive abstraction is when class names are directly the interface name without the prefix “I”. An implementation of an interface should describe the implementation. For concept-based interfaces, class names describe the representation of the concept (ChangeNameValidator, NameSearcher etc.) Environment/context-specific implementations are named after that context (WebApiUserContext : IUserContext).
    Dynamic resolution

    While most component resolution occurs at the very top level of a request (controller/presenter), there are occasions when dynamic resolution of a component is necessary. For example, model binding in MVC occurs after a controller is created, making it slightly more difficult to know at controller construction time what the model type is, unless it is assumed using the action parameters. For many extension points in MVC, it is impossible to avoid service location.

    X AVOID using the container for service location directly.

    Ideally, component resolution occurs once in a request, but in the cases where this is not possible, use a framework’s built-in resolution capabilities. In Web API for example, dynamically resolved dependencies should be resolved from the current dependency scope:

    var validationProvider = actionContext
        .Request
        .GetDependencyScope()
        .GetService(typeof(IBodyModelValidator)); // service type shown for illustration

    Web API creates a child container per request and caches this scoped container within the request message. If the framework does not provide a scoped instance, store the current container in an appropriately scoped object, such as HttpContext.Items for web requests. Occasionally, you might need to depend on a service but explicitly decouple from, or control, its lifecycle. In those cases, containers support depending directly on a Func.

    √ CONSIDER depending on a Func<IService> for late-bound services.

    For cases where known types need to be resolved dynamically, instead of trying to build special caching/resolution services, you can instead depend on a constructor function in the form of a Func. This separates wiring of dependencies from instantiation, allowing client code to have explicit construction without depending directly on a container.

    public EmailController(Func<IEmailService> emailServiceProvider) {
        _emailServiceProvider = emailServiceProvider;
    }

    public ActionResult SendEmail(string to, string subject, string body) {
        using (var emailService = _emailServiceProvider()) {
            emailService.Send(to, subject, body);
        }
        return new EmptyResult();
    }

    In cases where this becomes complicated, or reflection code is needed, a factory method or delegate type explicitly captures this intent.

    √ DO encapsulate container usage with factory classes when invoking a container is required.

    The Patterns and Practices Common Service Locator defines a delegate type representing the creation of a service locator instance:

    public delegate IServiceLocator ServiceLocatorProvider();

    For code needing dynamic instantiation of a service locator, configuration code creates a dependency definition for this delegate type:

    public IDependencyScope BeginScope() {
        IContainer child = this.Container.GetNestedContainer();
        var resolver = new StructureMapWebApiDependencyResolver(child);
        var provider = new ServiceLocatorProvider(() => resolver);
        child.Configure(cfg =>
            cfg.For<ServiceLocatorProvider>().Use(provider));
        return new StructureMapWebApiDependencyResolver(child);
    }

    This pattern is especially useful if an outer dependency has a longer configured lifecycle (static/singleton) but you need a window of shorter lifecycles. For simple instances of reflection-based component resolution, some containers include automatic facilities for creating factories.

    √ CONSIDER using auto-factory capabilities of the container, if available.

    Auto-factories in StructureMap are available as a separate package, and allow you to create an interface with an automatic implementation:

    public interface IPluginFactory {
        IList<IPlugin> GetPlugins();
    }

    The AutoFactories feature will dynamically create an implementation that defers to the container for instantiating the list of plugins.
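    A sketch of how this might be wired up, assuming the StructureMap.AutoFactory package’s CreateFactory() registration (consult the package documentation for the exact API; LoggingPlugin is a hypothetical implementation):

```csharp
var container = new Container(cfg =>
{
    cfg.For<IPlugin>().Add<LoggingPlugin>();    // hypothetical plugin implementation
    cfg.For<IPluginFactory>().CreateFactory();  // auto-generated factory implementation
});

// The generated factory defers to the container at call time.
IList<IPlugin> plugins = container.GetInstance<IPluginFactory>().GetPlugins();
```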


    Categories: Blogs

    How to Enable Estimate-Free Development

    Practical Agility - Dave Rooney - Wed, 09/17/2014 - 20:06
    Most of us have been there... the release or sprint planning meeting that goes on and on and on and on. There is constant discussion over what a story means and endless debate over whether it's 3, 5 or 8 points. You're eventually bludgeoned into agreement, or simply too numb to disagree. Any way you look at it, you'll never get those 2, 4 or even 6 hours back - they're gone forever! And to what
    Categories: Blogs

    Lean Kanban Central Europe 2014

    AvailAgility - Karl Scotland - Wed, 09/17/2014 - 12:49

    Despite my best efforts, I couldn’t help but get involved in Lean Kanban Central Europe again this year, and have even taken on more of a role by helping co-chair the Kanban track. It’s always a great event (one of my favourites) because the people and content are such high calibre. This year looks to be no different.

    This year I’m also running a workshop (as previously announced). I hope to see you there!


    Categories: Blogs

    The Lego Story, or the Secret of Successful Scaling

    Scrum 4 You - Wed, 09/17/2014 - 07:30

    When we talk about scaling, we tend to think of upheaval, reorganization and transformation. We have grand schemas and org charts in our heads. A fine object lesson in how such scaling efforts can go badly wrong is given by David Robertson in his book “Brick by Brick”, which tells the story of the Danish Lego Group.

    Two moments in the company’s history are especially fascinating. The first lies in the early postwar years, when Lego turned itself from a maker of wooden toys into a manufacturer of modular plastic bricks. Back then, in 1946, Lego invested more than twice its annual profit to acquire a single plastic injection-molding machine. We all know how well that decision turned out.

    Lost focus

    Less well known is that around the turn of the millennium, Lego stood on the brink of insolvency. How did it come to that? In the 1990s, Lego watched the play habits of children and teenagers change. In the era of the Game Boy and the Xbox, piling up bricks to create worlds of one’s own suddenly seemed outdated. Lego responded by trying its hand at a series of new business fields such as computer games, lifestyle products, theme parks and educational concepts. In the process, the very thing that had always made Lego strong slipped out of focus. The clear, simple design that let children rebuild their own worlds of experience gave way to abstract action and sci-fi figures. The Duplo line, beloved by toddlers, was replaced by a system called Lego Explore (long since scrapped again) modeled on the toys of competitor Fisher-Price. Lego had grown in many directions, but this diversification nearly broke the company’s neck.

    An internal analysis in 2004 found that 94% of Lego’s building sets were unprofitable. Alongside the failed diversification, this was also the result of a failed system architecture. Between 1997 and 2004, the number of distinct bricks had exploded from 6,000 to over 14,000, and the color variants had risen from the original six to over 50. Any buyer or logistics manager can imagine how much extra effort it means for production, warehousing and shipping when new elements fit only a single set. The integrity of the brand suffered as well: suddenly there was not one police minifigure but eight, with minimal differences in configuration. Individual product lines got their own minifigures, some modeled on the faces of their designers. Tradition, in the most literal sense (tra-dere: to hand over), had been lost. Adults could no longer find the Lego of their childhood, and children could make little of the brave new worlds.

    A return to core strengths

    The road back to profitability was as simple as it was brilliant. Robertson recounts how the penny dropped for the newly appointed CEO Jørgen Vig Knudstorp, then in his mid-30s:

    “For children and their parents, the benefits of a play system were obvious: combining bricks in almost any way they wanted fired kid’s creativity and imagination and delivered a singularly unique building experience. But for Knudstorp, his eureka moment came when he realized the Lego System is not just a play system, it’s also a business system. (…) Instead of following the industry norm of striving to come up with one-hit wonders, LEGO should create a coherent, expandable universe of toys. A Lego system of toys (…) would build familiarity and a sense of community around Lego.”

    This insight led, for example, to the rule that at least 70% of all parts in future sets had to come from the existing inventory. Or to the most experienced designers and developers regaining the authority to shape the development process from the very beginning. Or to in-house software development being abandoned, for lack of experience, in favor of partnerships.
    The Lego story is an object lesson in scaling. On the verge of its own collapse, Lego recognized that real growth can happen only in fields where the company is strong. In the nineties, Lego chased trends that had little to do with its own identity, and it acted correspondingly helplessly. Until it finally recognized the strengths it had always had, and began to focus on them consistently. There, in the closed ecosystem of its brick worlds, growth succeeded. By now the results are showing up financially as well: last year the Lego Group reported a return on sales of 34% on revenue of 3.1 billion euros.


    The Vitra-Haus in Weil am Rhein. The architects Herzog & de Meuron describe the stacking of twelve houses across five storeys as “domestic scale”. The archetypal house (Ur-Haus) serves as the basic form. Source:

    Related posts:

    1. 1 Euro wer zu spät kommt | Daily Scrum | Bärenherz
    2. Die Kraft der Begeisterung
    3. Mehr wissen! Moderationstraining

    Categories: Blogs

    Role != Job

    Agile Tools - Wed, 09/17/2014 - 07:16


    When I talk to folks about Scrum, one of the points I make sure to cover is the holy trinity, the three basic roles in Scrum: Product Owner, Scrum Master, and Team. I’m starting to think I must be doing it wrong because when I talk about roles, somehow that role manifests itself as a job. Let me back up a step and see if I can explain what I mean. To me, a role is a transitory responsibility that anyone can take on. It’s akin to what actors do. Actors take different roles all the time. But when an actor takes a role, say as a teacher, they act in every way like a teacher, without actually being a teacher. They do it and then leave it behind and move on to the next role. They may perform the role so well that you can’t tell the difference between the actor and the teacher, but to the actor teaching is still just a role.

    Now there are people for whom teaching is a job. A job is very different from a role. You are hired for a job. A job is something that you identify with and are assigned to. A job, at least for some, becomes something that they identify strongly with (i.e. “I am a teacher.” or “Teaching is what I do.”). A job comes with identity, some feeling of authenticity and permanence. Typically we hire people to perform jobs.

    According to this definition, jobs and roles are very different beasts. However, people have a hard time keeping this distinction in mind. We tend to take roles and turn them into jobs. That’s unfortunate, because a role is meant to be something transitory, something that is filled temporarily. It is meant to be worn like a costume and then passed on to the next wearer. When you turn a role into a job, you risk perverting its purpose. When you turn a role into a job, you make it very difficult for others to share it – it’s hard to swap back and forth. When you make a role into a job, people get surprisingly defensive about it. It becomes something that they identify with very closely. If you try to tell them that anybody can do it, they tend to get all fussy and upset. They start to try and protect their job with clever artifacts like certifications – they’ll do anything to make themselves unique enough to keep that job. It’s an identity trap.

    Here is how I see this problem manifest itself with Scrum teams: You sell them on Scrum and teach them how it works. Every team has a Scrum Master and a Product Owner. So what do they do? They run out and hire themselves some people to fill the jobs of Scrum Masters and Product Owners. They get their teams sprinting and start delivering quickly – hey, now they’re agile! Only they’re not really. You see, as you face the challenge and complexity of modern day business, the team often needs to change. That person you hired as the Scrum Master? You may be best served to swap that role with somebody else. Maybe a developer or QA on the team. The ability to move that role around to different actors could be very useful. But you can’t do that now because it’s no longer a role, it’s somebody’s job. And you can’t mess with their job without seriously upsetting somebody. The end result is that your organization effectively can’t change. You limit your agility.

    The bottom line is that I believe that the roles in Scrum were never intended to be jobs. To make those roles into jobs risks limiting your agility.

    Filed under: Agile, Coaching, Scrum, Teams Tagged: Agile, agility, jobs, roles, Scrum
    Categories: Blogs

    Reliable Scrum

    TV Agile - Tue, 09/16/2014 - 22:04
    This presentation shows how an approach based on Critical Chain Project Management (CCPM) can make Agile approaches reliable (Reliable Scrum) while, of course, protecting the Agile part. The next step is to make them even faster (and more Agile) with ideas from Lean and the Theory of Constraints (TOC). In the third step you […]
    Categories: Blogs

    XProgNew – today’s problem - Tue, 09/16/2014 - 20:43

    With Bill Tozier, I’m working on re-basing from WordPress to Ruby/Sinatra etc. We tweeted a bit about an issue we ran into today. It goes like this:

    The image we had of the new site is that each article would be stored in a folder named to match the WordPress “slug” for the article, such as “…/articles/xprognew-todays-problem/”. We’re planning to keep the article, its metadata, and any other assets, such as images, right in the article’s folder.

    Writing will be in Markdown. So I’d expect to write an article that looks like this:

    Some Title

    Here's a paragraph of text. Assume it runs on and on ...

    ![A picture](picture7.png)

    Markdown assumes that the construct above specifies a picture in the same folder as the article. It will generate, roughly, <img src='picture7.png' />.

    This would clearly be the easiest way to specify the picture, and it seems to us to make sense to keep the pictures and metadata with the article. And the folder name matching the slug makes converting the existing site much easier. (Existing images will not have that kind of link, but are all in a WordPress images folder together somewhere. We’ve not looked yet at what to do about that. We’re trying to make new articles easier and if the conversion is tricky so be it.)

    We are using Sinatra and kramdown. The Markdown above does generate the image statement shown. However, somewhere inside or above Sinatra, there is a built-in assumption that assets like images will be in a folder named “public”, and the article is rendered as if it, itself, were in public, so that the image call looks into public.

    We tried navigating out of public with things like “../articles/myslug/” but this just doesn’t work. I’m told by Z Spencer that Rackup, or whatever its name is, keeps you in public as a security measure. Good for it.

    The result, though, is that we see few options:

    • Put all images in public. This will work, and is not unlike WordPress, but we don’t like it.
    • Put images in a subfolder of public, and type the folder name into the image call. This ties the article contents to the name of its slug and we don’t like that.
    • Put images in a subfolder of public, and do an on-the-fly substitution of the file name as we render the file. This requires a search and replace on the article and we don’t like that either.

    There are probably other options but we see none that we like. And we see, today, no way to override where public points, except somewhere else under public.
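    For what it’s worth, here is a sketch (not from the original post; names illustrative) of the kind of helper a custom Sinatra route could use to serve images out of an article’s own folder, while keeping the can’t-escape-the-root property that the public-folder behavior provides for free:

```ruby
require 'pathname'

ARTICLES_ROOT = Pathname.new('articles').expand_path

# Map a request like GET /articles/:slug/:image to a file inside that
# article's folder, refusing any path that escapes the articles root.
def article_asset_path(slug, image)
  candidate = (ARTICLES_ROOT + slug + image).expand_path
  unless candidate.to_s.start_with?(ARTICLES_ROOT.to_s + File::SEPARATOR)
    raise ArgumentError, 'path escapes articles root'
  end
  candidate
end

# In Sinatra this would back a route such as:
#   get('/articles/:slug/:image') { send_file article_asset_path(params[:slug], params[:image]) }
```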

    Anyway, that’s the problem I’ve been tweeting about.

    Categories: Blogs

    R: ggplot – Plotting multiple variables on a line chart

    Mark Needham - Tue, 09/16/2014 - 18:59

    In my continued playing around with meetup data I wanted to plot the number of members who join the Neo4j group over time.

    I started off with the variable ‘byWeek’ which shows how many members joined the group each week:

    > head(byWeek)
    Source: local data frame [6 x 2]
            week n
    1 2011-06-02 8
    2 2011-06-09 4
    3 2011-06-30 2
    4 2011-07-14 1
    5 2011-07-21 1
    6 2011-08-18 1

    I wanted to plot the actual count alongside a rolling average for which I created the following data frame:

    library(zoo)   # provides rollmean
    joinsByWeek = data.frame(actual = byWeek$n, 
                             week = byWeek$week,
                             rolling = rollmean(byWeek$n, 4, fill = NA, align=c("right")))
    > head(joinsByWeek, 10)
       actual       week rolling
    1       8 2011-06-02      NA
    2       4 2011-06-09      NA
    3       2 2011-06-30      NA
    4       1 2011-07-14    3.75
    5       1 2011-07-21    2.00
    6       1 2011-08-18    1.25
    7       1 2011-10-13    1.00
    8       2 2011-11-24    1.25
    9       1 2012-01-05    1.25
    10      3 2012-01-12    1.75

    The next step was to work out how to plot both ‘rolling’ and ‘actual’ on the same line chart. The easiest way is to make two calls to ‘geom_line’, like so:

    ggplot(joinsByWeek, aes(x = week)) + 
      geom_line(aes(y = rolling), colour="blue") + 
      geom_line(aes(y = actual), colour = "grey") + 
      ylab(label="Number of new members") + 
      xlab("Week Number")

    Alternatively we can make use of the ‘melt’ function from the reshape library…

    library(reshape)   # provides melt
    meltedJoinsByWeek = melt(joinsByWeek, id = 'week')
    > head(meltedJoinsByWeek, 20)
       week variable value
    1     1   actual     8
    2     2   actual     4
    3     3   actual     2
    4     4   actual     1
    5     5   actual     1
    6     6   actual     1
    7     7   actual     1
    8     8   actual     2
    9     9   actual     1
    10   10   actual     3
    11   11   actual     1
    12   12   actual     2
    13   13   actual     4
    14   14   actual     2
    15   15   actual     3
    16   16   actual     5
    17   17   actual     1
    18   18   actual     2
    19   19   actual     1
    20   20   actual     2

    …which then means we can plot the chart with a single call to geom_line:

    ggplot(meltedJoinsByWeek, aes(x = week, y = value, colour = variable)) + 
      geom_line() + 
      ylab(label="Number of new members") + 
      xlab("Week Number") + 
      scale_colour_manual(values=c("grey", "blue"))


    Categories: Blogs

    How To Think Like a Microsoft Executive

    J.D. Meier's Blog - Tue, 09/16/2014 - 18:06

    One of the things I do, as a patterns and practices kind of guy, is research and share success patterns. 

    One of my more interesting bodies of work is my set of patterns and practices for successful executive thinking.

    A while back, I interviewed several Microsoft executives to get their take on how to think like an effective executive.

    While the styles vary, what I enjoyed is the different mindset that each executive uses as they approach the challenge of how to change the world in a meaningful way.

    5 Key Questions to Share Proven Practices for Executive Thinking

    My approach was pretty simple.   I tried to think of a simple way to capture and distill the essence. I originally went the path of identifying key thinking scenarios (changing perspective, creating ideas, evaluating ideas, making decisions, making meaning, prioritizing ideas, and solving problems) ... and the path of identifying key thinking techniques (blue ocean/strategic profile, PMI, Six Thinking Hats, PQ/PA, BusinessThink, Five Whys, ... etc.) -- but I think just a simple set of 5 key questions was more effective.

    These are the five questions I ended up using:

    1. What frame do you mostly use to evaluate ideas? (for example, one frame is: who's the customer? what's the problem? what's the competition doing? what does success look like?)
    2. How do you think differently, than other people might, that helps you get a better perspective on the problem?
    3. How do you think differently, than other people might, that helps you make a better decision?
    4. What are the top 3 questions you ask yourself the most each day that make the most difference?
    5. How do you get in your best state of mind or frame of mind for your best thinking?

    The insights and lessons learned could fill books, but I thought I would share three of the responses that I tend to use and draw from on a regular basis …

    Microsoft Executive #1

    1) The dominant framework I like to use for decisions is: how can we best help the customer? Prioritizing the customer is nearly always the right way to make good decisions for the long term. While one has to have awareness of the competition and the like, excessively “following taillights” usually fails. The best lens through which to view the competition is, “how are they helping their customers, and is there anything we can learn from them about how to help our own customers?”

    2) I don’t think that there is anything magical about executive thinking. The one thing we hopefully have is a greater breadth and depth of experience on key decisions. We use this experience to discern patterns, and those patterns often help us make good decisions on relatively little data.

    3) Same answer as #2.

    4) How can we help our customers more? Are we being realistic in our assessments of ourselves, our offerings and the needs of our customers? How can we best execute on delivering customer value?

    5) It is key to keep some discretionary time for connecting with customers, studying the competition and the marketplace and “white space thinking.” It is too easy to get caught up on being reactionary to lots of short-term details and therefore lose the time to think about the long term.

    Microsoft Executive #2

    There are three things that I think about as it relates to leading organizations: Vision, People and Results. Some of the principles in each of these components will apply to any organization, whether the organization's goal is to make profit, achieve strategic objectives, or make non-profit social impact.


    In setting the vision and top level objectives, it is very important to pick the right priorities. I like to focus on the big rocks instead of small rocks at the vision-setting stage. In today's world of information overload, it is really easy to get bombarded with too many things needing attention. This can dilute your focus across too many objectives. The negative effect of not having a clear concentrated focus multiplies rapidly across many people when you are running a large organization. So, you need to first ask yourself what are the few ultimate results that are the objectives of your organization and then stay disciplined to focus on those objectives. The ultimate goal might be a single objective or a few, but should not be a laundry list. It is alright to have multiple metrics that are aligned to drive each objective, but the overall objectives themselves should be crisp and focused.


    The next step in running an organization is to make sure you have the right people in the right jobs. This starts with first identifying the needs of the business to achieve the vision set out above. Then, I try to figure out what types of roles are needed to meet those needs. What will the organization structure look like? What kind of competencies, that is, attributes, skills, and behaviors, are needed in those roles to meet expected results? If there is a mismatch between the role and the person, it can set up both the employee and the business for failure. So, this is a crucial step in making sure you have a well-running organization.

    Once you have the right people in the right jobs, I try to make sure that the work environment encourages people to do their best. Selfless leadership, where the leaders have a sense of humility and are committed to the success of the business over their own self, is essential. An inclusive environment where everyone is encouraged to contribute is also a must. People's experience with the organization is for the most part shaped by their interaction with their immediate manager. Therefore, it is very important that a lot of care goes into selecting, encouraging and rewarding people managers who can create a positive environment for their employees.


    Finally, the organization needs to produce results towards achieving the vision and the objectives you set out. Do not confuse results with actions. You need to make sure you reward people based on performance towards producing results instead of actions. When setting commitments for people, you need to be thoughtful about what metrics you choose so that you incent the right behavior. This again helps build an environment that encourages people to do their best. Producing results also requires that you have a compelling strategy for the organization. Thus, you need to stay on top of where the market and customers are. This will help you focus your organization's efforts on anticipating customer needs, and proactively taking steps to delight customers. This is necessary to ensure that the organization's resources are prioritized towards those efforts that will produce the highest return on investment.

    Microsoft Executive #3
    1. Different situations call for different pivots.  That said, I most often start with the customer, as technology is just a tool; ultimately, people are trying to solve problems.  I should note, however, that “customer” does not always mean the person who licenses or uses our products and/or services.  While they may be the focus, my true “customer” is sometimes the business itself (and its management), a business group, or a government (addressing a policy issue).  Often, the problem presented has to be solved in a multi-disciplinary way (e.g., a mixture of policy changes, education, technological innovation, and business process refinements).  Think, for example, about protecting children on-line.  While technology may help, any comprehensive solution may also involve government laws, parental and child education, a change in website business practices, etc.
    2. As noted above, the key is thinking in a multi-disciplinary way. People gravitate to what they know; thus the old adage that “if you have a hammer, everything you see is a nail.” Think more broadly about an issue, and a more interesting solution to the customer’s problem may present itself. (Scenario focused engineering works this way too.)
    3. It is partially about thinking differently (as discussed above), but also about seeking the right counsel.  There is an interesting truth about hard Presidential decisions.  The more sensitive an issue, the fewer the number of people consulted (because of the sensitivity) and the less informed the decision.  Obtaining good counsel – while avoiding the pitfall of paralysis (either because you have yet to speak to everyone on the planet or because there was not universal consensus on what to do next) – is the key.
    4. (1) What is the right thing to do? (This may be harder than it looks because the different customers described above may have different interests.  For example, a costly solution may be good for customers but bad for shareholders.  A regulatory solution might be convenient for governments but stifle technological innovation.)  (2) What unintended consequences might occur? (The best laid plans….).  (3) Will the solution be achievable?
    5. I need quiet time; time to think deeply.

    The big things that really stand out for me are using the customer as the North Star, balancing with multi-disciplinary perspectives, evaluating multiple, cascading ramifications, and leading with vision.

    You Might Also Like

    100 Articles to Sharpen Your Mind

    Rituals for Results

    Thinking About Career Paths

    Categories: Blogs

    Cuttable Scope

    J.D. Meier's Blog - Tue, 09/16/2014 - 17:22

    Early on in my Program Management career, I ran into challenges around cutting scope.

    The schedule said the project was done by next week, but scope said the project would be done a few months from now.

    On the Microsoft patterns & practices team, we optimized around “fix time, flex scope.” This ensured we were on time and on budget, and it helped constrain risk. Plus, as soon as you start chasing scope, you become a victim of scope creep, and create a runaway train. It’s better to get smart people shipping on a cadence, and focus on creating incremental value. If the trains leave the station on time, then if you miss a train, you know you can count on the next train. Plus, this builds a reputation for shipping and execution excellence.

    And so I would have to cut scope, and feel the pains of impact ripple across multiple dependencies.

    Without a simple chunking mechanism, it was a game of trying to cut features and trying to figure out which requirements could be completed and still be useful within a given time frame.

    This is where User Stories and System Stories helped.  

    Stories created a simple way to chunk up value.   Stories help us put requirements into a context and a testable outcome, share what good looks like, and estimate our work.  So paring stories down is fine, and a good thing, as long as we can still achieve those basic goals.

    Stories help us create Cuttable Scope.  

    They make it easier to deliver value in incremental chunks.

    A healthy project start includes a baseline set of stories that help define a Minimum Credible Release, and additional stories that would add additional, incremental value.

    It creates a lot of confidence in your project when there is a clear vision for what your solution will do, along with a healthy path of execution: a baseline release plus a pipeline of additional value, chunked up in the form of user stories that your stakeholders and user community can relate to.

    You Might Also Like

    Continuous Value Delivery the Agile Way

    Experience-Driven Development

    Kanban: The Secret of High-Performing Teams at Microsoft

    Minimum Credible Release (MCR) and Minimum Viable Product (MVP)

    Portfolios, Programs, and Projects

    Categories: Blogs

    The ScrumMaster is Responsible for What Artifacts?


    Organizations like to have clear role definitions, clear processes outlined and clear documentation templates.  It’s just in the nature of bureaucracy to want to know every detail, to capture every dotted “i” and crossed “t”, and to use all that information to control, monitor, predict and protect.  ScrumMasters should be anti-bureaucracy.  Not anti-process, not anti-documentation, but constantly on the lookout for process and documentation creep.

    To help aspiring ScrumMasters, particularly those who come from a formal Project Management background, I have here a short list of exactly which artifacts the ScrumMaster is responsible for.

    - None – the ScrumMaster is a facilitator and change agent and is not directly responsible for any of the Scrum artifacts (e.g. Product Backlog) or traditional artifacts (e.g. Gantt Chart).

    Optional artifacts that can help the ScrumMaster in that facilitation and change-agent work include:

    - Obstacles or impediments “backlog” - a list of all the problems, obstacles, impediments and challenges that the Scrum Team is facing.  These obstacles can be identified by Team Members at any time, but particularly during the Daily Scrum or the Retrospective.
    - Definition of “Done” gap report, every Sprint – a comparison of how “done” the Team’s work is during Sprint Review vs. the corporate standards required to actually ship an increment of the Team’s work (e.g. unit testing done every Sprint, but not system testing).
    - Sharable retrospective outcomes report, every Sprint – an optional report from the Scrum Team to outside stakeholders including other Scrum Teams.  Current best practice is that the retrospective is a private meeting for the members of the Scrum Team and that in order to create a safe environment, the Scrum Team only shares items from the retrospective if they are unanimously agreed.  Outsiders are not welcome to the retrospective.
    - Sprint burndown chart every Sprint – a chart that tracks the amount of work remaining at the end of every day of the Sprint, usually measured in number of tasks.  This chart simply helps a team to see if their progress so far during a Sprint is reasonable for them to complete their work.
    - State of Scrum report, every Sprint – possibly using a checklist or tool such as the “Scrum Team Assessment” (shameless plug alert!).
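
    The Sprint burndown described above is simple enough to sketch in a few lines. This is a minimal illustration, not a tool from the post: the `burndown` function name and the sample task counts are invented, and the "ideal" line is the usual straight-line burn from the starting count to zero.

```python
def burndown(remaining):
    """Return (day, tasks_left, ideal_left) for each day of the Sprint.

    `remaining` holds the number of open tasks at the end of each day;
    the ideal line burns linearly from the starting count down to zero.
    """
    start, days = remaining[0], len(remaining)
    return [(day, left, start * (days - day) / days)
            for day, left in enumerate(remaining, start=1)]

# Invented example data: tasks remaining at the end of each of 8 days.
for day, left, ideal in burndown([20, 17, 16, 12, 9, 5, 2, 0]):
    status = "on/ahead of pace" if left <= ideal else "behind pace"
    print(f"Day {day}: {left} tasks remaining (ideal {ideal:.1f}) - {status}")
```

    As the post says, the chart simply helps the team judge whether its progress so far makes finishing the Sprint plausible; the comparison against the ideal line is a conversation starter, not a performance metric.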

    Artifacts the ScrumMaster should avoid as symptoms of bureaucratic creep:

    - minutes of Scrum meetings
    - process compliance audit reports
    - project administrative documents (e.g. status reports, time sheets)

    Artifacts that belong to other roles:

    - project charter (often recommended for the Product Owner, however)
    - project plans (this is done by the Product Owner and the Scrum Team with the Product Backlog)
    - any sort of up-front technical or design documents

    The ScrumMaster is not a project manager, not a technical lead, not a functional manager, and not even a team coach.  There are aspects of all of those roles in the ScrumMaster role, but it is best to think of the role as completely new and focused on two things:
    - improving how the team uses Scrum
    - helping the team to remove obstacles and impediments to getting their work done.

    Try out our Virtual Scrum Coach with the Scrum Team Assessment tool – just $500 for a team to get targeted advice and great how-to information. Please share!

    Telling Executive Stories

    Leading Agile - Mike Cottmeyer - Tue, 09/16/2014 - 15:31

    Delivery teams manage and deliver value using user stories as their tool. These teams tell stories about who, what, why, and acceptability using the standard form, “As a <persona>, I want <capability> so that <delivered value> occurs,” and the behavior-acceptance form, “Given <context>, when <action occurs>, then <consequence>.” These stories form the foundation of repeatable delivery and management of value.
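
    The two forms are just sentence templates, which makes them easy to sketch. A minimal illustration, assuming invented helper names and sample values; only the sentence templates themselves come from the post:

```python
def user_story(persona, capability, value):
    """Standard user-story form: As a <persona>, I want <capability>
    so that <delivered value> occurs."""
    return f"As a {persona}, I want {capability} so that {value} occurs."

def acceptance(context, action, consequence):
    """Behavior-acceptance form: Given <context>, when <action occurs>,
    then <consequence>."""
    return f"Given {context}, when {action}, then {consequence}."

# Invented example values:
print(user_story("product owner", "a ranked backlog", "focused delivery"))
print(acceptance("a ranked backlog", "planning starts",
                 "the team pulls from the top"))
```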

    While these forms support delivery team conversations well, they are inadequate to support the richer conversation needed by executives to manage investment and value. What forms the basis of these stories? How do we tell stories about delivering product value to our customers and delivering investment value to our organization?

    Developing contextual story-telling focuses on the kinds of conversations the product managers, product owners, business owners, and executives have when they meet to operate and run the business. We listen to these stories and then use existing canvas templates to develop contextually relevant canvas designs. These canvases become the fabric used when beginning new stories, and continuing old stories regarding business strategy and tactics.

    To develop the canvases, we listen to these conversations and stories and develop a sense for the topics and content of the strategic and tactical discussions. The following questions frame the thinking needed to create a first-draft tool that can be used to bookmark a conversation:

    • What is the focus of each conversation?
    • What are the conversational topics?
    • What is the airtime of each topic?
    • What is the passion level of each topic?
    • In what order are the topics discussed?

    These conversations may cover the following topics:

    Product Focus Areas

    • Vision
    • Problem Space
    • Solution Space
    • Metrics
    • Costs
    • Alignment
    • Value

    Market Focus Areas

    • Revenue
    • Customers
    • Delivery Channels
    • Strengths
    • Vision
    • Value

    We also discover that there are quite a few topics of conversation that don’t quite fit into the strategic bucket and sound more like high-level tactics. These turn out to be the work of the executive and product teams. Those topics may cover the following:

    • Naming
    • Goal
    • Metrics
    • Leader or Owner
    • Customers
    • Stakeholders
    • Overview
    • Big Picture
    • Alignment
    • Solution Details

    The key is to identify the important conversations in meetings and formulate a canvas around those topics. A common mistake is to take an existing template and force the conversations to conform to it. Although tempting, this leads to disengagement and abandonment of the canvas and tools. The tools are there to support the way the team works, not to force conformity to industry luminary ideals. These canvas designs will evolve as the organization improves its prowess at portfolio management.

    The following is an example of the conversations important to an executive strategy canvas we recently developed:

    • Vision: Why pursue the strategy?
    • Customers: Who wanted it achieved?
    • Problem Space: What problems were they facing?
    • Solution Space: What solutions would work?
    • Value to the Customer: What epic stories would the customers tell?
    • Metrics:  What dials would move in the near future?
    • Value to the Organization: How does the organization benefit?
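
    A canvas like this is essentially a fixed set of sections, each anchored by a guiding question. As a rough sketch only: the section names and questions below come from the list above, while the dictionary representation and the `unanswered` helper are invented for illustration.

```python
# Strategy-canvas sections and their guiding questions (from the post).
STRATEGY_SECTIONS = {
    "Vision": "Why pursue the strategy?",
    "Customers": "Who wanted it achieved?",
    "Problem Space": "What problems were they facing?",
    "Solution Space": "What solutions would work?",
    "Value to the Customer": "What epic stories would the customers tell?",
    "Metrics": "What dials would move in the near future?",
    "Value to the Organization": "How does the organization benefit?",
}

def unanswered(canvas_answers):
    """Return the sections the team has not yet filled in, in canvas order."""
    return [s for s in STRATEGY_SECTIONS if not canvas_answers.get(s)]
```

    Treating the canvas as data rather than a document makes it easy to check in on an existing strategy conversation: any section that comes back from `unanswered` is a conversation still waiting to happen.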

    These topics of conversation are arranged canvas style so that new conversations can take place to create a new strategy, or so that existing conversations can be continued as a check-in. You may notice that, of the full range of topics available, this group focused on the customer. This is made clear by the absence of cost and revenue from their strategic conversation.

    The executives also had conversations about investments they would make in the strategy. The topics of that conversation included the following areas:

    • Goal: What is the desired outcome?
    • Metrics: What are the measures of success?
    • Customers: Who wants this?
    • Big Picture: What is the big picture?
    • Solution Details: What are possible solutions?
    • Alignment: How does this align with the strategy canvas?

    We also listened for how the executive team intended to use the tools to support their work. They decided to use the canvases in the following ways:

    Strategic Canvas

    • 90 Day True North
    • Investment Decision Filter
    • Organizational Strategic Alignment
    • Organizational Transparency

    Investment Canvas

    • Tactical deliverable investment designed to experiment with some part of the strategy
    • Flow in a work system
    • Regular discussions about discovery, validation, delivery, and evaluation

    We have created a glimpse of a strategic and work alignment system focused on portfolio management. The system and artifacts were developed based on the context of the organization and thought leadership in the industry. The key takeaway is that context matters when developing the artifacts executives use to manage their strategic and tactical portfolio work.

    The post Telling Executive Stories appeared first on LeadingAgile.
