
Neo4j: Cypher – Step by step to creating a linked list of adjacent nodes using UNWIND

Mark Needham - Fri, 06/05/2015 - 00:17

In late 2013 I wrote a post showing how to create a linked list connecting different football seasons together using Neo4j’s Cypher query language, a post I’ve frequently copy & pasted from!

Now, 18 months later and using Neo4j 2.2 rather than 2.0, we can solve this problem in what I believe is a more intuitive way using the UNWIND clause. Credit for the idea goes to Michael; I'm just the messenger.

To recap, we had a collection of football seasons and we wanted to connect adjacent seasons to each other to allow easy querying between seasons. The following is the code we used:

CREATE (:Season {name: "2013/2014", timestamp: 1375315200})
CREATE (:Season {name: "2012/2013", timestamp: 1343779200})
CREATE (:Season {name: "2011/2012", timestamp: 1312156800})
CREATE (:Season {name: "2010/2011", timestamp: 1280620800})
CREATE (:Season {name: "2009/2010", timestamp: 1249084800})
MATCH (s:Season)
WITH s
ORDER BY s.timestamp
WITH COLLECT(s) AS seasons
 
FOREACH(i in RANGE(0, length(seasons)-2) | 
    FOREACH(si in [seasons[i]] | 
        FOREACH(si2 in [seasons[i+1]] | 
            MERGE (si)-[:NEXT]->(si2))))

Our goal is to replace those 3 FOREACH loops with something a bit easier to understand. To start with, let’s run the first part of the query to get some intuition of what we’re trying to do:

MATCH (s:Season)
WITH s
ORDER BY s.timestamp
RETURN COLLECT(s) AS seasons
 
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | seasons                                                                                                                                                                                                                                                     |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | [Node[1973]{timestamp:1249084800,name:"2009/2010"},Node[1972]{timestamp:1280620800,name:"2010/2011"},Node[1971]{timestamp:1312156800,name:"2011/2012"},Node[1970]{timestamp:1343779200,name:"2012/2013"},Node[1969]{timestamp:1375315200,name:"2013/2014"}] |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

So at this point we’ve got all the seasons in an array going from 2009/2010 up to 2013/2014. We want to create a ‘NEXT’ relationship between 2009/2010 -> 2010/2011, 2010/2011 -> 2011/2012 and so on.

To achieve this we need to get the adjacent seasons split into two columns, like so:

2009/2010	2010/2011
2010/2011	2011/2012
2011/2012	2012/2013
2012/2013	2013/2014

If we can get the data into that format then we can apply a MERGE between the two fields to create the ‘NEXT’ relationship. So how do we do that?

If we were in Python we'd reach for the zip function, which we could apply like this:

>>> seasons = ["2009/2010", "2010/2011", "2011/2012", "2012/2013", "2013/2014"]
 
>>> zip(seasons, seasons[1:])
[('2009/2010', '2010/2011'), ('2010/2011', '2011/2012'), ('2011/2012', '2012/2013'), ('2012/2013', '2013/2014')]
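
The same pairing can also be written without zip, by pairing each index with the next one. This index-based form is the idea we can translate into Cypher:

```python
seasons = ["2009/2010", "2010/2011", "2011/2012", "2012/2013", "2013/2014"]

# Pair each element with its neighbour, stopping one element early
# since the last season has nothing to connect to.
pairs = [(seasons[i], seasons[i + 1]) for i in range(len(seasons) - 1)]

print(pairs)
# [('2009/2010', '2010/2011'), ('2010/2011', '2011/2012'),
#  ('2011/2012', '2012/2013'), ('2012/2013', '2013/2014')]
```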

Unfortunately we don't have an equivalent function in Cypher, but we can achieve the same outcome by creating two columns of adjacent integer values. The RANGE function and the UNWIND clause are our friends here:

RETURN RANGE(0,4)
 
==> +-------------+
==> | RANGE(0,4)  |
==> +-------------+
==> | [0,1,2,3,4] |
==> +-------------+
UNWIND RANGE(0,4) as idx 
RETURN idx, idx +1;
 
==> +--------------+
==> | idx | idx +1 |
==> +--------------+
==> | 0   | 1      |
==> | 1   | 2      |
==> | 2   | 3      |
==> | 3   | 4      |
==> | 4   | 5      |
==> +--------------+
==> 5 rows

Now all we need to do is plug this code into our original query, where 'idx' and 'idx + 1' represent indexes into the array of seasons. We use a range which stops one element early, since there isn't anywhere to connect our last season to:

MATCH (s:Season)
WITH s
ORDER BY s.timestamp
WITH COLLECT(s) AS seasons
UNWIND RANGE(0,LENGTH(seasons) - 2) as idx 
RETURN seasons[idx], seasons[idx+1]
 
==> +-------------------------------------------------------------------------------------------------------+
==> | seasons[idx]                                      | seasons[idx+1]                                    |
==> +-------------------------------------------------------------------------------------------------------+
==> | Node[1973]{timestamp:1249084800,name:"2009/2010"} | Node[1972]{timestamp:1280620800,name:"2010/2011"} |
==> | Node[1972]{timestamp:1280620800,name:"2010/2011"} | Node[1971]{timestamp:1312156800,name:"2011/2012"} |
==> | Node[1971]{timestamp:1312156800,name:"2011/2012"} | Node[1970]{timestamp:1343779200,name:"2012/2013"} |
==> | Node[1970]{timestamp:1343779200,name:"2012/2013"} | Node[1969]{timestamp:1375315200,name:"2013/2014"} |
==> +-------------------------------------------------------------------------------------------------------+
==> 4 rows

Now we’ve got all the adjacent seasons lined up we complete the query with a call to MERGE:

MATCH (s:Season)
WITH s
ORDER BY s.timestamp
WITH COLLECT(s) AS seasons
UNWIND RANGE(0,LENGTH(seasons) - 2) as idx 
WITH seasons[idx] AS s1, seasons[idx+1] AS s2
MERGE (s1)-[:NEXT]->(s2)
 
==> +-------------------+
==> | No data returned. |
==> +-------------------+
==> Relationships created: 4

And we’re done. Hopefully I can remember this approach more than I did the initial one!

Categories: Blogs

The SAFe Leadership Retreat and the Power of Self-Organization

Agile Product Owner - Thu, 06/04/2015 - 18:52

Eight months ago, Martin Burns posted a message in the LinkedIn SPC Community of Practice titled “SAFe Leadership Retreat?”  The goals he stated were simple: (1) share ideas and experiences from a broad range of SAFe practitioners; (2) provide input into the direction of SAFe; and (3) build our community.

Inspired by this vision, Martin made this happen and approximately 40 people from 20 different companies across 4 continents made the journey to Crieff, Scotland the week of May 25th.  Did this self-organizing community accomplish its goals?  The answer is a resounding yes.

GOAL #1: SHARE IDEAS AND EXPERIENCES
Martin Burns, Ken Clyne, and Carl Vikman facilitated the Open Space and Lean Coffee sessions. Topics ranged from culture to Kanban to coordinating large value streams.  Though multiple sessions ran concurrently, we came back together for breakout summaries at the end of the retreat.  In addition, Stuart Young, a business visualization expert well-known in the Lean-Agile community, documented the sessions in real time.

SAFe Leadership Retreat Mural

GOAL #2: PROVIDE INPUT INTO THE DIRECTION OF THE FRAMEWORK
The Leadership Retreat began with a plenary session by Scaled Agile to articulate the company vision (Chris James), the state of the Framework (Dean Leffingwell), the future of professional development (Drew Jemilo), the Partner and SPCT programs (Jennifer Fawcett), and the development of the SAFe community (Carol McEwan).

The plenary session triggered many Open Space sessions which are now shaping the direction of SAFe 4.0, our role-based curriculum, and a deeper understanding of implementation success patterns.  We thank everyone for their contributions.

GOAL #3: BUILD OUR COMMUNITY
After Phil Gardiner took the group photo, we realized that our time together was coming to a close.  Before the closing dinner, Carl ran a retrospective which ended with the open-ended question, “How do we continue building the SAFe community after we leave?”

The next morning, on the bus ride to the Edinburgh airport, conversations continued.  We recognized that this self-organizing, face-to-face retreat built the “ba” which will enable us to fuel SAFe communities around the world.

SAFe Leadership Retreat Participants

Many thanks to Martin Burns and his wife, Lucy, for their hard work in bringing us together!

Cheers,
Drew, Dean, Chris, Jennifer, and Carol


The Five Measures Canvas for Agile Transformation

Leading Agile - Mike Cottmeyer - Wed, 06/03/2015 - 21:17
Overview

The Five Measures is a tool from Sun Tzu's The Art of War. It is the first tool he introduces in the book, and it lays the foundation for everything that follows in one of the most well-known and widely used sources of guidance on coping with situations that involve challenge or some form of negotiation. Personally, I began studying and using the tools in The Art of War over 20 years ago. While the premise of the book is rooted in the idea of war, what it has to offer can be (and has been) applied far beyond that area. A search on Amazon will turn up thousands of translations, adaptations and detailed examinations of the text, covering everything from raising children, to playing golf, to coping with the challenges many of us face day-to-day in an office setting.

In this post, my goal is to provide a brief explanation of what the Five Measures are, why they are so valuable, and how they can be applied in Agile adoption. In working with teams and individuals who are adopting Agile, the Five Measures is something I use in preparation for working with both the Personal Agility Canvas and the PMO Agility Canvas (which will be covered in upcoming posts).

The Five Measures

The Five Measures forms the foundation on which every other concept and tool in the book is based. In any conflict, situational awareness (or context) is often the factor that sets things up for success or failure. So whether you are actually engaging in battle; entering into a negotiation over which projects to pursue and which to drop; or simply having a conversation with management about how to cope with one of the many challenges we face during Agile Transformation, developing a mindful understanding of your surroundings makes all the difference.

The Five Measures are:

  • The Tao
  • The Climate
  • The Ground
  • The Leadership
  • The Discipline

Because I have found that The Tao can often be tough to understand without context, I will start with explaining the other four elements first.

The Ground

In battle, the terrain on which you will engage your opponent can have a major impact on your strategy. Is the ground rocky or muddy? Will we gain an advantage because we are at a higher elevation than our opponent? How much visibility do we have into the battlefield? The Ground is where we enter into the negotiation, and the more we know about it, the better our chances. In an office setting, this relates to the organizational structure. In every organization there is a defined breakdown of power and responsibility. Person B reports to Person A, and because of his/her role and title, there are specific things Person B is expected to be responsible for and certain things the organization expects them to take action on. If you were to collect an org chart and a RACI matrix from your department or your HR department, these should provide you with the detail you need to understand how the ground on which you will engage shapes the scope of what you should be able to get done.

The Climate

If you were going into battle, understanding the terrain is not enough. You also have to factor in what the weather will be like when you engage. Will we be engaging the opponent in a hot environment or a cold one? Will it be raining or dry? Will we engage in the daytime or at night? In a work environment, this relates to the political environment. In any company, a political structure is overlaid on top of the org chart. For example, I may know that if I go directly to Senior Executive A, I am not likely to get the response I desire because of where I sit in the organizational structure. However, because Manager B is a rising star within the company, seen as someone helping to shape its future, I may expect that by winning over Manager B and gaining their support, they will help strengthen my case when I do go to Senior Executive A.

I have worked in a number of office settings where the easiest way to get access to a specific person was through someone else – maybe an Admin that I make friends with so that they will find room for me in the schedule of the person I want to interact with. The Climate is where social engineering tools and techniques become very important. We have to understand who has sway, who does not, and how to leverage that information to our advantage.

In the context of Agile Transformation at a company level, understanding who holds what rank in the organization and who has the juice to be able to help strengthen our efforts towards adoption is a vital aspect of success.

The Leadership

Developing awareness of “how” your organization is led is a crucial factor. For example, what is the style of leadership employed by those in charge, or by the organization in general? Steve Jobs is a great example of a very charismatic but (arguably) abusive leader. He had a vision that pulled people in. He pushed them very hard and together they did the impossible. Jobs could ask this of them because of their devotion to him and his ideals.

I have had the great fortune to work for some leaders who led through trust. Often, when I asked for direction, I was just told to do whatever made the most sense. For the right kind of people, this can be a dizzying, but powerful motivator. I’ve also worked with leaders who were extremely command and control focused. While it would be easy to characterize some styles as good and some as bad, understanding The Five Measures is not about tagging things with a value judgment. It is simply about understanding. Studying the way your leadership inspires or motivates people to work (and change) adds another vital dimension to the context.

The Discipline

This is a breaking point for many organizations that want to adopt Agile. Whether it is at the Executive level or the Team level, the question to ask is, to what extent do we have the ability to do what we say we will do? The road to Agile is littered with bodies that fell victim to “Executive Read HBR article on Airplane” syndrome, where someone in a position of power read about the promise of Agile and directed portions of the company to “go be Agile” with no desire or ability to actually change how they, and the company, interact with the work.

In the context of war, this is simple… are the troops so disciplined that when things go sideways in battle, they will hold the line and let their training guide them, or will they go running helter-skelter in a state of panic? When our organization agrees to adopt a set of Agile practices like Scrum, do we have the discipline to actually keep our Daily Scrum meetings to 15 minutes? Does the business side have the discipline not to try to change the commitment mid-sprint? Does the Scrum team actually hold all the ceremonies, and do they protect the way they work?

The Tao

The Tao can loosely be translated as “The Way”. This is at the center of The Five Measures. What is the culture of the organization? Is it one that is supportive of difficult change? Does the business make its decisions by burning tomorrow to save today? What is the value system in place? Tying this back to The Leadership and The Discipline, does the company (or client) culture demonstrate that it truly supports the idea of experimentation and learning from failure? Does leadership demonstrate the discipline to let teams work iteratively to find the best way of delivering the most value, despite the fact that learning how to do so will include a certain degree of failure along the way? Or does the culture support change only so far as it has no negative cost impact? In short, what makes this company tick? It is somewhat more difficult to pin down than the other four measures, but each of them feeds into this one.

Practical Application

When I am working with clients who are pursuing Agile adoption, I use the Five Measures to develop context and situational awareness of what is going on around me and of the environment in which I am going to be trying to promote change. In many ways it is about developing mindfulness, so that as the work of Agile Transformation begins we can have an open, transparent conversation about what we are facing. In some ways it is like a pre-emptive Retrospective, helping us to observe and understand before we begin trying to promote change.

The Five Measures Canvas offers a visual representation of the tool and can be used individually, or in a group.

If you are interested in The Art of War and would like to learn more, here are the three translations I recommend for getting started:


The Art of War – Translated by Thomas Cleary (Shambhala Pocket Edition)

This lightweight introduction offers a simple explanation of the ideas presented in the text, and is a great starting point if you are new to Sun Tzu.

The Art of Strategy – A New Translation of Sun Tzu’s Classic The Art of War by R.L. Wing

R.L. Wing begins each chapter of the book with an overview of the subject and then explains it on multiple levels that I have found invaluable in my understanding of how the system works. For each chapter Wing explores: Conflict In The Self, Conflict In The Environment, Conflict With Another, and Conflict Among Leaders.

Sun Tzu’s The Art of War Plus Its Amazing Secrets by Gary Gagliardi

Once you become familiar with the ideas in The Art of War, Gagliardi’s breakdown of how they work and his diagrams explaining the approach provide great insight into how to employ the tools in everyday life.

The post The Five Measures Canvas for Agile Transformation appeared first on LeadingAgile.


End-to-End Hypermedia: Building the Server

Jimmy Bogard - Wed, 06/03/2015 - 19:26

In the last post, we looked at choosing a hypermedia type. This isn’t the easiest of things to do, and it can take quite a bit of effort to settle on a choice. On top of that, you very often need validation from a client consumer that your choice was correct. Unfortunately, even tools like Apiary.io aren’t sufficient: you really need to build your API to validate your API.

This isn’t completely unexpected. If I’m building any sort of software, the only way of proving it is what I need is to actually use it.

In our case, we chose collection+json as our media type since we were mainly showing lists of things. It’s a fairly straightforward format, with out-of-the-box support for ASP.NET Web API. There are a few NuGet packages that help with collection+json support:

  • CollectionJson.Server – includes a base controller class
  • CollectionJson.Client – includes a formatter
  • CollectionJson – just the object model with no dependencies

We first explored using these out of the box, starting with the controller base class from the Server package:

public class FriendsController : CollectionJsonController<Friend>
{
    private IFriendRepository repo;

    public FriendsController(IFriendRepository repo, ICollectionJsonDocumentWriter<Friend> writer, ICollectionJsonDocumentReader<Friend> reader)
        :base(writer, reader)
    {
        this.repo = repo;
    }

    protected override int Create(IWriteDocument writeDocument, HttpResponseMessage response)
    {
        var friend = Reader.Read(writeDocument);
        return repo.Add(friend);
    }

    protected override IReadDocument Read(HttpResponseMessage response)
    {
        var readDoc = Writer.Write(repo.GetAll());
        return readDoc;
    }

    protected override IReadDocument Read(int id, HttpResponseMessage response)
    {
        return Writer.Write(repo.Get(id));
    }

    //custom search method   
    public HttpResponseMessage Get(string name)
    {
        var friends = repo.GetAll().Where(f => f.FullName.IndexOf(name, StringComparison.OrdinalIgnoreCase) > -1);
        var readDocument = Writer.Write(friends);
        return readDocument.ToHttpResponseMessage();
    }

    protected override IReadDocument Update(int id, IWriteDocument writeDocument, HttpResponseMessage response)
    {
        var friend = Reader.Read(writeDocument);
        friend.Id = id;
        repo.Update(friend);
        return Writer.Write(friend);
    }

    protected override void Delete(int id, HttpResponseMessage response)
    {
        repo.Remove(id);
    }
}

If we were implementing a very pure version of collection+json, this might be a good route. However, not all of the HTTP methods were supported for our operations, so we wound up not using this one.

Next, we looked at the Client package, which includes a formatter and some extensions around HttpResponseMessages and the like. That worked best for us – we didn’t really need to extend the model to support extra metadata. I had thought we did, but looking back, we went through several iterations and finally landed on the stock collection+json model.

When looking at Web API extensions for hypermedia, I tend to see three sets of extensions:

  • Object model that represents the media type and is easily serializable
  • Helpers for inside your controller
  • Base controller classes

There isn’t much code inside these packages, so you can always just grab code from GitHub for your media type or roll your own object models.

Building your models

The CollectionJson.Client package deals with two-way model building – writing documents and reading documents. Writing a document involves taking a DTO and building a collection+json document. Reading a document involves taking a collection+json document and building a model.
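
To make the target shape concrete, here is a minimal collection+json document sketched as a Python dict. The field names follow the collection+json media type; the instructor data and URLs are invented for illustration:

```python
import json

# A minimal collection+json document as a plain dict.
# Field names follow the collection+json media type;
# the instructor data itself is made up for illustration.
document = {
    "collection": {
        "version": "1.0",
        "href": "http://example.org/api/instructors",
        "items": [
            {
                "href": "http://example.org/api/instructors/1",
                "data": [
                    {"name": "last-name", "value": "Abercrombie", "prompt": "Last Name"},
                    {"name": "hire-date", "value": "1995-03-11", "prompt": "Hire Date"},
                ],
                "links": [
                    {"rel": "courses",
                     "href": "http://example.org/api/instructors/1/courses",
                     "prompt": "Courses"},
                ],
            }
        ],
    }
}

print(json.dumps(document, indent=2))
```

Writing means producing this structure from a DTO; reading means walking it back into a DTO.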

In my plain ol’ JSON APIs, building a web API endpoint looks almost exactly like an MVC one:

public class FooController : ApiController
{
    private readonly IMediator _mediator;

    public FooController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public IEnumerable<FooModel> Get()
    {
        return _mediator.Send(new FooQuery());
    }

    public IEnumerable<FooModel> Get(string id)
    {
        return _mediator.Send(new FooQuery{Id = id});
    }
}

When building out documents, I need to take those DTOs and build out my representations (my collection+json documents). The CollectionJson.Client package defines two interfaces to help make this possible:

public interface ICollectionJsonDocumentReader<TItem>
{
    TItem Read(IWriteDocument document);
}
public interface ICollectionJsonDocumentWriter<TItem>
{
    IReadDocument Write(IEnumerable<TItem> data);
}

To make my life a bit easier, I created a mediator just for collection+json readers/writers, as I like to have a single point in which to request read/write documents:

public interface ICollectionJsonDocumentMediator
{
    IReadDocument Write<TItem>(TItem item);
    IReadDocument Write<TItem>(IEnumerable<TItem> item);
    TItem Read<TItem>(IWriteDocument document);
}

Once again we see that the Mediator pattern is great for turning types with generic parameters into methods with generic parameters. Our mediator implementation is pretty straightforward:

public class CollectionJsonDocumentMediator : ICollectionJsonDocumentMediator {
    private readonly IContainer _container;

    public CollectionJsonDocumentMediator(IContainer container) {
        _container = container;
    }

    public IReadDocument Write<TItem>(IEnumerable<TItem> item) {
        var writer = _container.GetInstance<ICollectionJsonDocumentWriter<TItem>>();

        return writer.Write(item);
    }
    
    public IReadDocument Write<TItem>(TItem item) {
        var writer = _container.GetInstance<ICollectionJsonDocumentWriter<TItem>>();

        return writer.Write(new[] { item });
    }
    
    public TItem Read<TItem>(IWriteDocument writeDocument) {
        var reader = _container.GetInstance<ICollectionJsonDocumentReader<TItem>>();

        return reader.Read(writeDocument);
    }
}

In our controllers, building out the responses is fairly easy now; we just add a step before our MediatR mediator:

public class InstructorController : ApiController {
    private readonly IMediator _mediator;
    private readonly ICollectionJsonDocumentMediator _documentMediator;
    
    public InstructorController(
        IMediator mediator,
        ICollectionJsonDocumentMediator documentMediator) {
        _mediator = mediator;
        _documentMediator = documentMediator;
    }
    
    public async Task<HttpResponseMessage> Get([FromUri] Get.Query query) {
        var model = await _mediator.SendAsync(query);
    
        var document = _documentMediator.Write(model);
    
        return document.ToHttpResponseMessage();
    }
}

The MediatR part is the same as we would normally have. What we’ve added is our collection+json step: taking the DTO from the MediatR step and routing it to our collection+json document mediator. The document writer is then pretty straightforward too:

public class Get {
    // Query, model and handler here
    
    public class DocumentWriter
        : ICollectionJsonDocumentWriter<Model> {
        
        private readonly HttpRequestContext _context;
        
        public DocumentWriter(HttpRequestContext context) {
            _context = context;
        }
        
        public IReadDocument Write(IEnumerable<Model> data) {
            var document = new ReadDocument {
                Collection = new Collection {
                    Href = _context.Url.Link<InstructorController>(c => c.Get()),
                    Version = "1.0"
                }
            };
            
            foreach (var model in data) {
                var item = new Item {
                    Href = _context.Url.Link<InstructorController>(c => c.Get(new Query {Id = model.Id}))
                };
                item.Data.Add(new Data { Name = "last-name", Value = model.LastName, Prompt = "Last Name"});
                item.Data.Add(new Data { Name = "first-name", Value = model.FirstMidName, Prompt = "First Name"});
                item.Data.Add(new Data { Name = "hire-date", Value = model.HireDate.ToString("yyyy-MM-dd"), Prompt = "Hire Date"});
                item.Data.Add(new Data { Name = "location", Value = model.OfficeAssignmentLocation, Prompt = "Location"});
                
                item.Links.Add(new Link {
                    Href = _context.Url.Link<InstructorController>(c => c.GetCourses(new Courses.Query {Id = model.Id})),
                    Prompt = "Courses",
                    Rel = "courses"
                });
                document.Collection.Items.Add(item);
            }
            
            return document;
        }
    }
}

If you’ve followed my conventional HTML series, you might notice that the kind of information we’re putting into our collection+json document is pretty similar to the metadata we read when building out intelligent views. This helps close a gap I’ve found when building SPAs – I was much less productive building these pages than regular server-side MVC apps since I lost all that nice metadata that only lived on the server. Pre-compiling views can work, but additional metadata in hypermedia-rich media types works too.

For the write side, I could build something similar, including templates and the like. In fact, you can borrow our ideas from conventional HTML to build out helpers for our collection+json models. Since we have models built around read/write items:

public class Model
{
    public int? ID { get; set; }

    public string LastName { get; set; }
    [Display(Name = "First Name")]
    public string FirstMidName { get; set; }

    [DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}")]
    public DateTime? HireDate { get; set; }

    [Display(Name = "Location")]
    public string OfficeAssignmentLocation { get; set; }
}

We can intelligently build out templates and data items:

public class Get {
    // Query, model and handler here
    
    public class DocumentWriter
        : ICollectionJsonDocumentWriter<Model> {
        
        private readonly HttpRequestContext _context;
        
        public DocumentWriter(HttpRequestContext context) {
            _context = context;
        }
        
        public IReadDocument Write(IEnumerable<Model> data) {
            var document = new ReadDocument {
                Collection = new Collection {
                    Href = _context.Url.Link<InstructorController>(c => c.Get()),
                    Version = "1.0"
                }
            };
            
            foreach (var model in data) {
                var item = new Item {
                    Href = _context.Url.Link<InstructorController>(c => c.Get(new Query {Id = model.Id}))
                };
                item.Data.Add(model.ToData(m => m.LastName));
                item.Data.Add(model.ToData(m => m.FirstMidName));
                item.Data.Add(model.ToData(m => m.HireDate));
                item.Data.Add(model.ToData(m => m.OfficeAssignmentLocation));

                item.Links.Add(_context.Url.CollectionJsonLink<InstructorController>(c => c.GetCourses(new Courses.Query {Id = model.Id})));
                
                document.Collection.Items.Add(item);
            }
            document.Collection.Template.Data.BuildFrom<Post.Model>();
            
            return document;
        }
    }
}

Our document writers start to look somewhat similar to our Razor views. On the document reader side, it’s a similar exercise of pulling information out of the document, populating a DTO and sending down the MediatR pipeline. Just the reverse of our GET actions.
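
As a rough sketch of that reverse mapping (in Python rather than C#, with invented field names), reading amounts to flattening each item's data array back into a flat DTO:

```python
def read_item(item):
    """Flatten a collection+json item's data array into a plain dict (DTO)."""
    return {entry["name"]: entry["value"] for entry in item["data"]}

# A hypothetical item, as it might arrive in a document
item = {
    "href": "http://example.org/api/instructors/1",
    "data": [
        {"name": "last-name", "value": "Abercrombie", "prompt": "Last Name"},
        {"name": "hire-date", "value": "1995-03-11", "prompt": "Hire Date"},
    ],
}

dto = read_item(item)
print(dto)  # {'last-name': 'Abercrombie', 'hire-date': '1995-03-11'}
```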

Altogether not too bad with the available extensions, but building the server API is just half the battle. In the next post, we’ll look at building a consuming client.



Request for Feedback on Agile Advice Book


My Agile Advice book has been out for a little more than 4 months.  I’m extremely happy that over 25 people have purchased it so far without me doing any active promotion.  If any of my blog readers have purchased it, I am looking for feedback – please leave comments below.  (And, of course, if you haven’t purchased the book, it’s only $2 on lulu.com and the iBookstore.)

I am preparing the next release of the book: version 1.1.  It will include many minor changes and some new content, but I have only received a bit of feedback so far and I would greatly appreciate more.  Again, the best forum for feedback is here – feel free to be critical – I really want to make this better!


The post Request for Feedback on Agile Advice Book appeared first on Agile Advice.


Roles, not People

thekua.com@work - Wed, 06/03/2015 - 10:29

Naming functions and methods is one of the hardest tasks developers face. A good name is hard to find, but with enough thought, is useful for showing intent.

Likewise, the name for a given role is useful to help establish what that role is accountable for, and can help speed up communication when people have a common understanding of that role.

All models are wrong, but some are useful – George E.P. Box

Unfortunately there are two inherent problems with role titles:

  • People do not understand the role
  • Roles are not the same as people

Issue 1: People do not understand the role

One point of confusion is assumptions about what the role does or does not do. For example, a Project Manager might assume that the QA role will be responsible for a Testing Strategy. In another situation, a different Project Manager might assume the Tech Lead will be responsible for a Testing Strategy. In this case, different expectations could be a source of conflict about which role is responsible for the Testing Strategy.

Another example might be where the Tech Lead assumes the QA role is responsible for the Testing Strategy, and the QA role assumes the Tech Lead is responsible – resulting in no one really thinking about a Testing Strategy.

A great mechanism for forcing a way forward is to run a “Roles & Responsibilities” session. I find an effective way to run one is:

  • As an entire team, brainstorm all the important activities that must be completed by the group. Write each activity on its own sticky note.
  • Brainstorm some names of roles and put them at the top of a whiteboard/flipchart next to each other. You may want to add a generic “Everyone” or “Team Member” role as well.
  • Ask everyone to place each activity under the roles they think should be responsible for the activity.
  • Walk the board, reviewing each role and its activities one at a time, and invite discussion and disagreement about why the role should or should not be responsible for each particular activity.

This is a very useful exercise for defining responsibilities and surfacing confusion around who owns particular activities.

Issue 2: Roles are not the same as people

Another common failure mode of roles is where people assume that a role is the same as the person. On my business card, I have the title: Generalising Specialist because it’s true – although I consider myself a developer, architect or Principal Consultant, I am also much more than that.

Generalising Specialist

People come with a whole bag of skills and experiences that sometimes fit into a particular role. Understanding where a person fits a role – and where the gaps are – is just as important as understanding their strengths. A person may be capable of playing several roles at once, or a role can be split among a group of people with the right set of skills and experiences.

Concluding thoughts

Remember that a role is a name we give to a collection of responsibilities, and it doesn't necessarily map to a single person. A role may be split among people (with the responsibilities distributed), but it is essential that everyone has the same understanding of who holds those responsibilities.


R: ggplot geom_density – Error in exists(name, envir = env, mode = mode) : argument “env” is missing, with no default

Mark Needham - Wed, 06/03/2015 - 07:52

Continuing on from yesterday's blog post where I worked out how to clean up the Think Bayes Price is Right data set, the next task was to plot a distribution of the prices of showcase items.

To recap, this is what the data frame we’re working with looks like:

library(dplyr)
 
df2011 = read.csv("~/projects/rLearning/showcases.2011.csv", na.strings = c("", "NA"))
df2011 = df2011 %>% na.omit()
 
> df2011 %>% head()
              X Sep..19 Sep..20 Sep..21 Sep..22 Sep..23 Sep..26 Sep..27 Sep..28 Sep..29 Sep..30 Oct..3
3    Showcase 1   50969   21901   32815   44432   24273   30554   20963   28941   25851   28800  37703
4    Showcase 2   45429   34061   53186   31428   22320   24337   41373   45437   41125   36319  38752
6         Bid 1   42000   14000   32000   27000   18750   27222   25000   35000   22500   21300  21567
7         Bid 2   34000   59900   45000   38000   23000   18525   32000   45000   32000   27500  23800
9  Difference 1    8969    7901     815   17432    5523    3332   -4037   -6059    3351    7500  16136
10 Difference 2   11429  -25839    8186   -6572    -680    5812    9373     437    9125    8819  14952
...

So our goal is to plot the density of the 'Showcase 1' items. Unfortunately those aren't currently stored in a way that makes this easy for us. We need to flip the data frame so that we have a row for each date/price type/price:

PriceType  Date     Price
Showcase 1 Sep..19  50969
Showcase 2 Sep..19  45429
...
Showcase 1 Sep..20  21901
Showcase 2 Sep..20  34061

The reshape library’s melt function is our friend here:

library(reshape)
meltedDf = melt(df2011, id=c("X"))
 
> meltedDf %>% sample_n(10)
                X variable value
643    Showcase 1  Feb..24 27883
224    Showcase 2  Nov..10 34089
1062 Difference 2   Jun..4  9962
770    Showcase 2  Mar..28 39620
150  Difference 2  Oct..24  9137
431  Difference 1   Jan..4  7516
345         Bid 1  Dec..12 21569
918  Difference 2    May.1 -2093
536    Showcase 2  Jan..31 30918
502         Bid 2  Jan..23 27000

Now we need to plug this into ggplot. We’ll start by just plotting all the prices for showcase 1:

> ggplot(aes(x = value), data = meltedDf %>% filter(X == "Showcase 1")) +
    geom_density()
 
Error in exists(name, envir = env, mode = mode) : 
  argument "env" is missing, with no default


This error usually means that you've passed an empty data set to ggplot, which isn't the case here. But if we extract the values column we can see the problem:

> meltedDf$value[1:10]
 [1] "50969" "45429" "42000" "34000" "8969"  "11429" "21901" "34061" "14000" "59900"

They are all strings! That makes it impossible to plot a density curve, which relies on the data being continuous. Let's convert them and try again. (Note that if read.csv had turned the column into a factor, we would need as.numeric(as.character(...)) here – as.numeric on a factor returns the internal level codes rather than the prices.)

meltedDf$value = as.numeric(meltedDf$value)

ggplot(aes(x = value), data = meltedDf %>% filter(X == "Showcase 1")) +
  geom_density()

[density plot of Showcase 1 prices]

If we want to show the curves for both showcases we can tweak our code slightly:

ggplot(meltedDf %>% filter(grepl("Showcase", X)), aes(x = value, colour = X)) + 
  geom_density() + 
  theme(legend.position="top")

[density plots for both showcases]

Et voila!


Visual Management Tools

George Dinwiddie’s blog - Wed, 06/03/2015 - 03:26

Sometimes we intentionally make our work more visible so that we can more easily see what's going on. We do this so that, as a group, we get a better picture of the whole of the group's effort. At its best, this is more than a dashboard that displays information; it's a tool used by the people doing the work, in the process of doing that work.

It’s important for such a display to be able to accurately describe the state of the work. If it leaves some state or aspect to be implicitly understood, it damages the tale that the tool can tell. One of the advantages of using a physical manifestation for such a tool is that the arrangement can be easily modified to handle special cases or situations that were not envisioned when the tool was first set up.

Sometimes people ask the tool to control people's behavior. Of course, it cannot do that. People will behave the way they behave. If you try to use the tool to control behavior, perhaps by making it impossible for the tool to display a situation you want to discourage, you certainly damage the tool's value. Rather than preventing the behavior, the inability to display it will merely make the behavior invisible. There must be a corollary of Goodhart's Law here.

Instead, a good Visual Management Tool will display whatever is the reality, both desired and undesired. This makes visible the more abstract reality. Once it is visible, we can notice it, see patterns in it, and have a conversation about it. It is the conversation and the resulting mutual decisions that can change behavior.


Two New Guidance Articles by Eric Willeke

Agile Product Owner - Tue, 06/02/2015 - 20:44

Back in February of this year, Dean Leffingwell blogged about two great articles written by Eric Willeke, Rally SAFe Program Consultant Trainer (SPCT) and Agile Transformation Coach.  Dean thought these articles should be in guidance and with Eric’s help they finally made it to the top of our backlog.

Article #1 – Role of PI Objectives

If you have ever been confused about the role and importance of PI Objectives and why we commit to objectives rather than Features, you will certainly appreciate and enjoy the guidance in this article.  Eric also describes three attributes of PI Objectives that make them so valuable in aligning the business and helping Agile Release Trains to achieve better business outcomes.

Article #2 – A Lean Perspective on SAFe Portfolio WIP Limits

Although SAFe provides high level guidance on WIP limits at the Portfolio level, there are important nuances that are key to understanding how the portfolio level in SAFe is Work in Process (WIP) limited.  Eric shares his perspective from his experience in the field and describes four ways in which SAFe provides implicit and explicit WIP limits around the portfolio.

Eric, thanks for your contributions to SAFe and we are pleased to formally recognize you as a SAFe Community Contributor!

Stay SAFe!
–Richard


Discovery Kanban

TV Agile - Tue, 06/02/2015 - 17:29
As the business landscape is becoming more and more volatile, many organisations need to undergo a fundamental change of management discipline. Recent failures of large and small organisations have shown that a single focus on optimised exploitation of existing business can turn into a monkey trap that leads to stagnation and vulnerability to market disruption. […]

It’s About Finishing, Not About Starting

Leading Agile - Mike Cottmeyer - Tue, 06/02/2015 - 16:59

When I’m teaching an agile bootcamp class and talking about work in process, I always make a point (usually multiple times) to tell the attendees that agile is about finishing work…not about starting work. I reinforce this by pointing out that you can have a glorious looking burndown chart for the duration of the sprint but completely fail in your mission to meet your commitments and finish stories. The team can be burning down hours beautifully on a daily basis, with the remaining task hours looking like they are tracking right along the ideal line, and then boom… It’s closing time for the sprint and no stories actually got completed.

Remember that notion of building working, tested software? Didn’t happen. The team started too many stories at once and ended up not being able to bring any of them across the finish line when the bell rang.

This notion of finishing work applies to sprint planning as well. If you short-change the time it takes to do good sprint planning, and the team meanders off to begin writing code and test plans too soon, there is a risk that the team is going to struggle to be successful.

Remember what we do in sprint planning. Consider velocity, and load the sprint backlog with high priority stories from the product backlog. Check. Determine capacity for the team to work on sprint tasks over the coming sprint. Check. Break stories into tasks and determine who is doing what. Check. Make sure the work is going to fit. Check. Commit to the work. Check.

But what can happen when you don’t take the time to thoughtfully break down tasks and estimate hours of effort? Consider the following burn down chart.

[burndown chart: committed vs. discovered task hours]

In this example, the team left sprint planning thinking that they were committing to 830 hours. But just two days into the sprint they discovered additional task hours and instead found themselves in the awkward position of actually needing almost 1200 hours of capacity to complete the committed stories. Guess what…they did not have 1200 hours of capacity to give, especially since they were now a full two days into the work.

Looking more closely at their burndown chart over the course of the two-week sprint, it took them almost 6 days of work to get back to the point where they had 830 hours of work left to do. Six days just to get back to what they thought was their starting point when they concluded their sprint planning meeting.

[burndown chart over the full two-week sprint]

And surprise, they didn’t finish the sprint successfully.

So, don’t short-change the value that good sprint planning affords. Yes, it takes time. Yes, it can seem tedious. Yes, the team is anxious to get started.

But good sprint planning pays dividends.

Remember that it is about finishing work, and not about starting work.

The post It’s About Finishing, Not About Starting appeared first on LeadingAgile.


The Agile Manifesto – Essay 3: Working Software over Comprehensive Documentation


How much documentation does it take to run a project with ten people working for six months?  For some organizations it takes way too much:

Photo of heavy documentation for software project

This binder (about 3 or 4 inches thick) is all the documentation associated with such a project.  Looking carefully at the project, creating the documentation took far more time than designing, writing and testing the software.  Yet the documentation does not produce any value; only the software produces value.  The Agile Manifesto asks us to focus on the outcome (working software) and to make tradeoffs that minimize the means (comprehensive documentation).

The Agile Manifesto asks us to challenge our assumptions about documentation.  In many work environments, documentation is an attempt to address some interesting and important needs:

  • Knowledge sharing among stakeholders and the people working on a project.
  • Knowledge sharing across time as people come in and out of a project.
  • Verification and traceability for contracts or other compliance needs.
  • Decision-making and analysis for business and technical problems.
  • Management oversight and control.
  • Various aspects of individual accountability.

Documentation is usually heavier (more comprehensive) the more the following circumstances exist in an organization:

  • Geographical distribution of people.
  • Lack of trust between people, departments or organizations.
  • Regulated work environments.
  • Depth of management hierarchy.
  • Number of people directly and indirectly involved.
  • Knowledge and skill sets highly segregated between people.
  • Culture of respect for written texts.

Working Software

What if the software itself could address the needs that often documentation is used to address?  Let’s look at them in turn:

  • Knowledge sharing among stakeholders and the people working on a project.
    If the software is functional at all stages, as supported by Agile methods such as Scrum and Extreme Programming, then the software becomes an effective representation of the knowledge of all the people who have participated in building it.
  • Knowledge sharing across time as people come in and out of a project.
    Software that is technically excellent is often easier to understand for people who are new to it.  For example, excellence in user experience and design means new users can get up to speed on software faster.  Use of good design patterns and automated testing allows new developers to understand existing software easily.
  • Verification and traceability for contracts or other compliance needs.
    Test-driven development (code) and specification by example (scripting and code) are forms of traceable, executable documentation that easily stay in-sync with the underlying software system.
  • Decision-making and analysis for business and technical problems.
    In particular, diagrams can help a great deal here.  However, electronic tools for creating such diagrams can be slow and awkward.  Consider the practice of Agile Modelling (basically using a whiteboard and taking photos) as a good alternative to precise technical diagramming if you are doing problem-solving.
  • Management oversight and control.
    Reports and metrics drive much of the traditional documentation in an organization.  Simplifying reports and metrics often leads to a clearer picture of what is going on, reduces the opportunities to “game” the system, and always results in lower levels of documentation.  As well, some reports and metrics can be generated 100% through automated means.  All that said, the fundamental premise in the Agile Manifesto is that management should base decisions on what is actually built – the working software – by looking at it and using it.
  • Various aspects of individual accountability.
    If you really need this, a good version control system can give you the information.  Sign-offs and other types of accountability documentation are typically just waste that doesn't actually help in process improvement.  Most people in high-compliance environments already have licenses and/or security clearances that provide this accountability.  If your software is working, however, then this isn't even a concern, as trust is built and bureaucracy can be reduced.

In my recent training programs as research for this article, I have surveyed over 100 people on one aspect of documentation – code documentation.  Every individual surveyed is either currently coding or has a coding background, and every single person had the same answer to a simple scenario question:

Imagine that you have just joined a new organization and you are about to start working as a software developer.  One of the existing team members comes up to you and introduces himself.  He has with him a piece of paper with a complicated-looking diagram and a full binder that looks to be holding about 250 pages.  He asks you, “You need to get up to speed quickly on our existing system – we're starting you coding tomorrow – would you prefer to go over the architecture diagram with me, or would you prefer to review the detailed specifications and design documents?” He indicates the one-page diagram and the binder respectively.  Which would you prefer?

(I’ve put up a Survey Monkey one-question survey: Code Documentation Preference to extend the reach of this question.  It should take you all of 60 seconds to do it.  I’ll post results when I write the next Agile Manifesto essay in a month or two.)

The fact that everyone answers the same way is interesting.  What is even more interesting to me is that if you think through this scenario, it is actually almost the worst-case scenario where you might want documentation for your developers.  That means that in “better” cases where documentation for developers may not be as urgent or necessary, then the approach of just going to talk with someone is a lot better.

Documentation and Maps

The problem with documentation is the same problem we have with maps: “the map is not the territory” (quote from the wisdom of my father, Garry Berteig).  We sometimes forget this simple idea.  When we look at, say, Google Maps, we always have in the back of our consciousness that the map is just a guide and it is not a guarantee.  We know that if we arrive at a place, we will see the richness of the real world, not the simplified lines and colours of a map.  We don’t consider maps as legally binding contracts (usually).  We use maps to orient ourselves… as we look around at our reality.  We can share directions using maps, but we don’t share purpose or problems with maps.  And finally, maps assume that physical reality is changing relatively slowly (even Google Maps).

Many times when we create documentation in organizations, however, we get confused about the map versus the territory.

Agility and Documentation

Of course, code is a funny thing: all code is documentation too.  The code is not the behaviour.  But in software, code (e.g. Java, ASM, Scheme, Prolog, Python, etc.) is as close as possible to the perfect map.  Software is (mostly) deterministic.  Software (mostly) doesn’t change itself.  Software (mostly) runs in a state absent from in-place human changes to that software.  Software (mostly) runs on a system (virtual or physical) that has stable characteristics.  The code we write is a map.  From this perspective, documentation becomes even less important if we have people that already understand the language(s)/platform(s) deeply.

This essay is a continuation of my series on the Agile Manifesto.  The previous two essays are “Value and Values” and “Individuals and Interactions over Processes and Tools”.

 


R: dplyr – removing empty rows

Mark Needham - Tue, 06/02/2015 - 08:49

I’m still working my way through the exercises in Think Bayes and in Chapter 6 needed to do some cleaning of the data in a CSV file containing information about the Price is Right.

I downloaded the file using wget:

wget http://www.greenteapress.com/thinkbayes/showcases.2011.csv

And then loaded it into R and explored the first few rows using dplyr:

library(dplyr)
df2011 = read.csv("~/projects/rLearning/showcases.2011.csv")
 
> df2011 %>% head(10)
 
           X Sep..19 Sep..20 Sep..21 Sep..22 Sep..23 Sep..26 Sep..27 Sep..28 Sep..29 Sep..30 Oct..3
1              5631K   5632K   5633K   5634K   5635K   5641K   5642K   5643K   5644K   5645K  5681K
2                                                                                                  
3 Showcase 1   50969   21901   32815   44432   24273   30554   20963   28941   25851   28800  37703
4 Showcase 2   45429   34061   53186   31428   22320   24337   41373   45437   41125   36319  38752
5                                                                                                  
...

As you can see, we have some empty rows which we want to get rid of to ease future processing. I couldn't find an easy way to filter those out directly, but what we can do instead is have the empty values converted to NA and then filter those rows out.

First we need to tell read.csv to treat empty cells as NA:

df2011 = read.csv("~/projects/rLearning/showcases.2011.csv", na.strings = c("", "NA"))

And now we can filter them out using na.omit:

df2011 = df2011 %>% na.omit()
 
> df2011  %>% head(5)
             X Sep..19 Sep..20 Sep..21 Sep..22 Sep..23 Sep..26 Sep..27 Sep..28 Sep..29 Sep..30 Oct..3
3   Showcase 1   50969   21901   32815   44432   24273   30554   20963   28941   25851   28800  37703
4   Showcase 2   45429   34061   53186   31428   22320   24337   41373   45437   41125   36319  38752
6        Bid 1   42000   14000   32000   27000   18750   27222   25000   35000   22500   21300  21567
7        Bid 2   34000   59900   45000   38000   23000   18525   32000   45000   32000   27500  23800
9 Difference 1    8969    7901     815   17432    5523    3332   -4037   -6059    3351    7500  16136
...

Much better!


Why Slack Needs Sound Effects

Not to be left behind in hipster group chats, we migrated from Campfire to Slack a few months ago. It's a slick, simple group chat system with a few elements of fun, including custom emojis. We added Carl and Paul from Llamas with Hats to our custom set. We've also livened things up with a custom Hubot. Still, there is one very sad missing feature: sound effects.

A bit of research shows the team at Slack has had sound effects on its backlog for a long time:

Slack custom emoji but for sound effects. Make it happen, guys. +@SlackHQ

— Nathan Peretic (@nathanperetic) April 5, 2014

@SlackHQ Can Slack do sounds like Campfire ("/play soundname")? Silly as they may be, it's an essential feature for team camaraderie.

— John Pray (@LouieGeetoo) February 27, 2014

More than a year later, there is still no sign of sound effects. We relied on audio notifications of broken builds; now things slip by for a while as the email or text notifications get lost amid the clutter. And sometimes you just need to play PushIt while deploying. Please, Slack, get this done.


Purposeful Agile

Agile Thinks and Things - Oana Juncu - Mon, 06/01/2015 - 20:01
Throw a purpose into the middle of a crowd and it will start to self-organise.

Many self-organised entities – villages or cities, for example – have an "embedded purpose", not voluntarily shared out loud or displayed in common places. Acting in a purpose-driven way is less obvious for organisations. The more a company grows, the more the original business purpose dissolves into the Process; the enterprise gets diluted into the how-to of controlling complexity. Forget about the purpose. What products we build as a company, and who we provide our services for, turns from core meaning into marketing fancies. That's business as usual. Until it isn't anymore. Until crisis comes. Until customers change behaviour from consumers to service users. Until adapting to reality brings more value than creating processes to change reality. At this point, Agile becomes the new big thing everyone wants to have, do or be. Agile Transformations are ordered and Agile History is written. Here are the three levels of Agile Experience Awareness I've seen unfold.

"How-To Agile" – 1st Level of Agile Awareness

Companies have kept employees occupied with "how-to" process compliance ever since modern industry was established. The bigger the company, the more compliance with Some Big Silver Bullet Method is required as the unique meaning of business life. Naturally enough, we do what we know how to do, which means we apply the cognitive biases we live by to all new initiatives. So the first chapter of the Agile experience in business was the "Agile Methodology implementation". Brought in by centralised process guru teams (actually these are very smart people, longing to really serve the enterprise by making use of their capacity for managing concepts), this Agile Implementation is:

  • focused exclusively on software development teams (because the Agile Manifesto talks about software)
  • regulated by new practices to follow, new roles to define in HR catalogues, new processes to be compliant with, new checklists (like the Definition of Done), and new matrices to build (which projects are eligible for Agile, and which can we lead in the good old comfy way?)
  • a source of new bureaucracy overhead!


"Oh my God, all this Agile stuff was meant to develop software leaner, faster, smarter," some Agile initiators and sponsors said. "What did we miss?" The answer given to this question made organisations jump to the next level of awareness: the Agile Attitude.

Agile Attitude – 2nd Level of Agile Awareness

At the former level, organisations were busy doing Agile. The first insight at the second level is: "Doing Agile is not the point; the point is to BE Agile." A lot of activity now focuses on identifying how to be Agile. The focus shifts, naturally, to "values and principles", yet organisations remain stuck in a how-to focus on values. At this stage valuable things do happen: teams place more value on collaboration, communication and support, and new ways of management have better chances of being accepted and experimented with. Managers eventually understand that teams' intellectual comfort is important. They may be ready to listen and adapt. Unfortunately, the "Agile Attitude" stage often turns into a huge source of misunderstanding and unhappiness. The "How-To Agile" stage had the benefit of raising curiosity and showing that things can improve based on good practices, but bringing the Agile Transformation to the next level does not show how to organise so that practices can emerge. If management is willing to give teams "more space to express themselves" – workshops, improvement ideas, an improved working space, improved quality time to produce better software – things get better, though not great. Agile turns into a huge "group therapy" session for expressing everyone's ongoing issues. No matter how many retrospectives, foosball tables and playful inspirational activities are conducted, dissatisfaction hangs in the air. In fact, at this stage, the more creative activities are in place, the more self-focused teams become: "We are not part of a sufficiently Agile organisation and we need to focus on ourselves to become one." Worse, other organisations and teams often consider Stage 2 Agile organisations "strange hippies". Agile teams have no real quest and are not really understood. It's not a good place to be, but there is an answer.
Therapists – we are at the therapy stage, aren't we? – say that having a purpose is the best antidepressant. Without a clear outcome as a token for collaboration, tolerance of error, massages, games and team-building sessions won't cut it. It is high time to reset Agile achievements around the very purpose of the organisation. As an organisation, what outcome do we dream of achieving?
Purposeful Agile – 3rd Level of Agile Awareness

At this stage, Agile is recognised as an enabler that facilitates answers to key questions like: "Who is this for?", "What is this about?", "What are we trying to achieve, and how visible is it?"
New management voices agree that motivation is never an external factor. Managers are designers of systems that enable people to motivate themselves. And there is no better motivation factor than contributing to something concrete that serves someone who will recognise the utility of that service. That's why the ultimate key to successful Agile is not observing the principles religiously – important as they are, applied "purposelessly" they may turn teams into strange aliens. The ultimate key to successful Agile is creating something aligned with a purpose by using the power of Agile principles and practices. Everything can be adapted if there is a "what for" together with a "who is it for". To become a 3rd-stage Agile organisation, grasp the answer to "what for" and use the Agile container to hold it. Stating the purpose is not an obvious exercise, because usually no branch of the organisation feels authorised to express it – some other part of the organisation is in charge of that. Eventually, senior executive committees might be put in charge of purpose definition, rules and regulations, and asking the question at all levels, all the time, may be perceived as a waste of time. Nevertheless, defining and staying aligned with a shared purpose is the best way to have highly engaged, responsible teams. Therefore the first activities of every Agile Happy Enterprise Transformation need to highlight the purpose: everybody is on-boarded to explore customer behaviour, the best service to offer, and what should be done next. To do so, practices from various horizons – Product Management, Lean Startup, Storytelling, Kanban, Design Thinking, User Experience, Innovation Management and the Palo Alto School – can be tried to achieve the best results, which company, teams and each individual will be proud of.
Conclusion of Nothing and Thanks for  Everything 

Some time ago I had a chat with Sophie Freiermuth. She had met Diana Larsen and Vasco Duarte at a conference somewhere in the Frontierless Land of Agile, and they were kind enough to mention me. "Oana has to write a book," they said. If that happens, it will tell the story of turning an "Inside Out" company into an "Outside In" company through Agile. I'm already dedicating it to them. Here goes my purpose :).

Related Posts:
3 Steps To Design Systems of Tomorrow: From Resilient To AntiFragile
Agility Adoption Rather Than Agile At Scale



I’ll Let My Code Fail, and Still Succeed With Message Queues

Derick Bailey - new ThoughtStream - Mon, 06/01/2015 - 19:51

In my upcoming book on RabbitMQ Layout (part of the RabbitMQ For Developers bundle, to be released on June 15th), I tell a story about a system that uses an analytics service. In this system, the analytics service isn’t reliable so the developers make a backup of all the events in a local database.


As the story unfolds, the developers make some significant changes to the way the application uses RabbitMQ. The results are dramatic, but the story focuses only on the RabbitMQ side of things and leaves a lot of questions open on the database side of the system, where events are stored as a backup.

I chose not to elaborate on that side of the system in the book, because it didn’t fit. It wasn’t the right context, it would have added too much length to the chapter, and frankly, I didn’t have a complete answer for the problem when I wrote the story. But I think I have an answer, now, and I want to share my thoughts on the potential solution.

Names Have Been Changed To Protect … Me

The story that I tell in the book is based in part on my own experiences in building SignalLeaf and using RabbitMQ to send event data over to Keen.io. While Keen has never been unstable like the analytics service in my story, code has been broken and other services have been down.


A few weeks ago, for example, my RabbitMQ instance became unresponsive. I use shared RabbitMQ hosting with CloudAMQP, and another application on the same physical servers ate all the available disk space. I created a new instance on a new cluster and everything started working again. But if I hadn’t had that backup database of event entries, as mentioned in the story in my book, I would have lost all the data during the downtime.

Then, this last weekend I updated some SignalLeaf code and mis-configured my exchange and queue bindings in RabbitMQ. After deploying, my code was sending messages to RabbitMQ with a routing key that was not handled. The messages were being lost, and nothing was being published to Keen.io, because of this. I fixed the configuration, but not before 24 hours of analytics data had been missed. Again, the database of event data means I haven’t completely lost all the data. 

Having the database backup for the events was incredibly important in these cases. But even with the database, I have a problem with the way my code is set up and the data is stored.

The Database Event Design Problem

In the book, I talk about having the database backup for the same kinds of reasons that I point out above. I also talk about having a nightly process that checks for events that haven’t been published.


In the real world, I have my code for SignalLeaf set up to create a single database record for each event. Along with the event data, I include a “status” field: processed or unprocessed. When a request is made for a podcast episode, a new database entry is made with a status of “unprocessed”. Then a message is sent across RabbitMQ. The code on the other side publishes the event to Keen.io and updates the database entry to be “processed”.

There are several problems with this design.

  • Too many network calls from the web server
  • Errors unrelated to the file download prevent the file download
  • Duplicate messages may occur

To start, there are too many network calls made from the web server – one for the database and one for RabbitMQ. A podcast listener just wants an episode, and I’m slowing them down with more network calls than I should be using. The extra latency makes it take longer to get the episode to them.

An error writing to the database or RabbitMQ from the web server means the process stops and the file is not sent to the user. Why should a database or RabbitMQ failure prevent the file from downloading? These things shouldn’t be tied together so tightly.

If the code behind RabbitMQ works, the event is published to keen. But if the database update fails after that, I won’t know that the keen.io call worked. Any code that looks for “unprocessed” entries could re-process an event, duplicating an entry in keen.io. Multiply this by a few thousand, and suddenly the stats in keen are very wrong.

So… how do we fix this?

In the book, the developers alleviate these problems by separating the database and analytics service calls. In the real world, I wasn’t sure how I could make this work. But, thanks to the interviews for the RabbitMQ For Developers bundle, I have a better understanding of the situation and solution.

Allow The Code To Fail

The first thing I’m going to do is remove the database call for the event entry from the web server code. Instead, I will send a single message through RabbitMQ and have multiple routes and queues: one for the database, one for publishing to keen.io. Each queue will have purpose-specific code that doesn’t tie the database to keen.io.

Eliminating the database call from the web server will speed up the response to the HTTP request for the file. It will also allow the database and keen.io calls to fail or succeed independently of each other. This is the “let it fail” mentality that Aria Stewart talks about in the “Design For Failure” interview, in the RMQ for Devs bundle.
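A broker isn’t needed to see how that routing works. RabbitMQ topic exchanges deliver one published message to every queue whose binding pattern matches the routing key (`*` matches exactly one word, `#` matches zero or more). Here is a minimal model of that matching – the queue and binding names are hypothetical, not SignalLeaf’s actual setup:

```python
# Hypothetical model of RabbitMQ topic-exchange routing (not SignalLeaf code).

def binding_matches(binding, routing_key):
    """RabbitMQ topic semantics: '*' matches exactly one word, '#' matches zero or more."""
    def match(pattern, words):
        if not pattern:
            return not words
        head, rest = pattern[0], pattern[1:]
        if head == "#":
            # '#' may swallow any number of words, including none
            return any(match(rest, words[i:]) for i in range(len(words) + 1))
        if words and head in ("*", words[0]):
            return match(rest, words[1:])
        return False
    return match(binding.split("."), routing_key.split("."))

def route(bindings, routing_key):
    """Return every queue whose binding matches the routing key."""
    return [queue for queue, binding in bindings if binding_matches(binding, routing_key)]

# One message, two queues: the database writer and the keen.io publisher.
bindings = [
    ("episode-events-db", "episode.#"),
    ("episode-events-keen", "episode.#"),
]
print(route(bindings, "episode.requested"))
# → ['episode-events-db', 'episode-events-keen']
```

Because both bindings match, a single publish reaches both consumers, and each can succeed or fail on its own.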


I can have the database call fail, and I don’t care. I can have the keen.io call fail, and I don’t care. I’ll ‘nack’ the messages and put them back on the queue. They’ll be picked up and processed later, when the network hiccup or bad code or whatever, is resolved.
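That ‘nack and requeue’ behavior can be sketched with a toy in-memory loop – a stand-in for RabbitMQ’s basic_ack/basic_nack, not real broker code:

```python
from collections import deque

# Toy in-memory model of the consume loop (not real RabbitMQ): a handler that
# raises gets its message put back on the queue, mimicking basic_nack(requeue=True).

def consume(queue, handler):
    delivered = []
    while queue:
        message = queue.popleft()
        try:
            handler(message)
            delivered.append(message)   # stand-in for basic_ack
        except Exception:
            queue.append(message)       # stand-in for basic_nack -> requeued
    return delivered

# Simulate keen.io being down for the first attempt at one message only.
down_for = {"ep-2"}

def publish_to_keen(message):
    if message in down_for:
        down_for.discard(message)       # the retry will succeed
        raise ConnectionError("keen.io unavailable")

queue = deque(["ep-1", "ep-2", "ep-3"])
print(consume(queue, publish_to_keen))
# → ['ep-1', 'ep-3', 'ep-2'] – the failed message is retried, nothing is lost
```

The point is the shape of the loop: a failure doesn’t stop anything, it just defers that one message.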

Now for the other end of code, after the event is published to keen.io.

Don’t Update Database Records

Once the code has published the event to keen, I’ll send a “success!” message through RabbitMQ. That message will get routed to the code that writes to the database again. But, there’s a potential problem here. If the record for the original “unprocessed” status has not been added to the database yet, the message to update that record to “processed” will fail.

If there’s no record, there isn’t anything to update. So… how do I fix that? Don’t try to update any records. Just write new records. This is something I picked up in my interview with Anders Ljusberg, where we talked about Event Sourcing in relation to CQRS. 


Event Sourcing is facilitated by an append-only data model that is a collection of state changes for a given entity or data object. There’s more to it than this, but that is the part I care about right now. Instead of trying to update a record that might not exist, I’m going to write a new record with the “processed” status. If the “unprocessed” record doesn’t exist yet, I don’t care. It will show up eventually, when that message is handled by my queue handling code.

With that done, I can have a process check for events that have an “unprocessed” record but no associated “processed” record. If these records are older than some time frame (a day, maybe?), I’ll re-process them, knowing they won’t be duplicates.
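Here is a rough sketch of that append-only design, with a hypothetical record shape (not the actual SignalLeaf schema): every state change is a new record, and the checking process flags “unprocessed” entries past the age cutoff that have no matching “processed” entry:

```python
from datetime import datetime, timedelta

# Hypothetical record shape (event_id, status, written_at) - not the actual
# SignalLeaf schema. State changes are appended, never updated in place.

def record_event(log, event_id, status, written_at):
    log.append((event_id, status, written_at))

def needs_reprocessing(log, now, max_age=timedelta(days=1)):
    """Event ids with an 'unprocessed' record older than max_age and no 'processed' record."""
    processed = {eid for eid, status, _ in log if status == "processed"}
    return sorted({
        eid for eid, status, written_at in log
        if status == "unprocessed" and eid not in processed and now - written_at > max_age
    })

log = []
t0 = datetime(2015, 6, 1, 12, 0)
record_event(log, "ep-1", "unprocessed", t0)
record_event(log, "ep-1", "processed", t0 + timedelta(minutes=1))  # keen.io call confirmed
record_event(log, "ep-2", "unprocessed", t0)                       # never confirmed
print(needs_reprocessing(log, now=t0 + timedelta(days=2)))
# → ['ep-2']
```

Note that writing the “processed” record before the “unprocessed” one arrives is harmless here: the set membership check doesn’t care about ordering, which is exactly why the append-only model tolerates out-of-order messages.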

Tighten Up That Workflow

The solution I just outlined seems pretty solid off-hand. It combines a few things that I learned from the interviews I’ve done for the RabbitMQ For Developers bundle, and puts them to good use. I expect there to be some edge cases and potential issues in implementing this, though. I haven’t yet worked through the real code to make this happen, so there are bound to be some bumps along the way.

One of the potential bumps I see already is having a larger workflow coded into the low-level details. This is something that I fight against in my code architecture already, and I can see it happening in this situation. Fortunately, I have yet another interview from which I can draw a solution. In my interview with Jimmy Bogard, we talk about “Sagas” and workflow.


The idea here is to code a higher-level workflow management object – one that knows when the final state of things has been reached. I think it’s possible to overwhelm my current system with this, since I deal with thousands of requests per minute… but it will be interesting to try this and see what happens.


Want To Learn More From These Interviews?

I’ve learned far more from the interviews I’ve done for this RabbitMQ For Developers bundle than I ever expected. There’s a wealth of knowledge in these discussions, just waiting to be unleashed upon the world, and I’m already taking advantage of it in my daily development!

Pick up your copy of The RabbitMQ For Developers bundle, and get all of these interviews, 12 screencasts, an eBook and so much more!


Categories: Blogs

Daily Scrum is a Waste of Time

Notes from a Tool User - Mark Levison - Mon, 06/01/2015 - 17:01

(original graphic design by Freepik)

Daily Scrum? It’s a waste of time and interrupts my work.

Daily Scrum is just a chance for the ScrumMaster to show up and micromanage.

Daily Scrum is for reporting status, but I could do that in an email.

Have you heard these complaints before? I have. But I got a new version of it last week that disappointed me to the point that I have to respond:

Daily Scrum comment

I’m all for automating things that need automation, but let’s consider what this tool implies – that Daily Scrum is wasteful. The tool’s authors want to save team members the time that is spent talking to each other, and they imply that will be an “improvement”.

Sadly, that completely misses the point of Daily Scrum.

The Scrum Guide says:

The Daily Scrum is a 15-minute time-boxed event for the Development Team to synchronize activities and create a plan for the next 24 hours. This is done by inspecting the work since the last Daily Scrum and forecasting the work that could be done before the next one. The Daily Scrum is held at the same time and place each day to reduce complexity. During the meeting, the Development Team members explain:

·  What did I do yesterday that helped the Development Team meet the Sprint Goal?

·  What will I do today to help the Development Team meet the Sprint Goal?

·  Do I see any impediment that prevents me or the Development Team from meeting the Sprint Goal?

In my courses I tell people that Daily Scrum is intended to:

  • Prepare the team for the day’s collaboration
  • Help the team sense whether they will meet the Sprint Goal
  • Find anything that is slowing the team down

None of these needs can be satisfied by an automated tool.

This activity can’t be done properly over email or Twitter – it needs to be held face to face, because its purpose is only achieved effectively when your team is involved in a dialogue. (If your team is distributed, then a real video conference is an adequate and necessary alternative.)

If a team member complains that Daily Scrum is a waste of time (or a status meeting, or an opportunity for micro-management) remind the team of the meeting’s purpose, and then ask the whole team how they would like to re-organize the activity to achieve that. Perhaps the questions being asked don’t provide the focus? Then change the questions. Perhaps they feel that the standup has turned into a status reporting meeting? Then ask them how to make it about them instead.

As usual in Scrum, asking the team to find a way of solving the problem is far more valuable than just sweeping the problem under the rug – or, in this case, switching to a tool.

Categories: Blogs

Where Does an Agile Transformation Start? Everywhere.

Leading Agile - Mike Cottmeyer - Mon, 06/01/2015 - 15:20

Okay, so your enterprise wants to start an agile transformation. Good for you! We’ll assume you know why you are doing it, what the values are and that it’s not an overnight process.

That still leaves the question: where in the organization do you start? Do you start with a small-scale, team-level approach? Do you get executive sponsorship for a top-down push? Do you work through the PMO? And what about middle management?

Org Structure Question


The answer is, yes.

Let’s look at the various entrance vectors for an agile transformation, and why they can fail.

From the Team Up

When I first gained a formal understanding of agile (like many, I’d been doing it for years without realizing it), my basic Shu understanding was very team- and individual-focused. I think my background in customer service made this a very natural place to go. As a natural extension of this I believed that “agile must grow from the teams”. If you believe you are agile, you will be.

It was at this time I first came up with my “Better people lead to better teams, better teams to better projects, better projects to better products, better products leads to better companies and better companies will make a better world.” philosophy.

Unfortunately, this is not unlike the kid with a blanket tied around his neck that jumps off his parent’s roof, in the belief he can fly. Belief will only carry you so far in the face of the law of gravity. A team level agile transformation can only go so far in the enterprise before it runs into the impediments of large organizations.

From the Top Down

At the other end of the spectrum you have agilists who firmly believe an agile transformation must come from the executive level. Without executive support, you can never conquer the agile antibodies and organizational impediments. The most common problem with this method is a failure to commit. The executive says “we’re going agile” and may even hire some consultants to come in and help.

But much like the product manager who doesn’t get the shift to being a product owner, the executive does not take part in the transformation. Mandates and visions from the C-Suite rarely succeed unless the executives are willing to invest their time directly into the effort. Even if they do, they can run into strong resistance from the middle without constant support from the top.

Meet in the Middle

For a time I believed that this was the secret to success: find a team that wanted to do an agile pilot and get the executive to support it from the top down. This too is fraught with risk. I learned it was not unlike burning the candle at both ends – pretty soon the middle is melting. Even if the agile pilot was successful, two things would rise up to crush it. The first was that most agile pilots are small-scale, high-performing projects that won’t scale across the organizational impediments. The other was that the managers in the middle had a tendency to become detractors out of sheer fear of how this would change their role.

This led me to the realization that without middle management bought in and supportive, you could not be successful. It launched me on a quest to help educate managers on what it meant to be a manager in an agile organization. While teaching managers to move from managing tasks to enabling their teams was certainly valuable, it was not the magic entry point to start a transformation. It did build on my “better people” belief, in that I was helping managers support their directs better, even if those directs were not doing agile development. But it didn’t help me find the vector to start an agile transformation.

The PMO

PMO

My focus on better managers, combined with my PMI background, led me to explore driving an agile transformation from the program management office. I really thought I was on to something here. The PMO typically owns process, or has a lot of influence on it, and as a peer to middle management it can exert some strong influence there. The problems, though, came from all directions. Teams have a somewhat understandable wariness of the “process of the month” from project managers: “These non-engineers want to tell us how to write software?” Next, while the PMO might be able to get an executive sponsor, more often than not that sponsorship extends only as far as the kick-off meeting. And while the PMO does own process, because agile calls for a fundamental change in how people managers interact with their directs, those managers are usually highly resistant.

So the bottom, the top and the middle all have their challenges for originating an agile transformation. So what do we do?

A Total Approach

While I was exploring coaching better managers, LeadingAgile’s founders, Mike and Dennis, began to realize that only a systematic approach would work to successfully transform an enterprise-scale organization to agile. By establishing an agile structure, governance and metrics, a company can bring clarity to its requirements, accountability (and ability) to the teams, and measurably track progress through working, tested software.

This approach doesn’t focus on just one approach vector. Instead it sets up an agile transformation plan from portfolio, through the program level (product owner teams) to the delivery level. When the agile pilot is done, it’s not a cutting edge XP practice or Lean Startup. Instead the pilot is testing the very first step the rest of the organization will also take. The executive sponsor is directly involved, much like a product owner should be. The managers not only know what is happening, they are directly a part of it and get the support they need to be able to support their teams, not drive them to a death march release. And of course the teams get the hands-on help to make a transition to a Shu level agile framework, the first step in a multi-legged journey of an agile transformation.

Not unlike Agile itself

When we talk about creating a stable agile team, we often use the slice of cake analogy. The Scrum team (to pick an agile framework) should have all the skills needed to release an increment of potentially shippable product. An agile transformation needs to be a slice of cake through the organization, with everyone an equal player in the transformation.

When we talk about enterprise agile release ceremonies we have release planning, sprint planning and the standup. With an agile transformation, the portfolio is the release planning, the program is the sprint planning, and the teams are the daily standup.

Conclusion

If you want a successful enterprise-scale agile transformation, you can’t start at the top, the bottom, or the middle. You have to start all along the continuum, at the same time.

And for me, it’s been a realization that my “better people, better teams” philosophy isn’t a “one leads to the next” progression. Instead you have to work with the company as a whole, to make all levels better, together. I still believe better companies will save the world, and that’s what I’m doing when I help a company do an enterprise-scale agile transformation.

The post Where Does an Agile Transformation Start? Everywhere. appeared first on LeadingAgile.

Categories: Blogs

Everyone can be a leader

thekua.com@work - Mon, 06/01/2015 - 10:33

There are so many definitions of what leadership is, so I’m not about to add another one. A nice, simple one that I like, from the Oxford dictionary, is:

The action of leading a group of people or an organisation, or the ability to do this.

Many people assume that playing a role with a title that has “Leader” in it automatically makes them a leader – although this is not always the case. In fact, I have found that sometimes people who pursue roles simply because they have a more senior association with them are not really prepared to lead a group of people.

In my consulting life, I have worked in many teams in many different roles and I have seen many acts of leadership demonstrated by people who don’t have this role.

Examples help.

Example 1: On one of my first projects in the UK that I led, a developer on the team was passionate about user experience design. He decided to do some ad-hoc user testing on the User Interface (UI) we had written and found someone willing to act as a test subject. He observed what they were doing and reported his findings back to the team. His initiative convinced us that setting aside more time to focus on usability would be a good thing to do. He demonstrated (at least to me) an act of leadership.

Example 2: During one of the Tech Lead courses I gave, I split the class into smaller groups for a number of exercises. I remember one particular group that had a large number of opinionated developers, all trying to get their view across. There was a female developer who, I noticed, listened quietly to all the opinions, waited for a pause before summarising what she heard, and asked the group if that was correct. When she reflected back what she heard, she summarised the different approaches, integrated all the opinions and provided a cohesive story that everyone agreed with. She established a clear path that allowed the team to move forward. She demonstrated an act of leadership.

Example 3: At a particular client, there was the traditional divide between the development organisation and the operations organisation (sitting on a different floor). I remember during one of our planning sessions, a developer on the team who had met someone from operations decided, unexpectedly, to invite that operations person to the planning meeting. Although it was a surprise to us, we saw the operations person’s appreciation at being involved earlier, and it probably changed the outcome of what we would have planned without them. He was passionate about the DevOps culture and demonstrated an act of leadership.

I do a lot of speaking and writing on leadership in our industry, and what I like about these examples is that they are acts of leadership that come without the authority of a title. Taking initiative and driving people towards a common goal, even in small incremental steps, are acts of leadership – which means that everyone can be a leader.

Categories: Blogs