

Who's Managing Your Company?

J.D. Meier's Blog - Mon, 07/21/2014 - 18:13

One of the best books I’m reading lately is The Future of Management, by Gary Hamel.

It’s all about how management innovation is the best competitive advantage, whether you look through the history of great businesses or the history of great militaries.  Hamel makes a great case that strategic innovation, product or service innovation, and operational innovation are fleeting advantages, but management innovation leads to competitive advantage for the long haul.

In The Future of Management, Hamel poses a powerful question …

“Who is managing your company?”

Via The Future of Management:

“Who's managing your company?  You might be tempted to answer, 'the CEO,' or 'the executive team,' or 'all of us in middle management.'  And you'd be right, but that wouldn't be the whole truth.  To a large extent, your company is being managed right now by a small coterie of long-departed theorists and practitioners who invented the rules and conventions of 'modern' management back in the early years of the 20th century.  They are the poltergeists who inhabit the musty machinery of management.  It is their edicts, echoing across the decades, that invisibly shape the way your company allocates resources, sets budgets, distributes power, rewards people, and makes decisions.”

That’s why it’s easy for CEOs to hop around companies …

Via The Future of Management:

“So pervasive is the influence of these patriarchs that the technology of management varies only slightly from firm to firm.  Most companies have a roughly similar management hierarchy (a cascade of EVPs, SVPs, and VPs).  They have analogous control systems, HR practices, and planning rituals, and rely on comparable reporting structures and review systems.  That's why it's so easy for a CEO to jump from one company to another -- the levers and dials of management are more or less the same in every corporate cockpit.”

What really struck me here is how much management approach has been handed down through the ages, and accepted as status quo.

It’s some great food for thought, especially given that management innovation is THE most powerful form of competitive advantage from an innovation standpoint (a case Hamel builds throughout the entirety of the book).

You Might Also Like

The New Competitive Landscape

Principles and Values Define a Culture

The Enterprise of the Future

Cognizant on The Next Generation Enterprise

Satya Nadella on the Future is Software

Categories: Blogs

100 Articles to Sharpen Your Mind

J.D. Meier's Blog - Mon, 07/21/2014 - 07:45

Actually, it's more than 100 articles for your mind.  I've tagged my articles with "mind" on Sources of Insight that focus on increasing your "intellectual horsepower":

Articles on Mind Power and the Power of Thoughts

Here are a few of the top mind articles that you can quickly get results with:

Note that if reading faster is important to you, then I recommend also reading How To Read 10,000 Words a Minute (it’s my ultimate edge) and The Truth About Speed Reading.

If there’s one little trick I use with reading (whether it’s a book, an email, or whatever), it’s to ask myself “what’s the insight?”, “what’s the action?”, or “how can I use this?”  You’d be surprised, but just asking yourself those little focusing questions can help you cut cluttered content down to size fast and find the needles in the haystack.

Categories: Blogs

What is Your Minimum Agile Reading List?

Johanna Rothman - Sun, 07/20/2014 - 22:44

In preparation for my talk, Agile Projects, Programs, and Portfolio Management: No Air Quotes Required, I have created a Minimum Reading List for an Agile Transition. Note the emphasis on minimum.

I could have added many more books to this list. But the problem I see is that people don’t read anything. They think they do agile if they say they do agile.

But saying you do agile doesn’t mean anything if you don’t get to done on small stories and have the ability to change. I hope that if I suggest some small list of potential books, people will read the books, and realize, “I can do this!”

I am probably crazy-optimistic. But that hasn’t stopped me before.

I would like your help. Would you please review my list? Do you have better books? Do you have better suggestions? It’s my list. I might not change my mind. However, if you comment on that page, I would know what you think.

Thank you very much.


Categories: Blogs

R: ggplot – Plotting back to back bar charts

Mark Needham - Sun, 07/20/2014 - 18:50

I’ve been playing around with R’s ggplot library to explore the Neo4j London meetup and the next thing I wanted to do was plot back to back bar charts showing ‘yes’ and ‘no’ RSVPs.

I’d already done the ‘yes’ bar chart using the following code:

query = "MATCH (e:Event)<-[:TO]-(response {response: 'yes'})
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime"
allYesRSVPs = cypher(graph, query)
allYesRSVPs$time = timestampToDate(allYesRSVPs$time)
allYesRSVPs$eventTime = timestampToDate(allYesRSVPs$eventTime)
allYesRSVPs$difference = as.numeric(allYesRSVPs$eventTime - allYesRSVPs$time, units="days")
ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")
[Chart: histogram of ‘yes’ RSVPs by days before the event]

The next step was to create a similar thing for people who’d RSVP’d ‘no’ having originally RSVP’d ‘yes’ i.e. people who dropped out:

query = "MATCH (e:Event)<-[:TO]-(response {response: 'no'})<-[:NEXT]-()
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime"
allNoRSVPs = cypher(graph, query)
allNoRSVPs$time = timestampToDate(allNoRSVPs$time)
allNoRSVPs$eventTime = timestampToDate(allNoRSVPs$eventTime)
allNoRSVPs$difference = as.numeric(allNoRSVPs$eventTime - allNoRSVPs$time, units="days")
ggplot(allNoRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="red")
[Chart: histogram of drop-out ‘no’ RSVPs by days before the event]

As expected if people are going to drop out they do so a day or two before the event happens. By including the need for a ‘NEXT’ relationship we only capture the people who replied ‘yes’ and changed it to ‘no’. We don’t capture the people who said ‘no’ straight away.
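The NEXT-relationship trick boils down to “count a ‘no’ only if it follows a ‘yes’”. As a language-neutral illustration (pure Python with a made-up data shape, not the Meetup API or the post’s code), the same filter looks like this:

```python
# Sketch of the idea behind requiring a NEXT relationship: a drop-out
# is a 'no' preceded by a 'yes'; someone who said 'no' straight away
# is not counted. Data shape is hypothetical.

def dropouts(rsvp_histories):
    """rsvp_histories maps member -> list of responses in time order."""
    out = []
    for member, responses in rsvp_histories.items():
        # a drop-out is any 'no' whose previous response was 'yes'
        if any(prev == "yes" and cur == "no"
               for prev, cur in zip(responses, responses[1:])):
            out.append(member)
    return out

histories = {
    "alice": ["yes", "no"],   # RSVP'd yes, then dropped out
    "bob":   ["no"],          # said no straight away -- not counted
    "carol": ["yes"],         # attended
}
print(dropouts(histories))    # ['alice']
```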

I thought it’d be cool to be able to have the two charts back to back using the same scale so I could compare them against each other which led to my first attempt:

library(gridExtra)
yes = ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")
no = ggplot(allNoRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="red") + scale_y_reverse()
grid.arrange(yes, no, ncol = 1)

scale_y_reverse() flips the y axis so we’d see the ‘no’ chart upside down. The last line plots the two charts in a grid containing 1 column which forces them to go next to each other vertically.

[Chart: ‘yes’ and ‘no’ histograms stacked back to back]

When we compare them next to each other we can see that the ‘yes’ replies are much more spread out whereas if people are going to drop out it nearly always happens a week or so before the event happens. This is what we thought was happening but it’s cool to have it confirmed by the data.

One annoying thing about that visualisation is that the two charts aren’t on the same scale. The ‘no’ chart only goes up to 100 days whereas the ‘yes’ one goes up to 120 days. In addition, the top end of the ‘yes’ chart is around 200 whereas the ‘no’ is around 400.

Luckily we can solve that problem by fixing the axes for both plots:

yes = ggplot(allYesRSVPs, aes(x=difference)) + 
  geom_histogram(binwidth=1, fill="green") +
  xlim(0,120) + 
  ylim(0, 400)
no = ggplot(allNoRSVPs, aes(x=difference)) +
  geom_histogram(binwidth=1, fill="red") +
  xlim(0, 120) +
  scale_y_reverse(limits = c(400, 0))

Now if we re-render it looks much better:

[Chart: back-to-back histograms on matching axes]

From having comparable axes we can see that a lot more people drop out of an event (500) as it approaches than new people sign up (300). This is quite helpful for working out how many people are likely to show up.

We’ve found that the number of ‘yes’ RSVPs for an event drops by 15-20% overall between two days before the event and the evening of the event, and the data seems to confirm this.

The only annoying thing about this approach is that the axes are repeated due to them being completely separate charts.

I expect it would look better if I can work out how to combine the two data frames together and then pull out back to back charts based on a variable in the combined data frame.

I’m still working on that so suggestions are most welcome. The code is on github if you want to play with it.
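Independent of ggplot, the “same scale” idea is just binning both series over a shared range so the counts line up bar for bar. A stdlib-only Python sketch with made-up day-differences (not the meetup data):

```python
from collections import Counter

# Hypothetical day-differences between RSVP time and event time.
yes_days = [1, 1, 2, 3, 5, 8, 13, 21, 30]
no_days = [0, 1, 1, 1, 2, 2, 3]

# Shared 1-day bins over a common range -- the equivalent of giving both
# ggplot charts the same x and y limits so they're directly comparable.
limit = max(yes_days + no_days)
yes_counts = Counter(yes_days)
no_counts = Counter(no_days)

for day in range(limit + 1):
    # Counter returns 0 for missing bins, so every row lines up.
    print(f"{day:3d} | yes {'#' * yes_counts[day]:<10} no {'#' * no_counts[day]}")
```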

Categories: Blogs

Neo4j 2.1.2: Finding where I am in a linked list

Mark Needham - Sun, 07/20/2014 - 17:13

I was recently asked how to calculate the position of a node in a linked list and realised that as the list increases in size this is one of the occasions when we should write an unmanaged extension rather than using cypher.

I wrote a quick bit of code to create a linked list with 10,000 elements in it:

public class Chains {
    private static final RelationshipType NEXT = DynamicRelationshipType.withName( "NEXT" );

    public static void main( String[] args ) {
        String simpleChains = "/tmp/longchains";
        populate( simpleChains, 10000 );
    }

    private static void populate( String path, int chainSize ) {
        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path );
        try ( Transaction tx = db.beginTx() ) {
            Node currentNode = null;
            for ( int i = 0; i < chainSize; i++ ) {
                Node node = db.createNode();
                if ( currentNode != null ) {
                    currentNode.createRelationshipTo( node, NEXT );
                }
                currentNode = node;
            }
            tx.success();
        }
        db.shutdown();
    }
}

To find our distance from the end of the linked list we could write the following cypher query:

match n  where id(n) = {nodeId}  with n
match path = (n)-[:NEXT*]->()
RETURN id(n) AS nodeId, length(path) AS length
ORDER BY length DESC
LIMIT 1

For simplicity we’re finding a node by its internal node id and then finding the ‘NEXT’ relationships going out from this node recursively. We then filter the results so that we only get the longest path back, which will be our distance to the end of the list.
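The underlying computation is a simple walk. Here’s a minimal sketch in plain Python, with a dict of next-pointers standing in for the NEXT relationships (illustrative only, not Neo4j API code):

```python
# Each node id maps to the id of its NEXT node; the tail maps to None.
next_of = {1: 2, 2: 3, 3: 4, 4: None}

def distance_to_end(node):
    """Number of NEXT hops from `node` to the tail of the list."""
    length = 0
    while next_of[node] is not None:
        node = next_of[node]
        length += 1
    return length

print(distance_to_end(1))  # 3 hops: 1 -> 2 -> 3 -> 4
```

The cypher version arrives at the same number by enumerating paths and keeping only the longest one, which is why the filtering down to a single longest path matters.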

I noticed that this query would sometimes take 10s of seconds so I wrote a version using the Java Traversal API to see whether I could get it any quicker.

This is the Java version:

try ( Transaction tx = db.beginTx() ) {
    Node startNode = db.getNodeById( nodeId );
    TraversalDescription traversal = db.traversalDescription();
    Traverser traverse = traversal
            .relationships( NEXT, Direction.OUTGOING )
            .sort( new Comparator<Path>() {
                public int compare( Path o1, Path o2 ) {
                    // longest paths first
                    return Integer.valueOf( o2.length() ).compareTo( o1.length() );
                }
            } )
            .traverse( startNode );
    int maxLength = traverse.iterator().next().length();
    System.out.print( maxLength );
}

This is a bit more verbose than the cypher version but computes the same result. We’ve sorted the paths by length using a comparator to ensure we get the longest path back first.

I created a little program to warm up the caches and kick off a few iterations where I queried from different nodes and returned the length and time taken. These were the results:

(Traversal API) Node:    1, Length: 9998, Time (ms):  15
       (Cypher) Node:    1, Length: 9998, Time (ms): 26225
(Traversal API) Node:  456, Length: 9543, Time (ms):  10
       (Cypher) Node:  456, Length: 9543, Time (ms): 24881
(Traversal API) Node:  761, Length: 9238, Time (ms):   9
       (Cypher) Node:  761, Length: 9238, Time (ms): 9941
(Traversal API) Node:    1, Length: 9998, Time (ms):   9
       (Cypher) Node:    1, Length: 9998, Time (ms): 12537
(Traversal API) Node:  456, Length: 9543, Time (ms):   8
       (Cypher) Node:  456, Length: 9543, Time (ms): 15690
(Traversal API) Node:  761, Length: 9238, Time (ms):   7
       (Cypher) Node:  761, Length: 9238, Time (ms): 9202
(Traversal API) Node:    1, Length: 9998, Time (ms):   8
       (Cypher) Node:    1, Length: 9998, Time (ms): 11905
(Traversal API) Node:  456, Length: 9543, Time (ms):   7
       (Cypher) Node:  456, Length: 9543, Time (ms): 22296
(Traversal API) Node:  761, Length: 9238, Time (ms):   8
       (Cypher) Node:  761, Length: 9238, Time (ms): 8739

Interestingly when I reduced the size of the linked list to 1000 the difference wasn’t so pronounced:

(Traversal API) Node:    1, Length: 998, Time (ms):   5
       (Cypher) Node:    1, Length: 998, Time (ms): 174
(Traversal API) Node:  456, Length: 543, Time (ms):   2
       (Cypher) Node:  456, Length: 543, Time (ms):  71
(Traversal API) Node:  761, Length: 238, Time (ms):   1
       (Cypher) Node:  761, Length: 238, Time (ms):  13
(Traversal API) Node:    1, Length: 998, Time (ms):   2
       (Cypher) Node:    1, Length: 998, Time (ms): 111
(Traversal API) Node:  456, Length: 543, Time (ms):   1
       (Cypher) Node:  456, Length: 543, Time (ms):  40
(Traversal API) Node:  761, Length: 238, Time (ms):   1
       (Cypher) Node:  761, Length: 238, Time (ms):  12
(Traversal API) Node:    1, Length: 998, Time (ms):   3
       (Cypher) Node:    1, Length: 998, Time (ms): 129
(Traversal API) Node:  456, Length: 543, Time (ms):   2
       (Cypher) Node:  456, Length: 543, Time (ms):  48
(Traversal API) Node:  761, Length: 238, Time (ms):   0
       (Cypher) Node:  761, Length: 238, Time (ms):  12

which is good news as most linked lists that we’ll create will be in the 10s – 100s range rather than 10,000 which was what I was faced with.

I’m sure cypher will reach parity for this type of query in future which will be great as I like writing cypher much more than I do Java. For now though it’s good to know we have a backup option to call on when necessary.

The code is available as a gist if you want to play around with it further.

Categories: Blogs

R: ggplot – Don’t know how to automatically pick scale for object of type difftime – Discrete value supplied to continuous scale

Mark Needham - Sun, 07/20/2014 - 02:21

While reading ‘Why The R Programming Language Is Good For Business‘ I came across Udacity’s ‘Data Analysis with R‘ courses – part of which focuses on exploring data sets using visualisations, something I haven’t done much of yet.

I thought it’d be interesting to create some visualisations around the times that people RSVP ‘yes’ to the various Neo4j events that we run in London.

I started off with the following query which returns the date time that people replied ‘Yes’ to an event and the date time of the event:

query = "MATCH (e:Event)<-[:TO]-(response {response: 'yes'})
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime"
allYesRSVPs = cypher(graph, query)
allYesRSVPs$time = timestampToDate(allYesRSVPs$time)
allYesRSVPs$eventTime = timestampToDate(allYesRSVPs$eventTime)
> allYesRSVPs[1:10,]
                  time           eventTime
1  2011-06-05 12:12:27 2011-06-29 18:30:00
2  2011-06-05 14:49:04 2011-06-29 18:30:00
3  2011-06-10 11:22:47 2011-06-29 18:30:00
4  2011-06-07 15:27:07 2011-06-29 18:30:00
5  2011-06-06 20:21:45 2011-06-29 18:30:00
6  2011-07-04 19:49:04 2011-07-27 19:00:00
7  2011-07-05 16:40:10 2011-07-27 19:00:00
8  2011-08-19 07:41:10 2011-08-31 18:30:00
9  2011-08-24 12:47:40 2011-08-31 18:30:00
10 2011-08-18 09:56:53 2011-08-31 18:30:00

I wanted to create a bar chart showing the amount of time in advance of a meetup that people RSVP’d ‘yes’ so I added the following column to my data frame:

allYesRSVPs$difference = allYesRSVPs$eventTime - allYesRSVPs$time
> allYesRSVPs[1:10,]
                  time           eventTime    difference
1  2011-06-05 12:12:27 2011-06-29 18:30:00 34937.55 mins
2  2011-06-05 14:49:04 2011-06-29 18:30:00 34780.93 mins
3  2011-06-10 11:22:47 2011-06-29 18:30:00 27787.22 mins
4  2011-06-07 15:27:07 2011-06-29 18:30:00 31862.88 mins
5  2011-06-06 20:21:45 2011-06-29 18:30:00 33008.25 mins
6  2011-07-04 19:49:04 2011-07-27 19:00:00 33070.93 mins
7  2011-07-05 16:40:10 2011-07-27 19:00:00 31819.83 mins
8  2011-08-19 07:41:10 2011-08-31 18:30:00 17928.83 mins
9  2011-08-24 12:47:40 2011-08-31 18:30:00 10422.33 mins
10 2011-08-18 09:56:53 2011-08-31 18:30:00 19233.12 mins
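For comparison, the same difference computed with Python’s stdlib, using the first row’s timestamps. This mirrors what R’s difftime is holding (here, minutes):

```python
from datetime import datetime, timedelta

# Timestamps from the first row of the data frame above.
rsvp = datetime(2011, 6, 5, 12, 12, 27)
event = datetime(2011, 6, 29, 18, 30, 0)

difference = event - rsvp          # a timedelta, the analogue of R's difftime
minutes = difference / timedelta(minutes=1)
days = difference / timedelta(days=1)

print(round(minutes, 2))  # 34937.55 -- matches the first row above
print(round(days, 2))     # the "units in days" number the chart needs
```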

I then tried to use ggplot to create a bar chart of that data:

> ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")

Unfortunately that resulted in this error:

Don't know how to automatically pick scale for object of type difftime. Defaulting to continuous
Error: Discrete value supplied to continuous scale

I couldn’t find anyone who had come across this problem before in my search but I did find the as.numeric function which seemed like it would put the difference into an appropriate format:

allYesRSVPs$difference = as.numeric(allYesRSVPs$eventTime - allYesRSVPs$time, units="days")
> ggplot(allYesRSVPs, aes(x=difference)) + geom_histogram(binwidth=1, fill="green")

that resulted in the following chart:

[Chart: histogram of ‘yes’ RSVPs by days before the event]

We can see there is quite a heavy concentration of people RSVPing yes in the few days before the event and then the rest are scattered across the first 30 days.

We usually announce events 3-4 weeks in advance, so I don’t know that it tells us anything interesting other than that people seem to sign up for events when an email is sent out about them.

The date the meetup was announced (by email) isn’t currently exposed by the API but hopefully one day it will be.

The code is on github if you want to have a play – any suggestions welcome.

Categories: Blogs

Why Testing in Women Testers Magazine

Johanna Rothman - Fri, 07/18/2014 - 16:48

I have an article in a new online magazine, Women Testers, the July 2014 edition. My article is called “Why Testing?”

When I was a tester or a developer, I asked many questions. As a project manager, program manager or consultant, I still ask many questions. One of those is the Why question. This article examines that question from a number of perspectives.

Go read that article and many others from people such as Alexandra Moreira, Bolette Stubbe Teglbjærg, Smita Mishra, Sara Tabor, Karen N. Johnson, and Mike Lyles.

I bet you’ll enjoy it!

Categories: Blogs

Project Lessons Learned vs. Sprint Retrospective – 17 Points of Comparison


Another fantastic article by Mike Caspar: Sprint Retrospective vs. Lessons Learned (a Generalization)

Mike says:

Consider reviewing these differences in your environment to determine if you are getting benefit from your Sprint Retrospectives and following their intent.


Here are a few other Agile Advice articles about Retrospectives.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

Do Teams Gel or Jell?

Johanna Rothman - Thu, 07/17/2014 - 18:46

In my role as a technical editor, I have the opportunity to read many terrific articles. I also have the opportunity to review and comment on those articles.

One such comment is what do teams do? Do they “gel” or do they “jell”?

Gel is what you put in hair. When you “gel” things, you create a thick goo, like concrete. Teams are not a thick goo. Teams are flexible and responsive.

Jell is what you want teams to do. You want them firm, but not set in concrete. When teams jell, they might even jiggle a little. They wave. They adapt. They might even do a little dance, zigging here, zapping there.

You want to keep the people in the teams as much as possible, so you flow work through the teams. But you want the people in the teams to reconsider what they do on a regular basis. That’s called retrospecting. People who have their feet in concrete don’t retrospect. They are stuck. People who are flexible and responsive do.

So, think about whether you have a gelled or a jelled team. Maybe I’m being a nitpicker. I probably am. Our words mean something.

If you have an article you’d like to publish, send it to me. You and I will craft it into something great. Whether or not your team jells.


Categories: Blogs

Agile and the Definition of Quality

J.D. Meier's Blog - Thu, 07/17/2014 - 18:06

“Quality begins on the inside... then works its way out.” -- Bob Moawad

Quality is value to someone.

Quality is relative.

Quality does not exist in a non-human vacuum.

Who is the person behind a statement about quality?

Whose requirements count the most?

What are people willing to pay or do to have their requirements met?

Quality can be elusive if you don’t know how to find it, or you don’t know where to look.  Worse, even when you know where to look, you need to know how to manage the diversity of conflicting views.

On a good note, Agile practices and an Agile approach can help you surface and tackle quality in a tractable and pragmatic way.

In the book Agile Impressions, by “the grandfather of Agile Programming”, Jerry Weinberg shares insights and lessons learned around the relativity of quality and how to make decisions about quality more explicit and transparent.

Example of Conflicting Ideas About Software Quality

Here are some conflicting ideas about what constitutes software quality, according to Weinberg:

“Zero defects is high quality.”
“Lots of features is high quality.”
“Elegant coding is high quality.”
“High performance is high quality.”
“Low development cost is high quality.”
“Rapid development is high quality.”
“User-friendliness is high quality.”

More Quality for One Person, May Mean Less for Another

There are always trade-offs.  It can be a game of robbing Peter to pay Paul.

Via Agile Impressions:

“Recognizing the relativity of quality often resolves the semantic dilemma. This is a monumental contribution, but it still does not resolve the political dilemma:  More quality for one person may mean less quality for another.”

The Relativity of Quality

Quality is relative.

Via Agile Impressions:

“The reason for my dilemma lies in the relativity of quality. As the MiniCozy story crisply illustrates, what is adequate quality to one person may be inadequate quality to another.”

Quality Does Not Exist in a Non-Human Vacuum

So many statements about quality leave the person behind them implicit.

Via Agile Impressions:

“If you examine various definitions of quality, you will always find this relativity. You may have to examine with care, though, for the relativity is often hidden, or at best, implicit.

In short, quality does not exist in a non-human vacuum, but every statement about quality is a statement about some person(s).  That statement may be explicit or implicit. Most often, the “who” is implicit, and statements about quality sound like something Moses brought down from Mount Sinai on a stone tablet.  That’s why so many discussions of software quality are unproductive: It’s my stone tablet versus your Golden Calf.”

Ask, Who is the Person Behind that Statement About Quality?

The way to have more productive conversations about quality is to find out who is the person behind a specific statement about quality.

Via Agile Impressions:

“When we encompass the relativity of quality, we have a tool to make those discussions more fruitful.  Each time somebody asserts a definition of software quality, we simply ask, “Who is the person behind that statement about quality.”

Quality Is Value To Some Person

Whose requirements count the most?

Via Agile Impressions:

“The political/emotional dimension of quality is made evident by a somewhat different definition of quality.  The idea of ‘requirements’ is a bit too innocent to be useful in this early stage, because it says nothing about whose requirements count the most. A more workable definition would be this:

‘Quality is value to some person.’

By ‘value,’ I mean, ‘What are people willing to pay (do) to have their requirements met.’ Suppose, for instance, that Terra were not my niece, but the niece of the president of the MiniCozy Software Company.  Knowing MiniCozy’s president’s reputation for impulsive emotional action, the project manager might have defined “quality” of the word processor differently.  In that case, Terra’s opinion would have been given high weight in the decision about which faults to repair.”

The Definition of “Quality” is Always Political and Emotional

Quality is a human thing.

Via Agile Impressions:

“In short, the definition of ‘quality’ is always political and emotional, because it always involves a series of decisions about whose opinions count, and how much they count relative to one another. Of course, much of the time these political/emotional decisions– like all important political/emotional decisions–are hidden from public view. Most of us software people like to appear rational. That’s why very few people appreciate the impact of this definition of quality on the Agile approaches.”

Agile Teams Can Help Make Decisions About Quality More Explicit and Transparent

Open processes and transparency can help arrive at a better quality bar.

Via Agile Impressions:

“What makes our task even more difficult is that most of the time these decisions are hidden even from the conscious minds of the persons who make them.  That’s why one of the most important actions of an Agile team is bringing such decisions into consciousness, if not always into public awareness. And that’s why development teams working with an open process (like Agile) are more likely to arrive at a more sensible definition of quality than one developer working alone. To me, I don’t consider Agile any team with even one secret component.”

The "Customer" Must Represent All Significant Decisions of Quality

The quality of your product will be gated by the quality of your representation.

Via Agile Impressions:

“Customer support is another emphasis in Agile processes, and this definition of quality guides the selection of the ‘customers.’ To put it succinctly, the ‘ customer’ must actively represent all of the significant definitions of ‘quality.’ Any missing component of quality may very likely lead to a product that’s deficient in that aspect of quality.”

If You Don’t Have Suitable Representation of Views on Quality, You’re Not Agile

It’s faster and far more efficient to ignore people and just get your software done.  But it’s far less effective.  You amplify your effectiveness in addressing quality by involving the right people, in the right way, at the right time.  That’s how you change your quality game.

Via Agile Impressions:

“As a consultant to supposedly Agile teams, I always examine whether or not they have active participation of a suitable representation of diverse views of their product’s quality. If they tell me, ‘We can be more agile if we don’t have to bother satisfying so many people,’ then they may indeed be agile, but they’re definitely not Agile.”

I’ve learned a lot about quality over the years.  Many of Jerry Weinberg’s observations and insights match what I’ve experienced across various projects, products, and efforts.   The most important thing I’ve learned is how much value is in the eye of the beholder and the stakeholder and that quality is something that you directly impact by having the right views involved throughout the process.

Quality is not something you can bolt on or something that you can patch.

While you can certainly improve things, so much of quality starts up front with vision and views of the end in mind.

You might even say that quality is a learning process of realizing the end in mind.

For me, quality is a process of vision + rapid learning loops to iterate my way through the jungle of conflicting and competing views and viewpoints, while bringing people along the journey.

Categories: Blogs

Broadening Developer Horizons

Agile Coaching - Rachel Davies - Thu, 07/17/2014 - 16:28

XP is an approach that helps us deliver valuable software iteratively. To apply it, we need to set up our teams so that releasing changes to customers is as easy as possible. We avoid waiting around for individual team members to make changes by applying classic XP practices -- Collective Code Ownership and Pair Programming. Each pair of developers is free to change any code they need to without anyone vetting their changes; they ensure that all tests pass and keep the code relatively clean by refactoring as they go. We share knowledge across the team by rotating pairs daily. If a pair runs into a difficult design decision, they can call a huddle with their team mates; sitting together in an open workspace means that's quick to do. This XP way of developing code is liberating, as we can easily make changes in the right place rather than working around organisational barriers. It can also be humbling, as our code is often improved by other developers as they pass through.

To work this way, we find it helps to build teams of extremely capable developers who can work on any area of the codebase, rather than hiring a mix of frontend/backend/DBA specialists. Developers who only know enough to work in a single layer of the codebase limit who's available to pair on the piece of work that is most valuable to pick up next. At Unruly, we only hire “full-stack” developers; this gives us confidence that any pair of developers can work on any area of the codebase (within the products their team is responsible for) without hand-offs and delays waiting for developers with a different skill set. It also helps avoid some of the friction that single-layer thinking can spark.

To make collective code ownership easier, some product teams select a homogeneous stack, such as Clojure with ClojureScript, or JavaScript all the way down using Node. At Unruly, our developers need to be fluent in JavaScript and Java, with a smattering of Scala. Full-stack developers are bright people who can keep pace with developments in multiple languages and frameworks rather than immersing themselves in a single core development language. Being a full-stack developer is more than being able to write code in different languages: you have to understand idioms and patterns for the UI, middleware, and database realms too.

Being a full-stack developer is also much more than becoming a polyglot programmer. Laurence Gellert explains in his blog that there's a greater breadth of skills that a “full-stack” developer needs. You'll need to appreciate the environment that your live system runs within and have the technical chops to be at home making environment changes. You'll also need to broaden your horizons beyond thinking about code and get to grips with developing a fuller understanding of the business you work in! Michael Feathers recently gave a talk in London where he used the term “Full Spectrum Developer”, which neatly captures the idea that there's much more to this than being able to work across different software layers in a given architecture.

As the software craftsmanship movement has brought to the fore, serious developers need to take personal responsibility for improving their skills. Of course, becoming a full-stack developer is more than reading the odd business book in your spare time and writing toy programs in obscure languages when you get home from a long day at work. You can also get together with likeminded developers on a regular basis to hone your skills through Code & Coffee sessions outside work, and work on pet projects like building games and mobile apps at home. But in my opinion, this only scratches the surface - you will only get to grips with being a full-spectrum developer by working in an environment that allows you to get your hands dirty across the full stack and interact directly with users and stakeholders. Typically these are startups or small companies that practice agile software development. If you take a look at our current open roles, you’ll see they’re much broader than you’d typically find in a large corporation.

As an agile coach working with product development teams at Unruly, my focus is on how we can support developers to expand their horizons, to gain a better understanding of our business and how they can help figure out the most valuable software to deliver iteratively. Our developers take responsibility for researching different strands of product development and identify the most valuable ideas to take through to implementation (I'll write-up more about how we do this in another post soon).

We also recognise that building learning time into our work week is essential for developers to stay abreast of new tools and frameworks. All of our developers get one day per week to dabble in and learn new technologies — see my previous post about Gold Cards. We recognise that industry conferences can be places where you hear about new trends, so developers get three days and an annual allowance to spend on attending any conference they feel is relevant to their personal development at work. Our developers also take turns running weekly coding dojos (during work time, not through their lunch) to get hands-on practice time with new languages such as Go, Scala and Rust, and with mobile phone application development. Developers get the opportunity to share what they learned with other teams through lightning talks, which gives them practice in presenting too. All of these things are ways that organisations can support developers in broadening their horizons while at work rather than eating into their early mornings and evenings.

There are a few things for developers to weigh up when considering whether to specialise deeply or broaden their horizons: what do you sacrifice by following one path, and what rewards do you gain? The main reward for full-spectrum developers is greater confidence to dive into different technologies; you may spend less time writing code but become more able to deliver end-to-end solutions that hit the spot. As a generalist, you likely have a wider choice of companies to work at and are more resilient to industry trends. As a specialist, you gain the pleasure of total immersion in a particular sphere of software, while you build tolerance for the frustration of waiting around for others to do their bit. It's up to you!

Categories: Blogs

Conventional HTML in ASP.NET MVC: Baseline behavior

Jimmy Bogard - Thu, 07/17/2014 - 16:11

Other posts in this series:

Now that we’ve got the pieces in place for building input/display/label conventions, it’s time to circle back and figure out what exactly we want our default behaviors to be for each of these components. Because it’s so easy to modify the tags generated programmatically, we can establish some pretty decent site-wide behavior for our system.

First, in order to establish a baseline, we need to examine what our current implicit standards are. Right now I’m focused solely on the input/label/display elements, and not on how these elements are typically composed together (label + input etc.). Looking at several of our inputs, we see a prevalent pattern: all of the input elements (ALL of them) have a CSS class appended to them for Bootstrap, “form-control”. Appending this through the normal MVC templated helpers is actually quite difficult and esoteric. For us, it’s a snap.

First, let’s create our own HtmlConventions class that inherits from the default:

public class OverrideHtmlConventions 
    : DefaultHtmlConventions {
}

We’ll then redirect our container configuration to use this convention library instead:

public class FubuRegistry : Registry {
    public FubuRegistry() {
        var htmlConventionLibrary 
             = new HtmlConventionLibrary();
        var conventions
             = new OverrideHtmlConventions();
        // ...the conventions are then applied to the library
        // and registered with the container...
    }
}

The OverrideHtmlConventions class is where we’ll apply our own conventions on top of the existing ones. The base conventions class lets us apply conventions to several classes of items:

  • Displays
  • Editors
  • Labels

And a couple of things I won’t cover as I’ve never used them:

  • Forms
  • Templates

There’s no real difference between the Displays/Editors/Templates conventions – it’s just a way to segregate strategies and conventions for building different kinds of HTML elements.

Conventions work by pairing a filter and a behavior. The filter is “whom to apply” and the behavior is “what to do”. You have many different levels of applying filters:

  • Always (global)
  • Attribute existence (w/ filtering on value)
  • Property metadata

The last one is interesting – you have the entire member metadata to work with. You can look at the property name, the property type and so on.

From there, your behaviors can be as simple or complex as you need. You can:

  • Add/remove CSS classes
  • Add/remove HTML attributes
  • Add/remove data attributes
  • Replace the entire HTML tag with a new, ground-up version
  • Modify the HTML tag and its children

You have a lot of information to work with, the original value, new value, containing model and more. It’s pretty crazy, and a lot easier to work with than the MVC metadata (which goes through this ModelMetadata abstraction).
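To make the filter/behavior pairing concrete, here is a hypothetical sketch (the method names are assumed from the HtmlTags conventions API, so treat this as illustrative rather than exact) showing a type-based filter and a metadata-based filter:

```csharp
public class OverrideHtmlConventions : DefaultHtmlConventions
{
    public OverrideHtmlConventions()
    {
        // Filter: boolean properties; behavior: add a CSS class.
        Editors.IfPropertyIs<bool>().AddClass("checkbox");

        // Filter: property metadata (here, the member name);
        // behavior: modify an attribute on the generated tag.
        Editors.If(er => er.Accessor.Name.Contains("Email"))
               .ModifyWith(m => m.CurrentTag.Attr("type", "email"));
    }
}
```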

I want to set up our “Always” conventions first, which means really only adding CSS classes. The input elements are easy:
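(The code sample here appears to have been lost in syndication. Based on the surrounding description - every input gets Bootstrap’s “form-control” class - a plausible sketch using the HtmlTags conventions API would be:)

```csharp
public class OverrideHtmlConventions
    : DefaultHtmlConventions
{
    public OverrideHtmlConventions()
    {
        // Every editor (input) element gets Bootstrap's form-control class.
        Editors.Always.AddClass("form-control");
    }
}
```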


Our input elements become a bit simpler now:

@Html.TextBoxFor(m => m.Email, new { @class = "form-control" })
@Html.Input(m => m.Email)

Our labels are a bit more interesting. Looking across the app, it appears that all labels have two CSS classes applied, one pertaining to styling a label, and one pertaining to width. At this point we need to make a judgment call. Do we standardize that all labels are a certain width? Or do we force all of our views to explicitly set this class?

Luckily, we can still adopt a site-wide convention and replace this CSS class as necessary. Personally, I’d rather standardize on how screens should look rather than each new screen becoming a point of discussion on how wide/narrow things are. Standardize, but allow deviation. Our label configuration now becomes:
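(The label convention sample was also lost in syndication. Given the text - one class for label styling, one for width - a hedged sketch, assuming Bootstrap’s usual class names, might look like:)

```csharp
public OverrideHtmlConventions()
{
    // One class to style the label, one to standardize its column width;
    // individual screens can still swap the width class out.
    Labels.Always.AddClass("control-label");
    Labels.Always.AddClass("col-md-2");
}
```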


Then in our HTML, we can replace our labels with our convention-based version:

<div class="form-group">
    @Html.Label(m => m.Email)
    <div class="col-md-10">
        @Html.Input(m => m.Email)
    </div>
</div>
<div class="form-group">
    @Html.Label(m => m.Password)
    <div class="col-md-10">
        @Html.PasswordFor(m => m.Password, new { @class = "form-control" })
    </div>
</div>
<div class="form-group">
    @Html.Label(m => m.ConfirmPassword)
    <div class="col-md-10">
        @Html.PasswordFor(m => m.ConfirmPassword, new { @class = "form-control" })
    </div>
</div>

It turns out in this app so far we’re not using display elements, but we could go a similar path (surrounding the element with a span tag etc).

So what about those other methods, the “PasswordFor” and so on? In the next article, we’ll look at replacing all of the form helpers with our version, based solely on metadata that already exists on our view models.


Categories: Blogs

Speaking at Agile2014

AvailAgility - Karl Scotland - Thu, 07/17/2014 - 15:46

Agile 2014

I’m going to be at Agile2014 in Orlando this year – the first time for a few years – and I’m looking forward to reconnecting with lots of people I haven’t seen for a very long time. If you see me, make sure you say “hi!”.

This post is to highlight a couple of things I’ll be doing there - or at least one thing I hope to be doing, and one I definitely will be!

First the hope. I’ve submitted a Pecha Kucha on Heuristics for Kanban Thinking, with which I want to introduce the questions asked on the Kanban Canvas. Here’s the description:

This talk will explore how heuristics can be used to frame Kanban Thinking and enable a problem-solving capability. I will introduce a set of questions which can be used to encourage creative thinking from multiple perspectives: from understanding the problems, to imagining the desired impacts, to designing potential interventions.

Please vote this up to help it get chosen. Otherwise I’ll just have to find another way of talking about the canvas (although I’ll probably run an Open Jam session anyway!).

Secondly, the definite. I’m running a workshop on the Enterprise Agile stage with my friend and former Rally colleague Rachel Weston Rowell. (It’s on Tuesday, Jul 29, 14:00–15:15 in Osceola A.) The topic is Getting the X-Factor: Corporate Planning for the Agile Business. Here’s the description:

The pace of change is accelerating as technology advances, the economy becomes more global and markets become increasingly disruptive. As a result, organisations are surviving for dramatically shorter periods of time. For example, the average lifespan of organisations on the Standard & Poor’s 500 index has fallen by over 50 years over the last century, from 67 to 15 years. To survive, businesses need to change the way they operate at a corporate level, as well as becoming more Agile in their delivery capability. This involves moving to a model of co-creating and deploying an evolving corporate strategy, rather than centrally selecting and defining its rigid implementation, in order to create clear alignment, transparency and adaptability.

Join Karl and Rachel as they share the latest learnings from Rally Software’s journey of evolving its quarterly planning and steering. They will introduce a tool they have recently discovered and had positive experiences with: the X-matrix, through which strategy deployment can be achieved. This is a simple, single-A3-page format which visualises the correlations and contributions between strategies, tactics, improvements, results and departments. In this session you will work through completing an X-matrix for an example organisation.

Please come along! We ran this at RallyON, got great feedback, and had a lot of fun.


Categories: Blogs

Scrum Sprint Planning is Shopping Time!

Agile Thinks and Things - Oana Juncu - Thu, 07/17/2014 - 08:27
"Throw a purpose in the middle of the group, and it starts to self-organize" - Michelle Holliday at CultureCon, Boston 2014.

I do believe the secret of Agile's magic dynamics is its purposeful mindset: every practice is organized to sustain one thing: creating a product (eventually software) at a regular pace, whose usefulness is validated at the same pace. Scrum calls this pace a sprint. Because the inner motivation of each individual, team or organization comes from the validated usefulness of what they deliver, the demo at the end of the sprint is fundamental. So better make it truly entertaining. The other cool tip of Scrum product development is the sprint planning, a thrilling moment when we plan how exciting the story of the new product will be at the end of the sprint.

Sprint Planning - Back to Basics

There are no simpler game rules than those of sprint planning in a software product development context: the development team says how many story points it can address during the sprint. The Product Owner picks her favorite user stories (well, "favorite" means with the highest business value ;)) for a total amount of story points that fits the number given by the development team. That's all. Sprint planning is like shopping: the Product Owner can do some favorite User Stories shopping in the Backlog Store for the amount of Story Points credited by her development team. Ain't that a cool plan? I think it is! So why do so many teams struggle with it?

Sprint Planning Is NOT Estimation Planning

Many Scrum teams seem to lack a light, purposeful (from a user-experience perspective) version of the backlog that can be enriched with the feedback from different users and stakeholders, the changes in strategy, and the zoom-in on details and specific scenarios. This often leads to an unclear stack of user stories, and one that is not very user-centric. Often the confrontation with the backlog takes place at sprint planning time, when the development team performs an "emergency estimation" that lets it plan some-work-no-matter-what for the next sprint. This "emergency" approach is exhausting; the session may take hours, and everyone has only one dear wish: to get it over with! I even wonder if the NoEstimates movement didn't arise from the legitimate despair of some participants in such sprint planning sessions.

The story of sprint planning is worth some rewriting, based on a key rule: there is no user story estimation during the sprint planning. As an Agile team (i.e. a product-centric team), do whatever it takes, from product envisioning to story mapping, to have a fluent backlog story. Then, if Planning Poker sessions need to be held, plan them during the sprint, in line with their business value and their alignment with the product vision (i.e. purpose). These sessions generate insights and synchronisation opportunities among all members of a team. Such insights cannot be achieved under sprint-start time constraints.

NoEstimates And The Pleasure of Shopping Time

The NoEstimates movement and approach is highly interesting and revolutionary. Basically, it states that since whatever it takes will be spent to deliver what has the most important business value, people shouldn't waste time estimating that most important work. While I fully agree with the NoEstimates mindset, I have always found Planning Poker sessions a great token for discussion among development team members that empowers a shared understanding of what needs to be done. And it would be a pity to lose the excitement of the "sprint planning shopping time" for the next "demo party", where the product team prepares what the new adventures of the users of the new product will be, how the new blast quality of the code can be shown in the demo and... what type of cookies will be served at demo time to customers (why not!).

Related posts
Why Agile Incremental Development Hooks Us
Story mapping tells the story of Your Product
Demo Driven Development

Categories: Blogs

The Last Job on Earth: Part IV

Indefinite Articles - John Brothers - Wed, 07/16/2014 - 20:00

So, hopefully at this point I’ve established that computer programming and understanding the law and the legal process are somewhat similar.

And that understanding the legal process without tremendous amounts of specialized knowledge is incredibly difficult.

And hopefully I’ve convinced you that the claim that “Everyone can code” is far, far too simplistic, given the tremendous amount of domain knowledge involved.

But let’s add another claim:  programming is a form of art.    Sure, it’s a form of engineering.  But Stonehenge was also the output of engineering, and yet we recognize now that there is beauty and art involved in Stonehenge, not just engineering.   Even the Mona Lisa involved modest amounts of engineering:  the viscosity of the paint, the pressure of brush upon canvas, the manufacture of the canvas itself.   All of those things are engineering disciplines.   But the output was far more meaningful than the engineering involved.

When you start a software project, it is often an exceptionally wide-open field.  You may have some constraints, some requirements, but significant portions of the work to be done are undiscovered.

There’s an old story about how to create a statue of an elephant.  You take a block of stone, and you chip away everything that doesn’t look like an elephant.   That’s deceptively simple, isn’t it?   Really, the art isn’t in the chipping, it’s in the recognition that bits and pieces are starting to look like an elephant.     In many ways, building software is taking a big empty block of “all possible solutions”, chipping away to remove things we don’t want, adding more to fill in parts we do want, until there’s nothing left but the system we want (or the customers want).

Just like sculpture, software is a creative process.

Categories: Blogs

Most excellent life tips

Indefinite Articles - John Brothers - Wed, 07/16/2014 - 14:25

Usually these life tips are bizarre or stupid. These are the best I’ve seen.

Categories: Blogs

R: Apply a custom function across multiple lists

Mark Needham - Wed, 07/16/2014 - 07:04

In my continued playing around with R I wanted to map a custom function over two lists comparing each item with its corresponding items.

If we just want to use a built in function such as subtraction between two lists it’s quite easy to do:

> c(10,9,8,7,6,5,4,3,2,1) - c(5,4,3,4,3,2,2,1,2,1)
 [1] 5 5 5 3 3 3 2 2 0 0

I wanted to do a slight variation on that where instead of returning the difference I wanted to return a text value representing the difference e.g. ‘5 or more’, ‘3 to 5’ etc.

I spent a long time trying to figure out how to do that before finding an excellent blog post which describes all the different ‘apply’ functions available in R.

As far as I understand ‘apply’ is the equivalent of ‘map’ in Clojure or other functional languages.

In this case we want the mapply variant which we can use like so:

> mapply(function(x, y) {
    if((x-y) >= 5) {
        "5 or more"
    } else if((x-y) >= 3) {
        "3 to 5"
    } else {
        "less than 5"
    }
  }, c(10,9,8,7,6,5,4,3,2,1), c(5,4,3,4,3,2,2,1,2,1))
 [1] "5 or more"   "5 or more"   "5 or more"   "3 to 5"      "3 to 5"      "3 to 5"      "less than 5"
 [8] "less than 5" "less than 5" "less than 5"

We could then pull that out into a function if we wanted:

summarisedDifference <- function(one, two) {
  mapply(function(x, y) {
    if((x-y) >= 5) {
      "5 or more"
    } else if((x-y) >= 3) {
      "3 to 5"
    } else {
      "less than 5"
    }
  }, one, two)
}

which we could call like so:

> summarisedDifference(c(10,9,8,7,6,5,4,3,2,1),c(5,4,3,4,3,2,2,1,2,1))
 [1] "5 or more"   "5 or more"   "5 or more"   "3 to 5"      "3 to 5"      "3 to 5"      "less than 5"
 [8] "less than 5" "less than 5" "less than 5"
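For readers coming from other languages, the summarisedDifference function maps fairly directly onto Python’s built-in map, which also walks several sequences in parallel (illustration only; the snake_case name is mine):

```python
def summarised_difference(one, two):
    """Rough Python analogue of the R summarisedDifference / mapply example."""
    def label(x, y):
        # Same banding logic as the R anonymous function.
        if x - y >= 5:
            return "5 or more"
        elif x - y >= 3:
            return "3 to 5"
        else:
            return "less than 5"
    # map() consumes both lists pairwise, like mapply over two vectors.
    return list(map(label, one, two))

print(summarised_difference([10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
                            [5, 4, 3, 4, 3, 2, 2, 1, 2, 1]))
```

One difference to note: R’s mapply recycles a shorter argument (which is why comparing a whole list against the single value 1 works above), whereas Python’s map simply stops at the shortest sequence.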

I also wanted to be able to compare a list of items to a single item which was much easier than I expected:

> summarisedDifference(c(10,9,8,7,6,5,4,3,2,1), 1)
 [1] "5 or more"   "5 or more"   "5 or more"   "5 or more"   "5 or more"   "3 to 5"      "3 to 5"     
 [8] "less than 5" "less than 5" "less than 5"

If we wanted to get a summary of the differences between the lists we could plug them into ddply like so:

> library(plyr)
> df = data.frame(x=c(10,9,8,7,6,5,4,3,2,1), y=c(5,4,3,4,3,2,2,1,2,1))
> ddply(df, .(difference=summarisedDifference(x,y)), summarise, count=length(x))
   difference count
1      3 to 5     3
2   5 or more     3
3 less than 5     4
Categories: Blogs

Making Sense of Complexity

TV Agile - Tue, 07/15/2014 - 22:23
A recent Gartner report identified the importance of the Cynefin framework for IT departments as a sense-making methodology, suggesting a significant market share by 2016. Cynefin is emerging as one of the main approaches to understanding complexity within the Agile community and provides a means to integrate, and understand the proper boundary conditions between methods […]
Categories: Blogs


Indefinite Articles - John Brothers - Tue, 07/15/2014 - 18:14

I changed permissions on my home directory, making sure I can still do stuff with my apps

Categories: Blogs

Validation and the Standing Desk

Leading Agile - Mike Cottmeyer - Tue, 07/15/2014 - 16:36

Validation is an engineering activity. In many ways it’s very much how engineers tell a product, “you’re awesome!”

Unfortunately, many people don’t really understand what engineering validation is. They think it’s something that happens at the end of all the other work, performed only on the finished product in the form of “testing.”

Yes, you test the product at the end, but how do you know the product will work when placed into service in the users’ environments? As they intended to use it and as you intended it to be used? In fact, how do you know–when you only test at the end–whether the idea behind how you think the user will use it and how they think they will want to use it are in sync?

Validation takes place much earlier than at the end when the product is tested. Validation can even strongly influence design and construction. When used to validate requirements, design, construction or integration concepts, validation is likely to mitigate the risk of spending expensive engineering and construction hours on a product that might not be as useful or might be over-engineered for the way in which it will be used.

As an engineer I have countless examples of validation at work in the real world. In fact, a few years ago I wrote this blog and video about the Apple iPhone 4 signal interference fiasco where it was found that a known electrical design flaw in the body of the phone was allowed to remain in the product on the claim that “no one will use it that way” only later to be “validated” by the user community. Still, plenty of people don’t understand how validation adds value prior to testing.

For some time I had contemplated using a standing desk (a desk that’s high enough to work at while standing) but never did anything about it. A back muscle injury that was taking longer than I’d like to heal got me thinking about the desk again, and so I began researching standing desks in earnest. Of course, I also looked at a number of “do it yourself” options that would be aesthetically pleasing, cost-effective and would satisfy my life-hack geekery.

I also looked at the space in which this desk would be placed and realized I would have to find somewhere for my current desk to live. Though not ideal or long-term, a temporary home for the current desk would likely be found in a kid’s room until we’d need to supplement it with a way for more than one kid to work at it at once.

This is when I had the following thought: just raise your current desk! (Note that these mental iterations are full of design, integration and validation considerations, as well as “verification”, which are all subjects for other times.) I turned to envisioning various ways to raise the current desk and settled on an idea that would have occurred to me first had this been presented to me in college: milk crates.

I would place a pair of milk crates under each of the legs of the desk (which are actually rails/skids and not four classic legs) and they would simply raise the entire desk. I wouldn’t need to reconfigure the desk in any other way. Everything on the desk–as well as how I use it–could remain as is. What more could I ask for? As long as the milk crates would raise the desk enough for me to stand at it and use it comfortably, I’d be all set.

So I researched the sizes and styles of milk crates, compared these dimensions to the desk and verified that raising the desk by the height or depth of a milk crate would be sufficient for me. I selected inexpensive milk crates from the national department store chain with the bulls-eye logo.

Due to the particular construction of my desk (note the earlier comment about non-classic legs), as well as the lower structural strength of the less expensive crates, I decided that placing wooden boards on the crates and setting the desk onto the boards would give the whole “towering” assembly better strength, durability and stability. BONUS! I happened to have bookshelf boards left over from a bookcase we had dismantled and disposed of long ago. After moving from basement to top floor within our first home, then moving with us again to two more homes, these “assemble-at-home” shelving units just couldn’t handle the stress, and their exoskeletons gave up their respective ghosts. But their shelves were perfectly intact and strong. In other words, perfect for their next role as planks for my desk to stand on.

With milk crates and boards strategically placed by the sides of my desk, I was ready to have my wife help me heave the desk into the air while we enlisted one of our kids to slide the board-topped crates into place underneath.

But, one last check.

I’d really hate for the whole affair to go down only to find that the height wouldn’t actually be enough. Yes, I had verified the dimensions; yes, I had verified that the design was logical; and I had even placed a board on a pair of crates and stood on them to ensure that the crates we bought were as structurally sound as expected. But really, how hard could it be to simulate the new height of the desk? Not hard at all. All I needed to do was place a milk crate *on* the desk and see whether the added height would meet my needs.

And that’s when it happened.

Not only would the added height meet my needs, but this simple validation action resulted in a complete redesign of my idea: one that is not only easier to carry out but also easy to undo if we want. In fact, this validation activity resulted in a design that uses less material, requires less (almost no) lifting, and even adds usable space to my desk.

Instead of putting the desk on four milk crates and two boards I put two milk crates on my desk! With the open end of the crates facing forward and one board across the two crates, my laptop, mouse and phone now sit on the board, and the space beneath is available for stuff. The only adjustment required was to move the second monitor to the top of the hutch (which I wasn’t using productively anyway).

The space behind the crates is still accessible and now has a bunch of the stuff I barely touched–from previously on top of the hutch.

Among the many unexpected benefits of this validation, one was much more profoundly unexpected. Had anyone suggested putting crates on top of my desk instead of underneath it, the idea would likely have been rejected on the face of it before I saw what it looked like when I actually did it. (Superficially, it is not as pleasing to the eye.)

This is why we do validation.

Had I waited to “test” my newly heightened desk after it was up on crates, I would have certainly been pleased with the results. My desk would be as it always was–only higher–and it would be at a good height and entirely usable. It would have consumed the resources allocated to it and been on budget. Instead, the validation gave me an even more functional product, for half the resources and budget and perhaps 20% of the expected manual effort.

Or… you can just keep waiting for “testing” to do your validation.

The post Validation and the Standing Desk appeared first on LeadingAgile.

Categories: Blogs