Feed aggregator

New TeamForge Provides Tighter Integrations with JIRA

Scrum Expert - Tue, 09/22/2015 - 17:32
CollabNet has announced a new release of TeamForge. TeamForge now allows enterprise software teams to easily manage a wider range of tools and processes to streamline the increasingly complex software development lifecycle. Organizations leveraging open source, scaling Agile and adopting DevOps now have deeper levels of traceability and reporting, and tighter integrations with JIRA, Git, Subversion and other leading tools to create a unified software ...
Categories: Communities

Managers Don’t Have to Hate Agile

Danube - Tue, 09/22/2015 - 17:00

Earlier this year Forbes published an article titled “Why Do Managers Hate Agile?”  The author, Steve Denning, builds a case for managers hating Agile due to “management” and “Agile” being defined as two different worlds. It’s like Men are From Mars and Women are from Venus, only we’re talking about the IT world and management and developers, instead of men and women. The article caught my attention for the obvious reason that CollabNet sells products and services to help support Agile development efforts, sparking the question, “Why would managers hate Agile?”

In the article, the first world of “Management” is referenced as “The vertical world of hierarchical bureaucracy”.  Management is seen as the boss at the top of the totem pole and individuals at the bottom. This world is governed by roles, rules, plans and reports.

The second world of “Agile” is referenced as “The horizontal world of Agile,” and described as one operating horizontally with a focus on the customer instead of a vertical dynamic with people reporting upwards to bosses. Therefore, there is tension between the two worlds and the perception that managers hate Agile because it turns their world on its side and creates confusion. Management may not understand the needs of Agile developers and vice versa. The two entities speak a different language and don’t know how to communicate with each other. Sound familiar?

So what is the solution to relieve the tension for Agile and enable these two different worlds to not only get along, but also to work collaboratively to produce innovative products and services for their customers? The answer is…transparency. It sounds so simple…transparency, visibility, awareness. Yes, this is the key for managers to truly understand Agile developers and begin to work together.


The concept of transparency is one that CollabNet wholeheartedly embraces. Our TeamForge platform is specifically designed to give developers the tools they want to use and managers the end-to-end visibility and traceability they need – across the entire tool chain – and advanced data analytics and trending reports. As a result, transparency creates trust between management and developers.

When managers have visibility into the work of their developers and access to the reports they need, then the two worlds can come together not as enemies, but as partners. And the result is that managers don’t have to hate Agile.

What is your experience with the two Worlds of “Management” and “Agile”?


The post Managers Don’t Have to Hate Agile appeared first on

Categories: Companies

The Responsibility Process, Context, and Safety

Agile For All - Bob Hartman - Tue, 09/22/2015 - 16:15

The Responsibility Process™ is a practice (some may argue it’s more or less than a practice) that helps us move towards more self-mastery. Being able to facilitate ourselves is all about emotional intelligence – our ability to recognize and react appropriately in the moment to our emotions.

The Responsibility Process has been around for quite some time, so I’m still surprised that so few people are aware of it. The Responsibility Process is a structure to become aware of the states we all move through or get stuck in, as we aim to move to a mindset of responsibility. The Process was created by Christopher Avery who spent years researching the idea (you can read more at Christopher Avery’s website or just search Google!).

A Summary

Through years of research, Avery determined that responsibility is not a character trait that some people have and others don’t.


Falling out of the mental state of responsibility does not look quite like this, but it might feel this way!

Responsibility is something that we can see and learn — it is a mental state.

That alone is an interesting concept, since we often say ‘that person is irresponsible.’ Maybe what we mean is, ‘based on my perception, that person is acting irresponsibly’?

There are a number of mental states that we go through when something interrupts our movement towards a goal. Ideally, we move through them to get to responsibility. These are emotions or feelings that we are better able to manage if we stay aware of and engaged with the process and can work from responsibility.

I’m going to explain the states from bottom to top (referencing the poster), since these are the states we typically move through sequentially. When we are able to ‘figure out’ one state, we are effectively moving on to the next one.

When we fall out of the mental state of responsibility, we end up searching for answers and end up in these other states.

  • Denial – We are not aware of the issue. Some say this is ignoring the problem, but it’s really not even knowing there is an issue to contend with.

    The Responsibility Process Poster — you can download official posters at Christopher Avery’s website above.

  • Lay Blame – When we are here, we believe that external forces are at fault. This is all about giving away our power because we are convinced that someone else is in control. If we realize that blaming leaves us no power to effect any change, we can move to the next state.
  • Justify – Justify is about rationalizing why the situation makes sense. You might hear “it has always been this way” as one type of justification. Something ‘out there’ must change for the issue to go away. Moving to this state from blaming someone for a problem might be a solid step forward, but realize that if you are now justifying why that person caused a problem (e.g. they have never been trained), you still are not at a place of ownership. In justify, you are still in a powerless state, believing again that external forces must change.
  • Shame – Shame is self-blame. As we move past believing that external forces are at fault, we turn the fault inward on ourselves. We might say “I can’t believe I did that again” or “I’m such an idiot for falling for that one.”
  • Obligation – This state is about believing you are doing what you must. We make commitments because we have to do ‘this!’ But if we are stuck doing things where we feel we do not have any choice, we don’t have ownership.
  • Quit – This is simply a way to attempt to avoid the issue. You might be trying to avoid the drama or trauma of the situation, or you might simply be exhausted by obligation or shame. In any case, we check out. This can feel a lot like stonewalling.
  • Responsibility – Avery says this means “I have the power and the ability to find and resolve the real problem.” When I have the mindset that this is mine to solve or to act on, I can access new options to make changes — options that often simply did not exist in the other states of the process.

An example: I was in a situation a while back where I volunteered to help someone with a class. While I was only a volunteer, it was an important part of the class. I got a call that a client needed me at the same time. I felt obligated both to take the work and to help with the class. I was mad at myself for getting into the situation and quite upset that I was going to let this person down. The person I was going to volunteer for said they understood, but was also clearly not happy about the situation (which I understood!). I could justify why I needed to take the client work, since it paid and the volunteer position did not — or say, “I need to be ‘responsible’ for my family!” So many options! When I actually shifted to responsibility, I realized that I did need to take the client work, but also needed to go out of my way to find someone who could step in to replace me. Finding a replacement truly seemed impossible when I was in the coping states of blame, justify, shame, and obligation. I did find someone to replace me and they were very excited to do it. In the end, shifting allowed me to see more options and press through them even though they were challenging.


Ah, awareness. If we all had it all the time perhaps we could gleefully ride to responsibility and simply wave to the coping states on the way there!

We may only be in a state for a second or even less. You might find that as you start to blame you switch mid-sentence to justify. So they are not all necessarily resting places where we stay for a long period of time. Imagine yourself riding a bike towards responsibility and notice off to the side that blame and justify are there, sitting on the curb. They ask you to stop and chat for a while, but you simply smile and wave to them and continue on your journey to responsibility.

Doing the self-work to move to a responsibility mindset is a critical part of growing as a human and is a requirement for good leadership (regardless of your role or title).

Context and Safety

While I’ve used The Responsibility Process for myself and within organizations, as I started to work on this article, I realized I tended to use it with certain assumptions in place. I look at it and talk about it with a focus on personal responsibility (I’ll talk about teams in a future article). I also tend to use it in situations where people have options to shift their mindset, not extremely oppressive situations.

I have tried not to use The Responsibility Process in contexts where safety was clearly missing or where there was major privilege, rank, and/or power at play. To shift, you need to be at choice to move along the states of the process to get to responsibility. You need to see options. Some may argue that we always have a choice, but if you are in a place of rank or privilege making that argument, your judgement may be impaired. Beyond business situations (where this can occur as well), consider this idea and how it might apply to many of the situations out in the world today.

An example: If you are in a situation with a terrible boss who yells at you and belittles you, with all power aligned to that person, you can’t find another position, and need to support yourself or family — the idea of simply choosing to move out of justify or blame would likely strike you as absurd. That said, let’s say you were able to move beyond blame and justify. You decide you are done with shame (I’m terrible and that’s why I get yelled at) and have moved past obligation (to do the job and help all the organization’s end customers). The responsibility mindset allows you to theoretically move past those — and you continue to look for another job, but also stay in the hostile environment because you are responsible for your family. Saying this is no easy task would be an understatement!

If you don’t have safety, getting to responsibility is a lot more challenging. If people are not safe, they will have fewer options and less ability to change mindsets. Increasing safety may be something to focus on if you want people to exercise more responsibility. That may increase the options people have to choose from.

Being Responsible

Write the steps down and paste them on your wall or your laptop. When you run into a situation, ask yourself where you are. Assess what it means for you to be in each step and what it would be like to move to the next one in the process. Focus on yourself and your mindset. Work on being in a responsibility mindset and see what it does for you! I’d love to hear how it turns out for you!

Subscribe to receive emails with my new posts and Agile Safari cartoons, follow me on Twitter, or check out my page for more ways to connect!

The post The Responsibility Process, Context, and Safety appeared first on Agile For All.

Categories: Blogs

Using Strength-based Questions in Retrospectives

Ben Linders - Tue, 09/22/2015 - 15:29
Agile retrospectives help teams to learn from how they are doing and find ways to improve themselves. Instead of learning from what went wrong, from problems, mistakes or failures, teams can also learn from things that went well, by asking strength-based questions in their retrospectives. Such questions will help them use their existing skills and experience to become great at doing things that they are already good at. Continue reading →
Categories: Blogs

Agile Bodensee, Constance, Germany, September 30 – October 1 2015

Scrum Expert - Tue, 09/22/2015 - 15:21
The Agile Bodensee conference is a two-day event focused on Agile and Scrum that takes place on the shore of Lake Constance (the Bodensee in German) in the southern part of Germany. The first day is dedicated to workshops and the second day to an open space discussion. Everything is in German. The Agile Bodensee conference follows the open space format for conferences. Open ...
Categories: Communities

Dependencies Are Evil

Leading Agile - Mike Cottmeyer - Tue, 09/22/2015 - 12:00

It seems inarguable to me that dependencies limit agility.

If I am able to make a decision on my own that only involves me, I have full freedom of movement. If I want to go to the local pub for dinner, and on my way, decide that I want to go to a steakhouse, I go to the steakhouse. I don’t have to call anyone, rearrange a schedule, broker agreement or anything. I just go to the steakhouse.

Now let’s say that I am meeting my wife for dinner. She is on her way to the pub, I am on my way to the pub. I decide that I want to go to the steakhouse. I can call her. I may get a hold of her, I might not. She may be up for steak, she might not. Either way, I have now lost some of my ability to decide.

I’ve created a dependency on my wife around dinner.

Now let’s say my wife and I are going to dinner with our in-laws. I love my in-laws but they are getting older and don’t like plans to change. I know they aren’t up for the pub, but they love steak and really, really like to make plans in advance. If I am going to have steak with my in-laws there is no room for plans to change… ever.

My wife and I have created a rather hard dependency with my in-laws.

In short, when it’s just me, I am autonomous. The more people I have to coordinate with, the less I am able to change direction. Depending on who is going with me, I may not be able to change at all.

Dependencies will inevitably limit my ability to act autonomously. Dependencies have limited my personal agility.

Again, this just seems inarguable to me.

When it comes to dealing with dependencies, I have two choices. I can either choose to break dependencies or I can choose to manage them. I cannot pretend they don’t exist.

If I go to the steakhouse, and ditch my wife waiting for me at the pub, there are consequences.

If I back out on my in-laws and go to the pub, there are consequences.

The same thing works with agile teams.

If I am a small Scrum team, I have 6-8 people, operate off a single backlog, have a single Product Owner, have a single ScrumMaster, and don’t share any of my people with any other Scrum teams, I can be totally agile. I can burn my backlog at my own rate, change direction, and basically do what we need to do.

Let’s say I start sharing team members between Scrum teams. Now, during Sprint planning, I have to accommodate their partial participation on my team. If their other team gets behind, that impacts my team, because the shared team member might not be available as planned to do my work.

Let’s say that one of the Scrum teams is a back-end team and the other is a front-end team. If I have to change the front-end and the back-end in order to get ready for the release, not only do I create a requirements dependency, but I also have a scheduling dependency. Both have to be ready on-time for the release to happen.

I know I said at the beginning of this post that dependencies were evil, but I’m not being judgmental here.

If you have dependencies like this, you have to manage them.

That said, they will limit your agility.

If you pretend otherwise, you are fooling yourself.

The thing about dependencies…

Many dependencies are self-inflicted. They get introduced by how we architect the product. They get introduced by how we staff teams. They get introduced by how we bring work into the organization, by how we create roadmaps, and by how we sell changes to our customers.

Very few dependencies can’t be broken with sufficient time and money and attention.

Here is the deal though… from YOUR point of view, dependencies might not be able to be broken. In your role, YOU might not be able to break them. That said, SOMEONE can choose to invest the time, money, or attention necessary to break them.

We just have to find those people, and they have to decide if it’s less expensive to break dependencies, or to keep managing them.

So again… dependencies are evil.

Dependencies reduce freedom of movement, thus agility.

They won’t stop you from doing Scrum, but they will stop you from getting benefit from Scrum.

You can choose to manage dependencies.

You can choose to break dependencies.

If you can’t break them, I guarantee someone in your organization can.

It just depends on whether it’s worth it to break them.

What you can’t do is pretend Scrum will work well in their presence.

Dependencies break Scrum.

Dependencies are evil.

The post Dependencies Are Evil appeared first on LeadingAgile.

Categories: Blogs

Publishing ES6 code to npm

Xebia Blog - Tue, 09/22/2015 - 08:58

This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week!

Most of the software we work with at Xebia is open source. Our primary expertise is in open source technology, which spans our entire application stack. We don’t just consume open source projects; we also contribute back to them and occasionally release some of our own. Releasing an open source project doesn’t just mean making the GitHub repository public. If you want your project to be used, it should be easy to consume. For JavaScript this means publishing it to npm, the package manager for JavaScript code.

Nowadays we write our JavaScript code using the ES6 syntax. We can do this because we’re using Babel to compile it down to ES5 before we run it. When you’re publishing the code to npm, however, you can’t expect your package consumers to use Babel. In fact, if you’re using the Require Hook, it excludes anything under node_modules by default and thus will not attempt to compile that code.


The reality is that we have to compile our code to ES5 before publishing to npm, which is what people writing libraries in CoffeeScript have been doing for a long time. Luckily npm provides a helpful way to automate this process: the prepublish script. There’s a whole list of scripts which you can use to automate your npm workflow. In package.json we simply define the scripts object:

  "name": "my-awesome-lib",
  "version": "0.0.0",
  "scripts": {
    "compile": "babel --optional runtime -d lib/ src/",
    "prepublish": "npm run compile"
  "main": "lib/index.js",
  "devDependencies": {
    "babel": "^5.8.23",
    "babel-runtime": "^5.8.24"

This will make npm automatically run the compile script when we run `npm publish` on the command line, which in turn triggers Babel to compile everything in ./src to ./lib (the de-facto standard location for compiled sources in npm packages, originating from CommonJS). Finally we also define the main entry point of our package, which denotes the file to import when require('my-awesome-lib') is called.

An important part of the above configuration is the runtime option. This will include polyfills for ES6 features such as Promise and Map into your compiled sources.
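
From the consumer’s side nothing extra is needed. A hypothetical consumer simply requires the package and gets the compiled ES5:

var awesome = require('my-awesome-lib'); // resolves to lib/index.js, already plain ES5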

Ignoring files

It’s easy to simply publish all of our source code to npm, but we don’t have to. In fact it’s much nicer to only publish the compiled sources to npm, so package consumers only have to download the files they need. We can achieve this using a file called .npmignore:

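For example, assuming the uncompiled ES6 sources live in ./src, a minimal .npmignore could contain just:

src/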

This file is very similar to .gitignore, in fact npm will fall back to .gitignore if .npmignore is not present. Since we probably want to ignore ./lib in our git repository because it holds compiled sources, npm would normally not even publish that directory, so we have to use .npmignore to get the desired result. However if you want your library to also be directly consumable via a git URL you can commit ./lib to git as well.


Now that everything is set up, the only thing left to do is publish it. Well, not so fast. First we should specify our version number:

npm version 1.0.0

This will update package.json with the new version number, commit this change to git and make a git tag for v1.0.0, all in one go. This of course assumes you're using git, otherwise it will just update package.json. An even better way is to have npm automatically determine the next version number by specifying the scope of change. We should also specify a commit message:

npm version patch -m "Bump to %s because reasons"

For more options check out the npm documentation. Now finish up by pushing your new version commit and tag to GitHub and publishing the package to npm:

git push --follow-tags
npm publish
Categories: Companies

Lunch for Integrating Teams

Lunch is a funny thing for many developers. With a tendency towards introversion, people often get into the habit of eating alone, either with food from home or by running out to pick something up and bringing it back. Lunch as a social outing is a bit unusual and unexpected.

I’ve viewed eating with others as one of my favorite parts of the working day for a long time. Even when I work remotely, I often schedule lunches with former colleagues or friends to keep up with them. At my present gig I work in the office Mondays, Tuesdays, and Thursdays. Tuesdays are a company-sponsored team lunch, which has been a great way to get everyone out eating together. Mondays and Thursdays I typically get a bite with another one of our developer teams. I enjoy the conversations and it’s a free way to keep up with another team and explain what’s going on with our project. I know there’s a small cost involved in eating out, but it’s only two days a week and far outweighed by sharing knowledge across the organization. And I do like to get out of the office.

So if, as a developer, you tend towards lunch from home and eating at your desk, try lunch out with some co-workers at least once a week. It’s a cheap experiment and a great way to keep up with your organization.

Categories: Blogs

SparkR: Add new column to data frame by concatenating other columns

Mark Needham - Tue, 09/22/2015 - 00:30

Continuing with my exploration of the Land Registry open data set using SparkR I wanted to see which road in the UK has had the most property sales over the last 20 years.

To recap, this is what the data frame looks like:

./spark-1.5.0-bin-hadoop2.6/bin/sparkR --packages com.databricks:spark-csv_2.11:1.2.0
> sales <- read.df(sqlContext, "pp-complete.csv", "com.databricks.spark.csv", header="false")
> head(sales)
                                      C0     C1               C2       C3 C4 C5
1 {0C7ADEF5-878D-4066-B785-0000003ED74A} 163000 2003-02-21 00:00  UB5 4PJ  T  N
2 {35F67271-ABD4-40DA-AB09-00000085B9D3} 247500 2005-07-15 00:00 TA19 9DD  D  N
3 {B20B1C74-E8E1-4137-AB3E-0000011DF342} 320000 2010-09-10 00:00   W4 1DZ  F  N
4 {7D6B0915-C56B-4275-AF9B-00000156BCE7} 104000 1997-08-27 00:00 NE61 2BH  D  N
5 {47B60101-B64C-413D-8F60-000002F1692D} 147995 2003-05-02 00:00 PE33 0RU  D  N
6 {51F797CA-7BEB-4958-821F-000003E464AE} 110000 2013-03-22 00:00 NR35 2SF  T  N
  C6  C7 C8              C9         C10         C11
3  L  58      WHELLOCK ROAD                  LONDON
4  F  17           WESTGATE     MORPETH     MORPETH
                           C12            C13 C14
1                       EALING GREATER LONDON   A
2               SOUTH SOMERSET       SOMERSET   A
3                       EALING GREATER LONDON   A
6                SOUTH NORFOLK        NORFOLK   A

This document explains the data stored in each field and for this particular query we’re interested in fields C9-C12. The plan is to group the data frame by those fields and then sort by frequency in descending order.

When grouping by multiple fields it tends to be easiest to create a new field which concatenates them all and then group by that.

I started with the following:

> sales$address = paste(sales$C9, sales$C10, sales$C11, sales$C12, sep=", ")
Error in as.character.default(<S4 object of class "Column">) :
  no method for coercing this S4 class to a vector

Not so successful! Next I went even more primitive:

> sales$address = sales$C9 + ", " + sales$C10 + ", " + sales$C11 + ", " + sales$C12
> head(sales)
                                      C0     C1               C2       C3 C4 C5
1 {0C7ADEF5-878D-4066-B785-0000003ED74A} 163000 2003-02-21 00:00  UB5 4PJ  T  N
2 {35F67271-ABD4-40DA-AB09-00000085B9D3} 247500 2005-07-15 00:00 TA19 9DD  D  N
3 {B20B1C74-E8E1-4137-AB3E-0000011DF342} 320000 2010-09-10 00:00   W4 1DZ  F  N
4 {7D6B0915-C56B-4275-AF9B-00000156BCE7} 104000 1997-08-27 00:00 NE61 2BH  D  N
5 {47B60101-B64C-413D-8F60-000002F1692D} 147995 2003-05-02 00:00 PE33 0RU  D  N
6 {51F797CA-7BEB-4958-821F-000003E464AE} 110000 2013-03-22 00:00 NR35 2SF  T  N
  C6  C7 C8              C9         C10         C11
3  L  58      WHELLOCK ROAD                  LONDON
4  F  17           WESTGATE     MORPETH     MORPETH
                           C12            C13 C14 address
1                       EALING GREATER LONDON   A      NA
2               SOUTH SOMERSET       SOMERSET   A      NA
3                       EALING GREATER LONDON   A      NA
6                SOUTH NORFOLK        NORFOLK   A      NA

That at least compiled, but all addresses were ‘NA’, which isn’t what we want. After a bit of searching I realised that there was a concat function that I could use for exactly this task:

> sales$address = concat_ws(sep=", ", sales$C9, sales$C10, sales$C11, sales$C12)
> head(sales)
                                      C0     C1               C2       C3 C4 C5
1 {0C7ADEF5-878D-4066-B785-0000003ED74A} 163000 2003-02-21 00:00  UB5 4PJ  T  N
2 {35F67271-ABD4-40DA-AB09-00000085B9D3} 247500 2005-07-15 00:00 TA19 9DD  D  N
3 {B20B1C74-E8E1-4137-AB3E-0000011DF342} 320000 2010-09-10 00:00   W4 1DZ  F  N
4 {7D6B0915-C56B-4275-AF9B-00000156BCE7} 104000 1997-08-27 00:00 NE61 2BH  D  N
5 {47B60101-B64C-413D-8F60-000002F1692D} 147995 2003-05-02 00:00 PE33 0RU  D  N
6 {51F797CA-7BEB-4958-821F-000003E464AE} 110000 2013-03-22 00:00 NR35 2SF  T  N
  C6  C7 C8              C9         C10         C11
3  L  58      WHELLOCK ROAD                  LONDON
4  F  17           WESTGATE     MORPETH     MORPETH
                           C12            C13 C14
1                       EALING GREATER LONDON   A
2               SOUTH SOMERSET       SOMERSET   A
3                       EALING GREATER LONDON   A
6                SOUTH NORFOLK        NORFOLK   A
1                             READING ROAD, NORTHOLT, NORTHOLT, EALING
3                                      WHELLOCK ROAD, , LONDON, EALING
4                           WESTGATE, MORPETH, MORPETH, CASTLE MORPETH

That’s more like it! Now let’s see which streets have sold the most properties:

> byAddress = summarize(groupBy(sales, sales$address), count = n(sales$address))
> head(arrange(byAddress, desc(byAddress$count)), 10)
                                                            address count
1                          BARBICAN, LONDON, LONDON, CITY OF LONDON  1398
8                             QUEENSTOWN ROAD, , LONDON, WANDSWORTH  1153
10                      QUEENSTOWN ROAD, LONDON, LONDON, WANDSWORTH  1079

Next we’ll drill into the data further but that’s for another post.

Categories: Blogs

SparkR: Error in invokeJava(isStatic = TRUE, className, methodName, …) : java.lang.ClassNotFoundException: Failed to load class for data source: csv.

Mark Needham - Tue, 09/22/2015 - 00:06

I’ve been wanting to play around with SparkR for a while, and over the weekend I decided to explore a large Land Registry CSV file containing all the sales of properties in the UK over the last 20 years.

First I started up the SparkR shell with the CSV package loaded in:

./spark-1.5.0-bin-hadoop2.6/bin/sparkR --packages com.databricks:spark-csv_2.11:1.2.0

Next I tried to read the CSV file into a Spark data frame by modifying one of the examples from the tutorial:

> sales <- read.df(sqlContext, "pp-complete.csv", "csv")
15/09/20 19:13:02 ERROR RBackendHandler: loadDF on org.apache.spark.sql.api.r.SQLUtils failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
  java.lang.ClassNotFoundException: Failed to load class for data source: csv.
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:67)
	at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:87)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
	at org.apache.spark.sql.api.r.SQLUtils$.loadDF(SQLUtils.scala:156)
	at org.apache.spark.sql.api.r.SQLUtils.loadDF(SQLUtils.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
	at java.lang.reflect.Method.invoke(
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:132)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:79)
	at org.apache.spark.api.r.RBackendH

As far as I can tell I have loaded in the CSV data source so I’m not sure why that doesn’t work.

However, I came across this github issue which suggested passing in the full package name as the 3rd argument of ‘read.df’ rather than just ‘csv’:

> sales <- read.df(sqlContext, "pp-complete.csv", "com.databricks.spark.csv", header="false")
> sales
DataFrame[C0:string, C1:string, C2:string, C3:string, C4:string, C5:string, C6:string, C7:string, C8:string, C9:string, C10:string, C11:string, C12:string, C13:string, C14:string]

And that worked much better! We can now carry on and do some slicing and dicing of the data to see if there are any interesting insights.

Categories: Blogs

Strategic Context

Agile Elements - Mon, 09/21/2015 - 22:24

I’ve used the Balanced Scorecard (BSC) and similar approaches for a number of years to organize the elements of, and facilitate teams in creating, a Strategic Plan. Recently, a CEO suggested he preferred Objectives, Goals, Strategies and Measures (OGSM). No worries: the terms are a little different, but the pieces are similar. I went about mapping the results we had gotten from our BSC facilitation to the OGSM model:


This exercise got me a little concerned since the result was missing much of the context in the strategic plan. We have objectives, but why do we have them and how do they tie to our mission? What customers will we target? How will we win in the market? How do we differentiate? What strategic themes do initiatives support? Have we covered all the business perspectives needed for long-term success?

To me, being able to communicate why we have the pieces of our strategy and strategic plan is as important as the pieces themselves. OGSM strips away the structure for this shared understanding. A recent article in HBR makes the resulting problem clear. In Stop Distinguishing Between Execution and Strategy, Roger Martin makes a simple but powerful observation. Strategy, he says, is about “making choices under uncertainty and competition.” Given this, we need to arm the entire organization with the ability to make the best choices at all levels. When we start taking pieces of the plan out of context, he reasons, we dangerously separate doing from making choices and therefore, from the strategy.

In the military, we were similarly fond of restating Field Marshal Helmuth von Moltke, who famously said, “No plan survives first enemy contact.” To counter this, we made sure all operational orders communicated context by starting with a Situation Report, including a clear Mission Statement (which is always given twice), and communicating Intent and Concept of the Operation. This context is given before the more specific Maneuver Instructions. To maximize any unit’s chances of success, everyone, down to the lowest private, needs to know the context so they can make the right choices as unanticipated events unfold in the heat of battle.

OGSM’s shortcoming is that it starts with maneuver instructions. It stops short of asking and answering “Why?” I suggest keeping contextual elements from BSC (or a similar organizing structure) to add intent to strategic planning so that everyone, at all levels in the organization, can best accomplish the mission.

Categories: Blogs

Link: Agile Coaching and Flight Instruction – An Emotional Connection

Learn more about transforming people, process and culture with the Real Agility Program

My friend Mike Caspar has another great blog post: Similarities between Agile Coaching and Flight Instruction.  Check it out!

Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Link: Agile Coaching and Flight Instruction – An Emotional Connection appeared first on Agile Advice.

Categories: Blogs

Three Thoughts From August’s DFW Scrum Meetup

DFW Scrum User Group - Mon, 09/21/2015 - 15:30
This writeup comes from Quentin Donnellan who originally posted it on his blog on August 18, 2015.  I’ve got a flight to Kansas City to meet the SpiderOak marketing team tomorrow morning, so I’ll make this quick! I just got back home from my … Continue reading →
Categories: Communities

How To Do User Notifications From RabbitMQ Messages

Derick Bailey - new ThoughtStream - Mon, 09/21/2015 - 13:30

It’s a common question with a dozen variations… “How do I store messages for my users that aren’t connected, with my message queue / broker?”

There will be some subtle differences in how it’s asked, but the question often includes details such as a web server, a RabbitMQ instance, and clients connected via websockets.

Updating A User Via WebSockets

When a user is connected to a web server via websockets, it is fairly simple to send a message to them after receiving a message from RabbitMQ. The general workflow often looks like this:

  • User’s browser connects to SignalR/ on web server
  • Web server checks a queue for updates that happen during a long running process
  • When a message for a logged in user comes in, broadcast it through the websocket to the user

But, what if the user isn’t logged in? What does the next part of this process look like?

  • If the user is not logged in … store the message? leave it in the queue? or ?

There are options here, which all come down to storing the message for the user and delivering it to them the next time they log in. The question, then, is where to store the message.

Re-queue The Message?

If a message for a specific user comes in through RabbitMQ, and the web server sees that the user is offline, what do you do with it?

Nack The Message Back To The Queue?

One option would be to re-queue the message – have the web server ‘nack’ the message back on to the queue. Except this creates a problem of messages thrashing around between RabbitMQ and the web server.

When your consumer is handed a message from the queue and nacks it, it goes back into the queue. Eventually, the queue will say “hey, this message is available. go process it!” for the same message, again. It will be delivered to a consumer again, which will see the user is still offline and send it back to the queue.

Now imagine this scenario for 1,000 users in your system. You may have 20 or 30 messages for each user and find yourself looking at 30,000 messages just going round and round in circles on your servers, eating up processing and memory and network resources. 

Queue Per User?

You could potentially work around this by having a queue per user. You could set up your code so that the web server only subscribes to a queue for the user when that user is logged in and available. 

This doesn’t solve the problem.

Now you have 30,000 messages distributed across 1,000 queues – a bad situation in itself. This requires RabbitMQ to persist the queues and the messages within the queues. And what happens when all 1,000 users log in to your system at the same time? Now your web server has 1,000 queue subscriptions to deal with. 

The Real Problem …

Repeat after me:

RabbitMQ is not a database. RabbitMQ is a message broker and queueing system.

Yes, it has a datastore built in to it for message persistence. That storage system is purpose built to support the features of RabbitMQ as a message queue. It is not a generalized database like MongoDB, Oracle, SQL Server, MySQL, Postgres, etc. 

Stop trying to use RabbitMQ as a database. Use a database for your application’s database. 

Store The Message In A Database

Getting back to the original scenario, what should you do when the user is not online and you can’t send the message to them immediately?

Store the message in a database.

Add a field to the database record that says who this message belongs to. When the user reconnects later, query the database for any messages that this user needs to see and send them along at that time.

The full process started above, then becomes this:

  • User’s browser connects to SignalR/ on web server
  • Web server checks a queue for updates that happen during a long running process
  • When a message for a logged in user comes in
    • If the user is logged in, broadcast the message through the websocket to the user
    • If the user is not logged in, store the message in a database
  • When the user logs in again, query the database and send all waiting messages

It’s what you would have done before the idea of a message queue came into play, right? It should be what you would do now that you have a message queue, as well.
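
For concreteness, here is a minimal sketch of that decision point in Node-style JavaScript. It is only an illustration, not the original poster’s implementation: it uses the amqplib client, an in-memory Map as a stand-in for a real database table, and hypothetical isUserOnline/pushOverWebsocket helpers in place of a real websocket layer.

const amqp = require('amqplib');

// Stand-in for a real database table of pending messages, keyed by user id.
const pendingMessages = new Map();

// Hypothetical websocket layer -- replace with SignalR, socket.io, etc.
const onlineUsers = new Set();
const isUserOnline = (userId) => onlineUsers.has(userId);
const pushOverWebsocket = (msg) => console.log('push to', msg.userId, msg.body);

async function consumeUserNotifications() {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue('user-notifications');

  await channel.consume('user-notifications', (raw) => {
    if (raw === null) return; // consumer was cancelled by the broker
    const msg = JSON.parse(raw.content.toString());

    if (isUserOnline(msg.userId)) {
      pushOverWebsocket(msg); // online: deliver immediately
    } else {
      // Offline: persist for later instead of nack-ing back onto the queue.
      const waiting = pendingMessages.get(msg.userId) || [];
      waiting.push(msg);
      pendingMessages.set(msg.userId, waiting);
    }

    channel.ack(raw); // either way, the broker is done with this message
  });
}

// On reconnect, drain everything that was stored while the user was away.
function deliverPending(userId) {
  (pendingMessages.get(userId) || []).forEach(pushOverWebsocket);
  pendingMessages.delete(userId);
}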

No Golden Hammers

There are no golden hammers, no silver bullets, no magic wands to solve all the problems your system faces.

Yes, a message queue adds new capabilities to your system. But it does not fundamentally change the existing feature set or tools that your system already uses.

A database is still the best way to store data, long term. A query for data related to a specific user or other object in your system is still best handled by a database with a query language.

A message queue is best suited for communication between processes and systems, in an asynchronous manner. 

Use the tools you have for the purpose for which they were built. 

Categories: Blogs

The First Critical Steps Toward Scaling Your Software Organization

Leading Agile - Mike Cottmeyer - Mon, 09/21/2015 - 12:00

I’ve noticed a pattern I want to share with you guys.

When a company is small, still probably led by its founder, likely around 20-40 people… there is a great sense of camaraderie. You’ve got folks that all know each other, all working toward a common goal, all trying to solve the same problems. To a large extent folks are willing to work across role boundaries and do whatever it takes to get the work done. The code is new and fresh, and we probably haven’t accumulated a bunch of technical debt. We are able to move fast and get stuff done.

As the company finds success and starts to grow, they begin adding more people to the team. Often, as they add more people, things become less clear, people aren’t always as connected to the core purpose of the organization, communication slows down… and because it just seems to be the way things are done… they decide people need a manager. They inevitably hire a manager for the developers, a manager for the quality folks, and a manager for the business analysts.

I think it’s human nature that we have a tendency to optimize around the stuff we are responsible for delivering. Back in the early days, everyone was responsible for delivering the product, but the product got too big. Now, as a dev manager, I am responsible for delivering great code. As the QA manager, I am responsible for delivering great test plans and test execution and finding defects. As the BA manager, I am responsible for making sure we have the best requirements.

Somehow as we grew, we forgot to make someone responsible for the product.

To remedy this situation, we begin hiring project managers. In theory, if the functional managers are responsible for optimizing their function, the project manager will be responsible for optimizing the delivery of the project. But now we have a matrixed organization, and people are rarely inclined to make the project side the strong side of the matrix. The functional manager is still largely in charge of the work, with the project manager relegated to performing a support function for the project.

If the project manager is really good, they will build relationships, and work with the management team to help get work done. They will help keep track of stuff and make sure work is progressing. That said, they still don’t have any power on the project, just influence. If they aren’t particularly good, you get folks running around asking for estimates, building Gantt charts, filling out project documentation, and providing meaningless status reports.

They become the checklist managers on the project and get in everyone’s way.

In order for this system to work, we have to slow down the speed of the game. We have to be really clear on all our requirements up front. We have to make sure we are providing great estimates. We have to make sure we know who is going to do the work before the work is even approved. In reality, we find that more often than not, we don’t know the requirements up front, we don’t have very good mechanisms for estimating, and we don’t know who is going to do the work.

If we are operating in a market where, at least to some extent, the requirements need to emerge… we need to experiment with new technology… we need to have some room to inspect and adapt… this is going to be VERY, VERY frustrating for the project manager. Think about the incongruence. Think about all the things that would have to go right for the project manager to even have a shot at getting the plan somewhat close to realistic. Think about all the other projects the functional team might be working on across the matrix that would have to go right too, all of which are beyond the project manager’s control.

Honestly… in most of the software product development and IT organizations we work in… this is a recipe for failure. If you are able to slow down the game enough to do good project management… you lose. If you don’t manage your schedule, cost, and scope… you are out of control and you lose. Think about all the variables at play that have to be managed. Think about your actual ability to get them right. Think about the time and money necessary to get them right.

I don’t think you have the ability, the time, or the money to get them right.

Okay, so what do we do?

I think we need to go back to the beginning, to look back on how we started, to look for ways to organize around small cross-functional units that have all the people and skills necessary to deliver the product. We need to make sure the unit has a clear vision around what to build and autonomy around how to build it. This means that we need to stop organizing around what we do and start to organize around what we build. Instead of organizing around what we do, organize around the value we produce.

Agilists tend to understand this, but the advice is always to organize around end-user value. The advice is to organize around the product or the feature set. The challenge is that some products are just too big and complicated. They are built on legacy architectures. They are serving multiple internal and external customers. They don’t have sufficient safety in the code base. They aren’t instrumented with tests and we don’t have the infrastructure necessary to do continuous integration or continuous deployment.

I think about the problem as similar to what we face when we take a tightly coupled legacy system and refactor it into a service-oriented architecture. The service is the value. The team is the group of folks that deliver the service. The service has to operate behind a contract. The contract has to be maintained and validated with tests. We need to break dependencies between services. Over time we need to change how we invest in projects, and products, and the platform.

In effect the service over time becomes the product.

How you do all this is a topic for a different blog post… but this is the problem that our company is focused on solving. It’s interesting, the patterns are understood both at the team level and at scale… it’s the organizational change management that is the hard part to solve. How do you get people to understand it, then how do you get them to do it? How do you get them to see the nature of the problem and get them to work together to fix it?

Anyway… getting back to the point of the blog post.

I think there is one critical organizational mistake that companies make when they first begin to scale. They scale around functional area. When things get hard, they hire project managers to make it better. For project managers to be successful, they have to slow the game down such that the projects can be managed. Work starts moving slower, so we pack more things into the pipeline, hire more project managers to manage the work, and basically grind everything to a halt.

Here is the pattern I’ve seen at least 5 times in the last year or so… company is great until about 40, functional until about 150 but struggling… company grinds to a halt at 400.

The answer is to make decisions when you’re small that allow you to scale around the value you provide rather than the activity you do.

If you’ve already organized around the activity you do, you have to refactor your organization around the products and services you provide.

In my opinion, it’s the only way.

In a fast moving product delivery company… I can’t see any way to be fast, responsive to change, and able to scale… if you choose to organize around function and rely on project managers to stitch together the value. It’s not the project manager’s fault. It’s an impossible problem for them to solve. We can talk about where project managers and project management fit… and I do believe there is a valuable role for them, but… in this context, it just doesn’t work.

Any examples to the contrary?

The post The First Critical Steps Toward Scaling Your Software Organization appeared first on LeadingAgile.

Categories: Blogs

The Union-Find Algorithm in Scala: a Purely Functional Implementation

Xebia Blog - Mon, 09/21/2015 - 10:29

In this post I will implement the union-find algorithm in Scala, first in an impure way and then in a purely functional manner, so without any state or side effects. Then we can check both implementations and compare the code and also the performance.

The reason I chose union-find for this blog is that it is relatively simple. It is a classic algorithm that is used to solve the following problem: suppose we have a set of objects. Each of them can be connected to zero or more others. And connections are transitive: if A is connected to B and B is connected to C, then A is connected to C as well. Now we take two objects from the set, and we want to know: are they connected or not?
This problem comes up in a number of areas, such as in social networks (are two people connected via friends or not), or in image processing (are pixels connected or separated).
Because the total number of objects and connections in the set might be huge, the performance of the algorithm is important.

Quick Union

The implementation I chose is the so called Quick Union implementation. It scales well but there are still faster implementations around, one of which is given in the references below the article. For this post I chose to keep things simple so we can focus on comparing the two implementations.

The algorithm keeps track of connected elements with a data structure: it represents every element as a Node which points to another element to which it is connected. Every Node points to only one Node it is connected to, and this Node is called its parent. This way, groups of connected Nodes form trees. The root of such a connected tree is a Node which has an empty parent property.
When the question is asked if two Nodes are connected, the algorithm looks up the roots of the connected trees of both Nodes and checks if they are the same.

The tricky part in union-find algorithms is to be able to add new connections to a set of elements without losing too much performance. The data structure with the connected trees enables us to do this really well. We start by looking up the root of both elements, and then set the parent element of one tree to the root of the other tree.

Some care must still be taken when doing this, because over time connected trees might become unbalanced. Therefore the size of every tree is kept in its root Node; when connecting two subtrees, the smaller one is always added to the larger one. This guarantees that all subtrees remain balanced.

This was only a brief description of the algorithm but there are some excellent explanations on the Internet. Here is a nice one because it is visual and interactive: visual algo

The Impure Implementation

Now let's see some code! The impure implementation:

import scala.annotation.tailrec

class IUnionFind(val size: Int) {

  private case class Node(var parent: Option[Int], var treeSize: Int)

  private val nodes = Array.fill[Node](size)(new Node(None, 1))

  def union(t1: Int, t2: Int): IUnionFind = {
    if (t1 == t2) return this

    val root1 = root(t1)
    val root2 = root(t2)
    if (root1 == root2) return this

    val node1 = nodes(root1)
    val node2 = nodes(root2)

    if (node1.treeSize < node2.treeSize) {
      node1.parent = Some(t2)
      node2.treeSize += node1.treeSize
    } else {
      node2.parent = Some(t1)
      node1.treeSize += node2.treeSize
    }

    this
  }

  def connected(t1: Int, t2: Int): Boolean = t1 == t2 || root(t1) == root(t2)

  @tailrec
  private def root(t: Int): Int = nodes(t).parent match {
    case None => t
    case Some(p) => root(p)
  }
}

As you can see I used an array of Nodes to represent the connected components. Most textbook implementations use two integer arrays: one for the parents of every element, and the other one for the tree sizes of the components to which the elements belong. Memory-wise that is a more efficient implementation than mine. But apart from that the concept of the algorithm stays the same, and in terms of speed the difference doesn’t matter much. I do think that using Node objects is more readable than having two integer arrays, so I chose the Nodes.

The purely functional implementation

import scala.annotation.tailrec

case class Node(parent: Option[Int], treeSize: Int)

object FUnionFind {
  def create(size: Int): FUnionFind = {
    val nodes = Vector.fill(size)(Node(None, 1))
    val nodes2 = nodes
    new FUnionFind(nodes)
  }
}

class FUnionFind(nodes: Vector[Node]) {

  def union(t1: Int, t2: Int): FUnionFind = {
    if (t1 == t2) return this

    val root1 = root(t1)
    val root2 = root(t2)
    if (root1 == root2) return this

    val node1 = nodes(root1)
    val node2 = nodes(root2)
    val newTreeSize = node1.treeSize + node2.treeSize

    val (newNode1, newNode2) =
      if (node1.treeSize < node2.treeSize) {
        val newNode1 = Node(Some(t2), newTreeSize)
        val newNode2 = Node(node2.parent, newTreeSize)

        (newNode1, newNode2)
      } else {
        val newNode2 = Node(Some(t1), newTreeSize)
        val newNode1 = Node(node1.parent, newTreeSize)

        (newNode1, newNode2)
      }

    val newNodes = nodes.updated(root1, newNode1).updated(root2, newNode2)
    new FUnionFind(newNodes)
  }

  def connected(t1: Int, t2: Int): Boolean = t1 == t2 || root(t1) == root(t2)

  @tailrec
  private def root(t: Int): Int = nodes(t).parent match {
    case None => t
    case Some(p) => root(p)
  }
}

Compared to the first implementation, some parts remained the same, such as the Node class, except for the fact that it is not an inner class anymore. The connected and root methods also did not change.
What did change is the method which deals with updating the connections: union. In the purely functional implementation we can’t update any array, so instead union creates a new FUnionFind object and returns it at the end. Two new Node objects also need to be created when subtrees are merged: the root of the smaller one because it gets a new parent, and the root of the larger one because its tree size needs to be increased.
Perhaps surprisingly, the pure implementation needs more lines of code than the impure one.

The Performance

The pure implementation has to do a bit of extra work when it creates the new objects in its union method. The question is how much this costs in terms of performance.
To find this out, I ran both implementations through a series of performance tests (using ScalaMeter) where I added a large number of connections to a set of objects. I added an (impure) Java 8 implementation to the test as well.
Here are the results:

Nr of elements and connections   Impure   Pure     Java 8
10000                            2.2 s    3.8 s    2.3 s
15000                            4.4 s    7.9 s    4.2 s
20000                            6.2 s    10.3 s   6.3 s

Not surprisingly, the time grows with the number of connections and elements. The growth is a bit faster than linear; that’s because the asymptotic time complexity of the quick union algorithm is of the order n log(n).
The pure algorithm is about 65% slower than the impure implementation. The cause is clear: in every call to union the pure algorithm has to allocate and garbage collect three extra objects.

For completeness I added Java 8 to the test too. The code is not given here but if you're interested, there's a link to the complete source below the article. Its implementation is really similar to the Scala version.


Purely functional code has a couple of advantages over non-pure implementations: because of the lack of side effects it can be easier to reason about blocks of code, and concurrency becomes easier because there is no shared state.
In general it also leads to more concise code, because collection methods like map and filter can easily be used. But in this example that was not the case; the pure implementation even needed a few extra lines.
The biggest disadvantage of the pure union-find algorithm was its performance. Whether that is a showstopper, or whether the better concurrency behavior of the pure implementation outweighs the disadvantage, depends on the requirements of the project where the code is used.

Explanation of a faster union-find with path compression
All the source code in the article, including the tests

Categories: Companies

Presentation on Agile Retrospectives from Agile Greece Summit published

Ben Linders - Mon, 09/21/2015 - 07:22
The presentation that I gave at the first Agile Greece Summit about “The Why, What and How of Agile Retrospectives” has been published. Continue reading →
Categories: Blogs

Resource Efficiency vs. Flow Efficiency, Part 5: How Flow Changes Everything

Johanna Rothman - Sun, 09/20/2015 - 23:15

The discussion so far:


When you move from resource efficiency (experts and handoffs from expert to expert) to flow efficiency (team works together to finish work), everything changes.


The system of work changes from the need for experts to shared expertise.

The time that people spend multitasking should decrease or go to zero because the team works together to finish features. The team will recognize when they are done—really done—with a feature. You don’t have the “It’s all done except for…” problem.

Managers don’t need to manage performance. They manage the system, the environment that makes it possible for people to perform their best. Managers help equip teams to manage their own performance.

The team is accountable, not a person. That increases the likelihood that the team will estimate well and that the team delivers what they promised.

If you are transitioning to agile, and you have not seen these things occur, perform a retrospective. Set aside at least two hours and understand your particular challenges. I like Agile Retrospectives: Making Good Teams Great; Getting Value out of Agile Retrospectives – A Toolbox of Retrospective Exercises; and The Retrospective Handbook: A Guide for Agile Teams for creating a retrospective that will work for you. You have unique challenges. Learn about them so you can address them.

I hope you enjoyed this series. Let me know if you have questions or comments.

Categories: Blogs

Scrum as an agent of culture change

Agile For All - Bob Hartman - Sun, 09/20/2015 - 23:05

“Culture eats strategy for breakfast.”

Attributed to Peter Drucker by Dick Clark, former CEO and Chairman of Merck


Peter Drucker invented most modern management practices. He was an in-demand coach to hundreds of top leaders in the world’s largest organizations. When he suggested that “culture eats strategy for breakfast” to Dick Clark, he wasn’t actually promoting an either/or mindset. He was pointing out that the amount of time most executives spent working on strategy paled in comparison to the amount of time they spent working on culture. Drucker was suggesting that they would be better served by bringing the two into balance.

What is Culture?

A quick Amazon search turns up over 10,000 business- and management-related books with the word “Culture” in the title. I’ve personally heard dozens of trainers, coaches, and consultants extol the virtues of cultural change. And yet, how do you change culture? Culture is the set of unwritten rules and behaviors that are encouraged and followed in an organization or group. It’s not something that is written down; it’s a deeply ingrained, unconscious set of behavior patterns. Agile For All coach Jake Calabrese uses the metaphor of river beds, where years of flowing water carves channels in the earth. No one tells the water where to flow; it simply follows the path of least resistance, reinforcing that path and making it harder and harder for the water to take a different one. It’s possible to change the path that the water flows, but it takes conscious focus, time, and effort.


In most organizations with a typical hierarchy, the top leaders have a tremendous influence on the culture of the firm in two ways. First, they decide what to reward and punish by way of financial incentives, promotions, hiring, and firing. Second, they model the behavior that others are expected to follow. Leadership behaviors and beliefs are deeply ingrained habits – difficult to change even when done mindfully and with coaching. Think of it like asking someone to change their eating and exercise habits – it’s tough to do for the same reasons. Human psychology creates an infinitely complex set of reward- and punishment-based learned habits, most of which go completely unnoticed. So how do we change culture in an organization?

Check out Part 2 of this series to see some examples of how Scrum can be a pattern for creating sustainable, positive cultural change.

The post Scrum as an agent of culture change appeared first on Agile For All.

Categories: Blogs

Resource Efficiency vs. Flow Efficiency, Part 4: Defining Accountability

Johanna Rothman - Sun, 09/20/2015 - 22:46

This is the next in a series of posts about resource efficiency vs. flow efficiency.

Managers new to agile often ask, “How do I know people will be accountable?” Let’s tease apart the pieces of accountability:

  • Accountable to the project for finishing their own work
  • Accountable to their team for participating fully by doing their work
  • Accountable to help other people learn what they did by documenting their work
  • Accountable for meeting their estimates
  • Accountable for how the project spends money
  • … There might be more accountabilities

Let’s take the first two together:

Accountable for finishing their own work and for doing their work

I suspect that managers mean, “How do I know each person does their work? How do I know they aren’t asking other people to do their work? How do I know these people are learning to do their own work?”

Those are good questions. I have yet to see a single-person feature. At the least, a developer needs someone to test their work. Yes, it’s possible to test your own work. Some of you are quite good at that, I bet. Many people are not. If you want to prevent rework, build in checking in some form or another: pairing, design review, code review, unit tests, system tests, something.

So the part about “own work” seems a little micro-managing to me.

The part about doing their work is a little trickier. When people get stuck, what do they do? Often, they ask someone else for help. The help might be: talking to the duck, getting an explanation of what is going on in that area of the product, pairing on solving the problem, or even talking to more people.

There is no such thing as “cheating” at work. Managers are right to be concerned that people work to their capabilities. And if those people are stuck, I don’t want them mired in the problem. We want people to learn, not stay stuck.

Here’s the answer: You can’t know as a manager. You never did know as a manager.

The team knows who is hardworking and who slacks. The team knows how people are learning and if they are stuck. Even in waterfall days, the team knew. Managers may have had the illusion they knew, but they didn’t. Managers only knew what people told them.

Accountable for documentation

For me, the question is who will use the documentation? I am always amazed at how many managers want documentation other than end-user documentation and how few teams find this useful. In agile, you could make it part of the definition of done.

If people build documentation time into their estimates and the team finds the internal documentation useful, the team will pressure each person to deliver their documentation. The team knows whether the person documents their work.

Accountable for living up to estimates

When I ask managers if they want good estimates or completed features, they almost always say, “completed features.” Too often, the people multitask on fixing defects or production support work while they are supposed to work on a feature. I do understand the need for estimates, but holding people to their estimates? Sometimes, that’s about money.

Accountable for how the project spends money

Of all the accountabilities, this one makes the most sense to me. However, it doesn’t make sense in agile, where the customer/product owner is the one responsible. As long as the team completes features, the PO determines when either there is no more value in the backlog, or there is no more money for the project. With any luck, the team has met the release criteria by this time.

For me, the idea of accountability is a team-based idea, not a person-based idea. In flow efficiency, you can ask the team to be accountable for:

  • Finishing features
  • Knowing where they are with respect to the features in progress
  • Helping the PO understand the value of features and how long features will take
  • Providing an estimate, if necessary
  • If the team works in iterations, committing to work without overcommitting

When you look at accountability like this, it looks pretty different than a single person’s accountability.

Categories: Blogs
