Feed aggregator

Android: JUnit XML Reports with Gradle

A little madness - Wed, 03/18/2015 - 08:32

The Android development tools project has seen big changes over the last year. The original Eclipse ADT development environment was superseded late last year by Android Studio, a new IDE based on IntelliJ. Under the hood Android Studio also uses a new command line build system based on Gradle, replacing the previous Ant-based system. I’ve been keen to find out how these changes impact the integration of Android test reports with continuous integration servers like Pulse.

  • Android JUnit Report is redundant.
  • Run on-device Android tests with: ./gradlew connectedAndroidTest
  • Collect reports from: app/build/outputs/androidTest-results/connected/*.xml



The original Ant-based build system for Android didn’t produce XML test reports for instrumentation tests (i.e. those that run on-device), prompting me to create the Android JUnit Report project. Android JUnit Report produced XML output similar to the Ant JUnit task, making it compatible with most continuous integration servers. The good news is: Android JUnit Report is now redundant. The new Gradle-based build system produces sane XML test reports out of the box. In fact, they’re even more complete than those produced by Android JUnit Report, so should work with even more continuous integration servers.

The only downside is the documentation, which is a little confusing (there is still plenty of documentation for the old system floating about) and not very detailed. With a bit of experimentation and poking around I found how to run on-device (or emulator) tests and where the XML reports are stored.

With a default project layout as created by Android Studio, you get a built-in version of Gradle to use for building your project, launched via the gradlew wrapper script. To see available tasks, run:

$ ./gradlew tasks

(This will download a bunch of dependencies when first run.) Amongst plenty of output, take a look at the Verification Tasks section:

Verification tasks
check - Runs all checks.
connectedAndroidTest - Installs and runs the tests for Debug build on connected devices.
connectedCheck - Runs all device checks on currently connected devices.
deviceCheck - Runs all device checks using Device Providers and Test Servers.
lint - Runs lint on all variants.
lintDebug - Runs lint on the Debug build.
lintRelease - Runs lint on the Release build.
test - Run all unit tests.
testDebug - Run unit tests for the Debug build.
testRelease - Run unit tests for the Release build.

The main testing task, test, does not run on-device tests, only unit tests that run locally. For on-device tests you use the connectedAndroidTest task. Try it:

$ ./gradlew connectedAndroidTest
:app:processDebugAndroidTestJavaRes UP-TO-DATE
...

Total time: 33.372 secs

It’s not obvious, but this produces compatible XML reports under:

app/build/outputs/androidTest-results/connected/

with names based on the application module and device. In your continuous integration setup you can just collect all *.xml files in this directory for reporting.

Although the new build system has killed the need for my little Android JUnit Report project, this is a welcome development. Now all Android developers get better test reporting without an external dependency. Perhaps it will even encourage a few more people to use continuous integration servers like Pulse to keep close tabs on their tests!

Categories: Companies

Reducing the size of Docker Images

Xebia Blog - Wed, 03/18/2015 - 02:00

Using the basic Dockerfile syntax it is quite easy to create a fully functional Docker image. But if you just keep adding commands to the Dockerfile, the resulting image can become unnecessarily big. This makes it harder to move the image around.

A few basic actions can reduce this significantly.
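The actions themselves didn’t make it into this feed excerpt, but a common illustration of the idea (the base image and package here are purely illustrative) is merging commands into a single RUN instruction and cleaning caches in that same layer:

```dockerfile
# Each RUN creates a layer, so clean up inside the same instruction:
# files deleted in a *later* layer still occupy space in the image.
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```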

Categories: Companies

Neo4j: Detecting potential typos using EXPLAIN

Mark Needham - Wed, 03/18/2015 - 00:46

I’ve been running a few intro to Neo4j training sessions recently using Neo4j 2.2.0 RC1 and at some stage in every session somebody will make a typo when writing out one of the example queries.

For example, one of the queries that we do about half way through finds the actors and directors who have worked together and aggregates the movies they were in.

This is the correct query:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies

which should yield the following results:

==> +-----------------------------------------------------------------------------------------------------------------------+
==> | actor.name           | director.name    | movies                                                                      |
==> +-----------------------------------------------------------------------------------------------------------------------+
==> | "Hugo Weaving"       | "Andy Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Hugo Weaving"       | "Lana Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Laurence Fishburne" | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Keanu Reeves"       | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Carrie-Anne Moss"   | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> +-----------------------------------------------------------------------------------------------------------------------+

However, a common typo is to write ‘DIRECTED_IN’ instead of ‘DIRECTED’ in which case we’ll see no results:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
==> +-------------------------------------+
==> | actor.name | director.name | movies |
==> +-------------------------------------+
==> +-------------------------------------+
==> 0 row

It’s not immediately obvious why we aren’t seeing any results which can be quite frustrating.

However, in Neo4j 2.2 the ‘EXPLAIN’ keyword has been introduced and we can use this to see what the query planner thinks of the query we want to execute without actually executing it.

Instead the planner makes use of knowledge that it has about our schema to come up with a plan that it would run and how much of the graph it thinks that plan would touch:

EXPLAIN MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies

[Screenshot: query plan for the DIRECTED_IN query]

The first row of the query plan describes an all nodes scan which tells us that the query will start from the ‘director’ but it’s the second row that’s interesting.

The estimated rows when expanding the ‘DIRECTED_IN’ relationship is 0 when we’d expect it to at least be a positive value if there were some instances of that relationship in the database.

If we compare this to the plan generated when using the proper ‘DIRECTED’ relationship we can see the difference:

[Screenshot: query plan for the DIRECTED query]

Here we see an estimated 44 rows from expanding the ‘DIRECTED’ relationship so we know there are at least some nodes connected by that relationship type.

In summary, if you find your query not returning anything when you expect it to, prefix it with ‘EXPLAIN’ and make sure you’re not seeing the dreaded ‘0 estimated rows’.
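A related sanity check (my addition, not from the original post) is to ask the database which relationship types actually exist; a typo like ‘DIRECTED_IN’ simply won’t appear in the list:

```cypher
// List the distinct relationship types present in the graph.
// On this dataset you would expect ACTED_IN and DIRECTED, but no DIRECTED_IN.
MATCH ()-[r]->()
RETURN DISTINCT type(r) AS relationshipType
```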

Categories: Blogs

Who has two thumbs, excels at pair programming, and wants to work with us in Denver? Is it you?

Pivotal Tracker Blog - Wed, 03/18/2015 - 00:15

At Pivotal Tracker, we’re trying to make life better for developers all over the world, one project at a time. Our philosophy is that a good tool helps you do your job and gets out of your way, allowing you to focus on what’s important. We’re looking for a few great engineers to join our team to work on improving the greatest agile communication tool around. If this sounds like something you can get behind, read on to learn about life on the Tracker team.

Great culture

Daily catered breakfasts. Start the day off right with a catered breakfast while you catch up with your team, then hit the ground running—together.

Ping-pong. When you need to get up and stretch your legs and reset your brain, grab a ping-pong paddle and show off your skills (bragging rights included).

Small team. The Pivotal Tracker team is lean and mean, which means you’ll have an immediate impact.

Collaboration is key. We build things as a team, so we make decisions as a team. We believe a highly collaborative approach is part of the DNA of success.

Fun Fridays. At the end of the week, get the weekend started a little early with some head-to-head gaming.

Curling with the Tracker Team



Great location

The City. Denver is one of the fastest-growing cities in the country for a reason. Try out one of the innovative, chef-owned restaurants that are popping up everywhere or catch a show in the second-largest performing arts center in the country.

Get outside. With more than 300 days of sunshine a year, Denver is a city of active people. Whether you want to hike a fourteener, explore the trails on a mountain bike in the summer, or hit the slopes in the winter, Colorado has something to inspire you to get off the couch.

Brand-new office. We recently finished building a brand-new office building in the LoHi neighborhood. We have a full coffee bar, a dedicated B-Cycle station, and great patios for when you need a breath of fresh Colorado air.

Downtown Denver


Colorado Mountains in the Summer


Great discipline

We pair all the time. Two heads are better than one, which is why pairing is a core part of our discipline. You’ll ramp up faster and spend less time dealing with roadblocks.

TDD. Good engineers write good code; great engineers write tests. We practice test-driven development as much as possible.

Refactoring. We think that there’s always room to do things better, which is why we encourage refactoring as a regular part of the process.

Regular retros. Our process is just as important as our code, which is why we have regular retros to check in often and make sure everything is running as smoothly as possible.

Pivotal Tracker Pivots Pairing



You got this far, so why not go ahead and apply?!

Does this sound too good to be true? Look, we’re not making this up; come see for yourself! If you think we’d be a good match, apply online now and let’s get to work.

The post Who has two thumbs, excels at pair programming, and wants to work with us in Denver? Is it you? appeared first on Pivotal Tracker.

Categories: Companies

Is “Protecting the Team” the Right Thing?

Illustrated Agile - Len Lagestee - Tue, 03/17/2015 - 23:30

If you were to ask a Scrum Master what they do, a common response is “we protect the team.” From the context of protecting the team from themselves or an aggressive product owner, as Mike Cohn describes, I would agree. Protecting the team from complacency or overwork is a worthy endeavor.

For many Scrum Masters, protecting means shielding the team from outside distractions and interferences. These distractions and interferences come in different forms but most of them are from other humans. Here are three I have witnessed and experienced:

  • The “trespassers” have lost their voice of influence on a product or project. This may be a senior leader with a history of ownership on a product. As an organization grows, there is a need for them to relinquish control over their product but this is often a challenge for many senior leaders. They feel the need to strongly interject their opinions on the direction of a product vision or backlog. For the product owner, this leads to a lack of autonomy and a feeling of frustration. For the senior leader, this leads to intruding on product owner territory to get their ideas heard.
  • The “uninvited guests” have lost their assignment to direct the team. This is typically a manager with direct reports on the team. Prior to agile, they would be the ones who would assign work to the team and would always know what the team was doing. Status reports often originate from the uninvited guests (who are now looking from the outside in).
  • The “requestors” have lost their direct connection to the team. This is typically a business person who, in the past, had the ear of a developer and must now go through the product owner. When something needed to be fixed or tweaked, a quick call to the developer meant the changes were made in just a few minutes. This behavior often continues even after a team has been assigned a product owner.

Our natural response to these situations is to protect, to shield, and to make life easier for the team by limiting the number of “distractions.” But just how should a Scrum Master handle them?

As an example, when the “trespasser” attempts to influence a product backlog, is a Scrum Master expected to tell the leader to back off? I have found very few who will. Most recognize their performance review, salary, bonus, and reputation are tied to the perception the leader has of them and are not willing to take the risk.

Beyond the personal impact, being in a mode of protecting also:

  • Increases isolation. As we continue to deflect people away from the team without creating an avenue for communication and conversation, we are conditioning them to never return. While this may seem like a good thing, this is where silos are born.
  • Fosters distrust. When people are isolated it is natural for doubt and suspicion to begin. For leaders, this is typically the time they will feel the need to get involved.
  • Solves nothing. Shielding the team will buy some time…until the next time. There is a short-term alleviation of discomfort or inconvenience but the real issues triggering the need to protect won’t go away.

As an alternative to protecting the team, here are a few areas for the Scrum Master and team to focus on to begin transforming into a culture where protection is no longer necessary:

Become a radiating team. I mentioned this in my last blog post. By naturally radiating work progress, the team begins to feel open and welcoming. Nothing feels hidden or mysterious.

Create connection points and conversations. The sprint review is a great place to start. Make this session open to all and facilitate healthy dialog around what was reviewed and the direction of the product. Design other serendipitous occasions for people on the team to interact and engage with stakeholders and leaders.

Focus on co-creating opportunities. When the feeling or sense of protection emerges, use it to seek out ways to build things together. There are advantages to this:

  • Co-creation will illuminate lack of trust (and build trust) very quickly. For many organizations, a culture of distrust is just below the surface and is rarely addressed. By co-creating, we can begin to address this painful dysfunction and find ways to rebuild trust where needed.
  • Co-creation will amplify the strengths of each participant. When we spend time with each other, we learn how to leverage the best each has to offer.
  • Co-creation has transparency built-in. No need for status reports or additional meetings as vested parties have all contributed to the work. The Agile Leadership Engagement Grid walks through an approach for this type of transparency and connection at different levels in the enterprise.

SHARE YOUR THOUGHTS: Are there situations where you feel you must protect your team? Do you have any techniques to welcome interaction and co-creation? Please add your comments below.

Becoming a Catalyst - Scrum Master Edition

The post Is “Protecting the Team” the Right Thing? appeared first on Illustrated Agile.

Categories: Blogs

Stabilization Sprints and Velocity

Agile Learning Labs - Tue, 03/17/2015 - 22:42

Here is a question that just showed up in my in-box regarding how to calculate a scrum team’s velocity when they are doing stabilization sprints. This notion of stabilization sprints has become more popular lately, as they are included in SAFe (Scaled Agile Framework).


We do a 2-week stabilization sprint every 4th sprint where we complete regression testing, etc. but don’t take any new stories. Is there a rule of thumb around including a stabilization sprint in the team’s velocity?


The purpose of tracking a scrum team’s velocity is to give stakeholders (and the team) predictability into the rate at which they will complete the planned deliverables (the stories). Velocity is the rate of delivery. The stabilization work doesn’t represent specific deliverables that the stakeholders have asked for; it is simply a cost that you are paying every 4th sprint, because you aren’t really done with the stories during the non-stabilization sprints.

You can reduce this cost by having a more robust definition of done. Look at each thing that gets done during stabilization and ask “How could we do that during each sprint, for each story, so that done really means done?” As you move more work out of stabilization and into your definition of done, your predictability gets better because there are fewer surprises to be discovered during stabilization. The amount of stabilization time that you need goes down, and you can measure the cost savings in terms of reduced time and effort (which is money). By the way, you can learn more about definition of done this Wednesday at the Scrum Professionals MeetUp.

Therefore, my recommendation is to not assign points to the stabilization work.

Here are a couple of other posts related to velocity:



Categories: Companies

Agile and Scrum Trello Extensions

Scrum Expert - Tue, 03/17/2015 - 19:27
Trello is a free on-line project management tool that provides a flexible and visual way to organize anything. This approach is naturally close to the visual boards used in the Scrum or Kanban approaches. As the tool has an open architecture, some extensions have been developed for a better implementation of Agile project management in Trello. Update March 17 2015: added the Screenful for Trello extension. The visual representation and the card system used by Trello already make it possible to use it for Scrum projects that need a virtual board to display their ...
Categories: Communities

20 Common Logical Fallacies – Don’t Be a Victim!

Agile For All - Bob Hartman - Tue, 03/17/2015 - 16:15
The 20 Most Common Logical Fallacies
  1. Appeal to ignorance – Thinking a claim is true (or false) because it can’t be proven true (or false).
  2. Ad hominem – Making a personal attack against the person saying the argument, rather than directly addressing the issue.
  3. Strawman fallacy – Misrepresenting or exaggerating another person’s argument to make it easier to attack.
  4. Bandwagon fallacy – Thinking an argument must be true because it’s popular.
  5. Naturalistic fallacy – Believing something is good or beneficial just because it’s natural.
  6. Cherry picking – Only choosing a few examples that support your argument, rather than looking at the full picture.
  7. False dilemma – Thinking there are only two possibilities when there may be other alternatives you haven’t considered.
  8. Begging the question – Making an argument that something is true by repeating the same thing in different words.
  9. Appeal to tradition – Believing something is right just because it’s been done for a really long time.
  10. Appeal to emotions – Trying to persuade someone by manipulating their emotions – such as fear, anger, or ridicule – rather than making a rational case.
  11. Shifting the burden of proof – Thinking instead of proving your claim is true, the other person has to prove it’s false.
  12. Appeal to authority – Believing that just because an authority or “expert” believes something, it must be true.
  13. Red herring – When you change the subject to a topic that’s easier to attack.
  14. Slippery slope – Taking an argument to an exaggerated extreme. “If we let A happen, then Z will happen.”
  15. Correlation proves causation – Believing that just because two things happen at the same time, that one must have caused the other.
  16. Anecdotal evidence – Thinking that just because something applies to you, it must be true for most people.
  17. Equivocation – Using two different meanings of a word to prove your argument.
  18. Non sequitur – Implying a logical connection between two things that doesn’t exist. “It doesn’t follow…”
  19. Ecological fallacy – Making an assumption about a specific person based on general tendencies within a group they belong to.
  20. Fallacy fallacy – Thinking just because a claim follows a logical fallacy that it must be false.

Faulty thinking is part of life. We’re not perfect, nor do we think perfectly. It is, however, helpful to identify faulty thinking in our own mental processes. Sometimes, merely being aware of how we think can help us stay away from potential pitfalls in our logic.

It also helps to be aware when people use logical fallacies, especially to ‘rationalize’ their thinking. Don’t be afraid to call it out for what it is. Getting people together to collaborate can be a challenge in itself; candor, honesty, and arriving at a shared understanding are crucial for any decision-making process.

Be a head above. Bring people together when making decisions; just make sure we aren’t dealing with dissonance in irrational ways… :)

[HT: TheMotionMachine]

The post 20 Common Logical Fallacies – Don’t Be a Victim! appeared first on Agile For All.

Categories: Blogs

One month of mini habits

Mark Needham - Tue, 03/17/2015 - 03:32

I recently read a book in the ‘getting things done’ genre written by Stephen Guise titled ‘Mini Habits‘ and although I generally don’t like those types of books I quite enjoyed this one and decided to give his system a try.

The underlying idea is that there are two parts of actually doing stuff:

  • Planning what to do
  • Doing it

We often get stuck in between the first and second steps because what we’ve planned to do is too big and overwhelming.

Guise’s approach for overcoming this inaction is to shrink the amount of work to do until it’s small enough that we don’t feel any resistance to getting started.

It should be something that you can do in 1 or 2 minutes – stupidly small – something that you can do even on your worst day when you have no time/energy.

I’m extremely good at procrastinating so I thought I’d give it a try and see if it helped. Guise suggests starting with one or two habits but I had four things that I want to do so I’ve ignored that advice for now.

My attempted habits are the following:

  • Read one page of a data science related paper/article a day
  • Read one page of a computer science related paper/article a day
  • Write one line of data science related code a day
  • Write 50 words on the blog a day

Sooooo… has it helped?

In terms of doing each of the habits I’ve been successful so far – today is the 35th day in a row that I’ve managed to do each of them. Having said that, there have been some times when I’ve got back home at 11pm and realised that I haven’t done 2 of the habits and need to quickly do the minimum to ‘tick them off’.

The habit I’ve enjoyed doing the most is writing one line of data science related code a day.

My initial intention was that this was only going to involve writing machine learning code but at the moment I’ve made it a bit more generic so it can include things like the Twitter Graph or other bits and pieces that I want to get started on.

The main problem I’ve had with making progress on mini projects like that is that I imagine its end state and it feels too daunting to start on. Committing to just one line of code a day has been liberating in some way.

One tweak I have made to all the habits is to have some rough goal of where all the daily habits are leading as I noticed that the stuff I was doing each day was becoming very random. Michael pointed me at Amy Hoy’s ‘Guide to doing it backwards‘ which describes a neat technique for working back from a goal and determining the small steps required to achieve it.

Writing at least 50 words a day has been beneficial for getting blog posts written. Before the last month I found myself writing most of my posts at the end of the month, but I have a more regular cadence now which feels better.

Computer science wise I’ve been picking up papers which have some sort of link to databases to try and learn more of the low level detail there. e.g. I’ve read the LRU-K cache paper which Neo4j 2.2’s page cache is based on and have been flicking through the original CRDTs paper over the last few days.

I also recently came across the Papers We Love repository so I’ll probably work through some of the distributed systems papers they’ve collated next.

Other observations

I’ve found that if I do stuff early in the morning it feels better as you know it’s out of the way and doesn’t linger over you for the rest of the day.

I sometimes find myself wanting to just tick off the habits for the day even when it might be interesting to spend more time on one of the habits. I’m not sure what to make of this really – perhaps I should reduce the number of habits to the ones I’m really interested in?

With the writing it does sometimes feel like I’m just writing for the sake of it but it is a good habit to get into as it forces me to explain what I’m working on and get ideas from other people so I’m going to keep doing it.

I’ve enjoyed my experience with ‘mini habits’ so far although I think I’d be better off focusing on fewer habits so that there’s still enough time in the day to read/learn random spontaneous stuff that doesn’t fit into these habits.

Categories: Blogs

State of Scrum Survey

Notes from a Tool User - Mark Levison - Mon, 03/16/2015 - 21:45
State of Scrum Survey 2015

State of Scrum Survey 2015

How are you using (or not using) Scrum in your organization and projects?

That’s what the Scrum Alliance and ProjectsAtWork want to find out with their annual “State of Scrum” survey.

Who is using Scrum, how are they using it, and why are they using it… and if they’re not, why not?

Please take a few moments to complete the survey. Two participants will win a $500 gift card. The results will be compiled and presented later this year, and all respondents who request the report will receive a copy.

Your answers will be strictly confidential.


Categories: Blogs

From Agile Hangover to Antifragile Organisations

Scrum Expert - Mon, 03/16/2015 - 19:37
Many organisations have been swept up in agile process adoption, with good reasons! The Agile Party is coming to a close and many organisations are now beginning to look at where they are and have come to the disheartening realisation that, rather than in a new world of embracing change and competitiveness, they have a lot of new processes, not much to show for it, and people are disillusioned enough to begin to revert to older, familiar ways… This is the unfortunate age of the ‘agile hangover’. In this talk Russ ...
Categories: Communities

A Retake on the Agile Manifesto

TV Agile - Mon, 03/16/2015 - 18:49
The Agile Manifesto was the spark that brought about a shift in how software was being developed and as a result a wave of new Agile Methodologies such as SCRUM, XP, and Continuous Delivery have been introduced as “better ways of developing software”. Many development organizations have adopted these agile methodologies to improve their communication, […]
Categories: Blogs

Kanban in One Minute – Visualize the Workflow

Learn more about our Scrum and Agile training sessions on

Great new video about Kanban by Michael Badali.  This is the third video in a regular series:

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

Quality Project Management

Leading Answers - Mike Griffiths - Mon, 03/16/2015 - 05:49
How do we define quality as a project manager? Is it managing a project really well, or managing a successful project? How about managing a successful project really well, that sounds pretty good. However it poses the next question: What... Mike Griffiths
Categories: Blogs

Do You Organize Your Work for Yourself? Or Your Team?

Agile Artisans - Mon, 03/16/2015 - 02:00

We tackle our work in the way that seems right to us. We look ahead at the work on our plate and do our best to get it done. I often say that everyone makes great choices... given their own context and point of view. Unfortunately, that point of view sometimes leads us to a local optimization, where things look efficient until we step back and take a look at the bigger picture. Then we realize our local optimization wasn't nearly as efficient as we thought.

This often takes shape in how we break out our team's work. Sometimes we break everything down into layers (horizontal slicing), while other times we slice the work into smaller, but working, bits of functionality (vertical slicing).

Horizontal feels more efficient because it lets different product area specialists (like SQL or UI gurus or server-side code jockeys) work quickly and knock out a lot of st

Categories: Companies

Starting with the Mutual Learning Model

Thought Nursery - Jeffrey Fredrick - Mon, 03/16/2015 - 01:53

On February 25th we held the first meeting of the London Action Science Meetup: Starting with the Mutual Learning Model. My goal for this session was to describe Action Science as a discipline, introduce key concepts, and illustrate some of those concepts through a hands-on exercise. I’m going to cover the same ground in this blog post, including the exercise. I’d love to get your feedback in the comments, especially your experiences with the exercise.

First, what is Action Science anyway? According to the book (see Preface) it is “a science that can generate knowledge that is useful, valid, descriptive of the world, and informative of how we might change it.” More simply, it is the science of effective action in organizations. But this is no ivory tower science, remote and theoretical. Chris Argyris as scientist was also an interventionist. To practice action science is to learn how to be more effective, to help others to be more effective, and to increase the opportunities for organizational learning.

For my introduction to Action Science we began with the Mutual Learning Model as described in Eight Behaviors for Smarter Teams. This white paper from Roger Schwartz has been my go-to introduction for people interested in improving the relationships in their teams. Formerly called Ground Rules for Effective Teams, it provides a Shu-level set of behaviors that you can use to have better, more productive conversations. It also provides the motivating mindset that must be present, the mutual learning mindset, and the core values that go with it:

  • Transparency
  • Curiosity
  • Informed Choice
  • Accountability
  • Compassion

The mutual learning mindset and the accompanying values are easy to espouse. Who doesn’t want to learn? But one of the key areas of exploration in Action Science is the gap between what people claim to value — Espoused Theory — and the values that can be inferred from their actual behavior — Theory in Use.

While effective behavior starts with the theory-in-use of the mutual learning mindset (what Argyris called Model II), there’s a common default action strategy that is used instead, the Unilateral Control Model (what Argyris termed Model I). In contrast to the mutual learning model, the governing values of the unilateral control model are:

  • Achieve the purpose as I define it
  • Win, don’t lose
  • Suppress negative feelings
  • Emphasize rationality

Read these lists again, and test these values against your own experience, reflect on your own behavior… Heading into a meeting where you think there’s an important decision to be made, where there’s something of significance on the line, how do you behave? If you are like me — and according to Argyris if you are like virtually everyone — you will act consistent with the unilateral control model. You will believe your understanding of the situation is correct, you will believe the inferences you make are reality, and you will act so that “the right thing” gets done.

“Well… maybe. But so what?”, I hear you ask. The implication of the unilateral control model is limited learning; I am listening only to formulate my response, only speaking to persuade. It is defensive relationships; you know that I’m acting to get my way. It is reduced opportunity for double-loop learning; we are unlikely to question our norms, goals and values when we are struggling over whose strategy to use. Now perhaps in your context these implications don’t matter. But in teams that want to excel, companies that want to innovate, organizations looking to evolve, we can’t afford to lose learning opportunities. And on a more personal level the defensiveness engendered by the unilateral control model is ultimately unpleasant to live with.

“Okay, I’m convinced! Are we done?” Actually, now we are ready to try that hands-on exercise I mentioned at the start. It is a variation of the two-column case study, a tool for exploring and understanding our behavior in stressful conversations, the kinds where it is difficult to produce mutual learning behavior. To perform the exercise you’ll need three items: a full page of lined paper (A4 / 8.5 x 11) and two pens, preferably blue and red. You’ll also need 15-20 minutes and a desire to learn more about yourself. Do you have everything you need to begin?

Prepare by folding the paper in half vertically so that you’ve got two lined columns, one on either half of the paper. Now think of a difficult conversation you’ve recently had, are expecting, or have been putting off. At the top of the page, to the left of the fold, write (in blue pen) a sentence describing what you would want out of that conversation, and to the right what actually happened or what you fear would happen. Then in the right hand column write out the dialog as you remember or imagine it. (If this was an actual conversation don’t worry about remembering the exact wording; for this exercise the sentiment as you remember it is more valuable.) There isn’t much room on the half-page, so this process should only take 5-10 minutes and capture the essence of the exchange. Take the time to write out the dialog now, on the right hand side, before proceeding.

Once you’ve captured the key dialog on the right hand of the paper turn your attention to the left hand column. In this column write down what you expect you’d be thinking and feeling during the conversation. This could be what you are thinking as you formulate your words, this could be your feelings in reaction to theirs. This will likely take less time than the right hand column, only 3-5 minutes. Take the time to fill in the left hand column with your thoughts and feelings before proceeding.

Red pen time. Read through your dialog and circle in red every question you asked, every occurrence of a question mark. Write that count at the top of the page, on the right hand side, as the denominator of a fraction. Next, review all those circled questions and count the genuine questions: every question asked with a real interest in learning something from the answer. A genuine question is not a statement in disguise: “If the team feel they need the time isn’t that good enough?” When you’ve counted the genuine questions, write that number at the top of the page as the numerator of your fraction.

Red pen time, part two. Look through the left hand column and circle every thought or feeling that did not appear in the right hand column, that is, the thoughts and feelings that you did not express. Write a fraction at the top of the page: the number of unexpressed thoughts/feelings over the total number.

How did you do? In our meetup session the total number of questions was low (0-2) and the number of genuine questions even lower (0-1); very little red ink in the right hand column. The left hand column, by contrast, looked like it had been bled upon. Most of the thoughts and feelings were not expressed. Now consider those lists of values again and compare them to your marks in red. If we espouse curiosity, why do we ask so few genuine questions? If we espouse transparency, why aren’t we sharing our thoughts and feelings? This gap between our espoused theory and our theory in use was clearly exhibited by the exercise in the meetup. Becoming aware of that gap is a prerequisite for starting with the mutual learning model.

What’s next? Experience and reflection have shown me that while I believe the mutual learning mindset is better, it takes effort, real mindful willful effort, for me to produce it. But unilateral control behavior? I can produce that effortlessly! I know how to persuade, how to influence, how to build consensus. I know when to press my point and when to back off to avoid conflict. That sounds good, but if our intent is to learn, this instinct leads me in the wrong direction. It is a symptom of what Argyris terms Skilled Incompetence. The good news is that we can unlearn our incompetence and learn to redesign our conversations. Starting down that road is where we will head in the next Action Science meetup, March 25th. Hope to see you there!

Categories: Blogs

Python: Transforming Twitter datetime string to timestamp (‘z’ is a bad directive in format)

Mark Needham - Mon, 03/16/2015 - 00:43

I’ve been playing around with importing Twitter data into Neo4j and since Neo4j can’t store dates natively just yet I needed to convert a date string to timestamp.

I started with the following which unfortunately throws an exception:

>>> from datetime import datetime
>>> date = "Sat Mar 14 18:43:19 +0000 2015"
>>> datetime.strptime(date, "%a %b %d %H:%M:%S %z %Y")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_strptime.py", line 317, in _strptime
    (bad_directive, format))
ValueError: 'z' is a bad directive in format '%a %b %d %H:%M:%S %z %Y'

%z is actually a valid directive used to extract the timezone offset, but Python 2’s strptime simply doesn’t support it, one of the long-standing idiosyncrasies of strptime.

I eventually came across the python-dateutil library, as recommended by Joe Shaw on StackOverflow.

Using that library the problem is suddenly much simpler:

$ pip install python-dateutil

>>> from dateutil import parser
>>> parsed_date = parser.parse(date)
>>> parsed_date
datetime.datetime(2015, 3, 14, 18, 43, 19, tzinfo=tzutc())

To get to a timestamp we can use calendar as I’ve described before:

>>> import calendar
>>> timestamp = calendar.timegm(parser.parse(date).timetuple())
>>> timestamp
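As an aside, if you’re on Python 3.2 or later, strptime does understand %z, so the standard library alone is enough. A quick sketch:

```python
from datetime import datetime
import calendar

date = "Sat Mar 14 18:43:19 +0000 2015"

# Python 3's strptime supports %z, so this parses directly to an
# offset-aware datetime (tzinfo=timezone.utc here)
parsed = datetime.strptime(date, "%a %b %d %H:%M:%S %z %Y")

# utctimetuple() normalises to UTC before converting to an epoch timestamp
timestamp = calendar.timegm(parsed.utctimetuple())
print(timestamp)
```

No third-party dependency needed, at the cost of requiring the exact format string rather than dateutil’s fuzzy parsing.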

Scrum Data Warehouse Project

Many people have concerns about the possibility of using Scrum or other Agile methods on large projects that don’t directly involve software development.  Data warehousing projects are commonly brought up as examples where, just maybe, Scrum wouldn’t work.  I have worked as a coach on a couple of such projects.  Here is a brief description of how it worked (both the good and the bad) on one such project.

The project was a data warehouse migration from Oracle to Teradata.  The organization had about 30 people allocated to the project.  Before adopting Scrum, they had done a great deal of up-front analysis work.  This analysis resulted in a dependency map among approximately 25,000 tables, views and ETL scripts.  The dependency map was stored in an MS Access DB (!).  When I arrived as the coach, there was an expectation that the work would be done according to dependencies and that the “team” would simply follow that sequence.  I learned about all of this in the first week while doing boot-camp style training on Scrum and Agile with the team and helping them prepare for their first Sprint.

I decided to challenge the assumption about working based on dependencies.  I spoke with the Product Owner about possible ways to order the work based on value.  We spoke about a few factors including:
  • retiring Oracle data warehouse licenses / servers,
  • retiring disk space / hardware,
  • and saving CPU time with new hardware
The Product Owner started to work on getting metrics for these three factors.  He found that the data could be made available through some instrumentation that could be implemented quickly, so we did this.  It took about a week to get initial data from the instrumentation.

In the meantime, the Scrum teams (4 of them) started their Sprints working on the basis of the dependency analysis.  I “fought” with them to address the technical challenges of allowing the Product Owner to order the migration work by value, in other words, to break the dependencies with a technical solution.  We discussed the underlying technologies for the ETL, which included bash scripts, AbInitio and a few others.  We also worked on problems related to deploying every Sprint, including getting approval from the organization’s architectural review board on a Sprint-by-Sprint basis.  The teams were also moved a few times until an ideal team workspace was found.

After the Product Owner found the data, we sorted (ordered) the MS Access DB by business value.  This involved a fairly simple calculation based primarily on the disk space and CPU time associated with each item in the DB.  This database of 25,000 items became the Product Backlog.  I started to insist that the teams work in this order, but there was extreme resistance from the technical leads.  This led to a few weeks of arguing around whiteboards about the underlying data warehouse ETL technology.  Fundamentally, I wanted the teams to treat the data warehouse tables as the PBIs and to have both Oracle and Teradata running simultaneously (in production), with updates every Sprint migrating data between the two platforms.  The technical team kept insisting this was impossible.  I didn’t believe them.  Frankly, I rarely believe a technical team when they claim “technical dependencies” as a reason for doing things in a particular order.
Finally, after a total of 4 Sprints of 3 weeks each, we had a breakthrough.  In a one-on-one meeting, the most senior tech lead admitted to me that what I was proposing was actually possible, but that the technical people didn’t want to do it that way because it would require them to touch many of the ETL scripts multiple times; they wanted to avoid re-work.  I was (internally) furious about the wasted time, but I controlled my feelings and asked if it would be okay to bring the Product Owner into the discussion.  The tech lead agreed, and we had the conversation again with the PO present.  The tech lead admitted that breaking the dependencies was possible and explained how it could lead to the teams touching ETL scripts more than once.  The PO basically said: “Awesome!  Next Sprint we’re doing tables ordered by business value.”

A couple of Sprints later, the first of 5 Oracle licenses was retired, and the 2-year $20M project was a success, with nearly every Sprint going into production and with Oracle and Teradata running simultaneously until the last Oracle license was retired.  Although I no longer remember the financial details, the savings were huge due to the early delivery of value.  The apprentice coach there went on to become a well-known coach at the organization and is still a huge Agile advocate 10 years later!
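To make the value ordering concrete, here is a rough sketch in Python; the item names, field names and weights are all invented for illustration (the real calculation lived in the MS Access DB and used the instrumentation data described above). Each backlog item is scored by the resources it would free, and the backlog is sorted highest value first:

```python
# Hypothetical backlog items; fields and weights are illustrative only.
items = [
    {"name": "table_a", "disk_gb": 120, "cpu_hours": 30},
    {"name": "table_b", "disk_gb": 10, "cpu_hours": 200},
    {"name": "table_c", "disk_gb": 500, "cpu_hours": 5},
]

def value_score(item, disk_weight=1.0, cpu_weight=2.0):
    # Score reflects the disk space and CPU time freed by migrating
    # this item off Oracle
    return disk_weight * item["disk_gb"] + cpu_weight * item["cpu_hours"]

# Highest-value items first: this ordering becomes the Product Backlog
backlog = sorted(items, key=value_score, reverse=True)
```

The point is less the arithmetic than the principle: once each item has a value score, ordering the backlog is mechanical, and the dependency map stops dictating the sequence of work.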

Python: Checking any value in a list exists in a line of text

Mark Needham - Sat, 03/14/2015 - 04:52

I’ve been doing some log file analysis to see what Cypher queries were being run on a Neo4j instance, and I wanted to narrow down the lines I looked at to only those containing mutating operations, i.e. those containing the words MERGE, DELETE, SET or CREATE.

Here’s an example of the text file I was parsing:

$ cat blog.txt
MATCH (n:Person {name: "Mark"}) RETURN n
MERGE (n:Person {name: "Mark"}) RETURN n
MATCH (n:Person {name: "Mark"}) ON MATCH SET n.counter = 1 RETURN n

So I only want lines 2 & 3 to be returned as the first one only returns data and doesn’t execute any updates on the graph.

I started off with a very crude way of doing this:

with open("blog.txt", "r") as ins:
    for line in ins:
        if "MERGE" in line or "DELETE" in line or "SET" in line or "CREATE" in line:
           print line.strip()

A better way of doing this is to use the built-in any function and make sure at least one of the words exists in the line:

mutating_commands = ["SET", "DELETE", "MERGE", "CREATE"]
with open("blog.txt", "r") as ins:
    for line in ins:
        if any(command in line for command in mutating_commands):
           print line.strip()

I thought I might be able to simplify the code even further by using itertools but my best attempt so far is less legible than the above:

import itertools
mutating_commands = ["SET", "CREATE", "MERGE", "DELETE"]
with open("blog.txt", "r") as ins:
    for line in ins:
        if len(list(itertools.ifilter(lambda x: x in line, mutating_commands))) > 0:
            print line.strip()

I think I’ll go with the 2nd approach for now but if I’m doing something wrong with itertools and it’s much easier to use than the example I’ve shown do correct me!
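One caveat with plain substring checks: "SET" in line would also match a query containing, say, OFFSET. If that matters, a compiled regex with word boundaries is a safer variant; here is a sketch operating on an in-memory list of sample queries for brevity:

```python
import re

# Word boundaries stop SET from matching inside OFFSET, and so on
pattern = re.compile(r"\b(SET|DELETE|MERGE|CREATE)\b")

# Hypothetical sample queries, standing in for lines of blog.txt
queries = [
    'MATCH (n:Person {name: "Mark"}) SKIP 1 LIMIT 5 RETURN n',
    'MERGE (n:Person {name: "Mark"}) RETURN n',
]

mutating = [q for q in queries if pattern.search(q)]
```

In a file-reading loop the condition would simply be `if pattern.search(line):` in place of the `any(...)` check.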

