
The Power of Questions in Artifact Design

Leading Agile - Mike Cottmeyer - Thu, 02/19/2015 - 09:39

Effective coaches use powerful questions, not to obtain answers but to stimulate thought so insight emerges and knowledge is generated.

Why not leverage this power by designing artifacts explicitly around questions? This will work for either formal artifacts used to capture and persist knowledge, or informal artifacts used to guide a conversation or workshop.

In a recent engagement this became so clear to me that I decided to capture my thoughts on this topic and some examples of changes made along the way to facilitate better outcomes for the particular group in question.

Questions Help Clarify the Intent of a Conversation
  • Getting from an artifact template to valuable knowledge is rarely a straight line, and people tend to simply regurgitate known information
  • Question-driven conversations help surface latent dissonance and divergence
  • Questions help the coach and stakeholders focus on where the knowledge resides and who may provide unique perspectives

Whatever question is asked, your brain goes right to work formulating answers. Use this to your advantage by asking open-ended questions. Since the brain cannot produce a quick, pat answer to an open-ended question, deeper thought results.

Some things to keep in mind for clarifying intent:

  • Mentally map out the journey and craft questions that serve as a compass rather than turn-by-turn GPS
  • Use open powerful questions
  • Based on the context, be willing to change the question to surface what may be hidden


Before: “Customers”

After: “Who is impacted by your work?”

  • The team in question began to think in terms of the impact they were making rather than simply identifying “customers”
  • Within the team, refocusing on who they impact opened up the conversation to consider others who had not been viewed as “customers”
  • At the same time, other stakeholders were then identified and a good conversation ensued about the differences between customers and stakeholders

Questions Move the Focus from Results to Dialogue
  • Fields in a template or on a form appear to be “complete” regardless of quality, as our minds focus on filling in blanks, not on the actual content
  • Questions can push a group beyond converging on a shared understanding to creating a vibrant, divergent space for exploration and dialogue
  • Be aware of questions loaded with assumptions; initial results may continue to hide them

Some things to keep in mind for shifting focus from results to dialogue:

  • Wherever you can, replace artifact labels with a question
  • Use questions to discover new possibilities by challenging convention and challenging convergence
  • Review questions for loaded assumptions and either make them explicit or craft the question to remove the assumption


Before: “Customer Outcome”

After: “What stories would your customers tell after experiencing this value?”

  • Interesting conversations emerged about divergent views in the team about the true outcome they hoped to deliver
  • Helped to focus the conversation on the highest leverage dimensions of the outcome
  • Increased the energy of the entire group and even fed back into conversations about options considered

Questions Increase Engagement by Appealing to Credibility, Logic, and Emotion
  • Credibility should be apparent through the questions themselves
  • Logic can be expressed through the mental journey from answering one question to asking and answering the next
  • Emotional appeal can establish a state of receptivity for new ideas, for vigorous conversation, for openness to what is possible

Some things to consider for Ethos, Logos, and Pathos:

  • Use your expertise to craft questions based on principles rather than dogma
  • Design an end-to-end chain of questions that leads the group from surfacing what they know, what they don’t know, and what they assume, to meaningful dialogue
  • Craft questions that touch on the group’s sense of identity, self-interest, and passion. Engagement will improve and the resulting dialogue will be rich


Before: “Unique Value Proposition”

After: “Why is this team the best suited to work in this problem / solution space?”

  • Created energy in the team around why they were uniquely positioned to make the most impact
  • Helped a still forming team to rally around their passion and unique identity
  • Vibrant conversation that led directly to articulating a compelling vision for the team in short order

Wrapping Up

You must change the system to change the results. To change the results, change the thinking; to change the thinking, use questions. And use them not just in coaching: use them whenever and wherever you want to influence the thought process and generate new points of view. Using questions not only to coach the group and facilitate the conversation, but also to design or refine the artifact itself, is a powerful combination: the group works through the thought process in the workshop, and the prompts persist for reinforcement and continued refinement in the future.

Here are two partial iterations – of probably 5 total that I observed – of a template for facilitating a program level strategic conversation. However, the most impressive version I encountered was observing a new group take the most current template, and then update it even further to match how they needed to think through their particular strategy. That indicated to me that they embraced not just a template but an adaptive mental model for how to effectively frame the meaningful conversation they required.

Partial Canvas of an Early Version

Partial Canvas of a Later Version

The post The Power of Questions in Artifact Design appeared first on LeadingAgile.

Categories: Blogs

Python’s pandas vs Neo4j’s cypher: Exploring popular phrases in How I met your mother transcripts

Mark Needham - Thu, 02/19/2015 - 02:52

I’ve previously written about extracting TF/IDF scores for phrases in documents using scikit-learn and the final step in that post involved writing the words into a CSV file for analysis later on.

I wasn’t sure what the most appropriate tool for that analysis was, so I decided to explore the data both with Python’s pandas library and by loading it into Neo4j and writing some Cypher queries.

To do anything with Neo4j we need to first load the CSV file into the database. The easiest way to do that is with Cypher’s LOAD CSV command.

First we’ll load the phrases in and then we’ll connect them to the episodes which were previously loaded:

LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-himym/data/import/tfidf_scikit.csv" AS row
MERGE (phrase:Phrase {value: row.Phrase});
LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-himym/data/import/tfidf_scikit.csv" AS row
MATCH (phrase:Phrase {value: row.Phrase})
MATCH (episode:Episode {id: TOINT(row.EpisodeId)})
MERGE (phrase)-[:USED_IN_EPISODE {tfidfScore: TOFLOAT(row.Score)}]->(episode);
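For reference, here is a minimal sketch of the CSV shape those two LOAD CSV statements assume. The column headers (Phrase, EpisodeId, Score) come straight from the queries; the rows are hypothetical values, not the real data:

```python
import csv
import io

# Hypothetical rows in the shape the LOAD CSV statements above expect.
rows = [
    {"Phrase": "ted", "EpisodeId": "1", "Score": "0.2625"},
    {"Phrase": "olives", "EpisodeId": "1", "Score": "0.1957"},
]

# Write them out with the header row that LOAD CSV WITH HEADERS relies on.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Phrase", "EpisodeId", "Score"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The TOINT and TOFLOAT calls in the Cypher convert the string values from the CSV into numeric properties as they are loaded.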

Now we’re ready to start writing some queries. To start with we’ll write a simple query to find the top 3 phrases for each episode.

In pandas this is quite easy – we just need to group by the appropriate field and then take the top 3 records in that grouping:

top_words_by_episode = df \
    .sort(["EpisodeId", "Score"], ascending = [True, False]) \
    .groupby(["EpisodeId"], sort = False) \
    .head(3)
>>> print(top_words_by_episode.to_string())
        EpisodeId              Phrase     Score
3976            1                 ted  0.262518
2912            1              olives  0.195714
2441            1            marshall  0.155515
8143            2                 ted  0.292184
5197            2              carlos  0.227454
7482            2               robin  0.195150
12551           3                 ted  0.232662
9040            3              barney  0.187255
11254           3              mcneil  0.170619
15641           4             natalie  0.562485
16763           4                 ted  0.191873
16234           4               robin  0.102671
20715           5            subtitle  0.310866
18121           5          coat check  0.181682
20861           5                 ted  0.169973

The cypher version looks quite similar, the main difference being that we use COLLECT to generate an array of phrases for each episode and then take the top 3:

MATCH (e:Episode)<-[rel:USED_IN_EPISODE]-(phrase)
WITH e, rel, phrase
ORDER BY e.id, rel.tfidfScore DESC
RETURN e.id, e.title, COLLECT({phrase: phrase.value, score: rel.tfidfScore})[..3]
ORDER BY e.id
==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | e.id | e.title                                     | COLLECT({phrase: phrase.value, score: rel.tfidfScore})[..3]                                                                                               |
==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | 1    | "Pilot"                                     | [{phrase -> "ted", score -> 0.2625177493269755},{phrase -> "olives", score -> 0.19571419072701732},{phrase -> "marshall", score -> 0.15551468983363487}]                  |
==> | 2    | "Purple Giraffe"                            | [{phrase -> "ted", score -> 0.292184496766088},{phrase -> "carlos", score -> 0.22745438090499026},{phrase -> "robin", score -> 0.19514993122773566}]                      |
==> | 3    | "Sweet Taste of Liberty"                    | [{phrase -> "ted", score -> 0.23266190616714866},{phrase -> "barney", score -> 0.18725456678444408},{phrase -> "officer mcneil", score -> 0.17061872221616137}]           |
==> | 4    | "Return of the Shirt"                       | [{phrase -> "natalie", score -> 0.5624848345525686},{phrase -> "ted", score -> 0.19187323894701674},{phrase -> "robin", score -> 0.10267067360622682}]                    |
==> | 5    | "Okay Awesome"                              | [{phrase -> "subtitle", score -> 0.310865508347106},{phrase -> "coat check", score -> 0.18168178787561182},{phrase -> "ted", score -> 0.16997258596683185}]               |
==> | 6    | "Slutty Pumpkin"                            | [{phrase -> "mike", score -> 0.2966610054610693},{phrase -> "ted", score -> 0.19333276951599407},{phrase -> "robin", score -> 0.1656172994411056}]                        |
==> | 7    | "Matchmaker"                                | [{phrase -> "ellen", score -> 0.4947912795578686},{phrase -> "sarah", score -> 0.24462913913669443},{phrase -> "ted", score -> 0.23728319597607636}]                      |
==> | 8    | "The Duel"                                  | [{phrase -> "ted", score -> 0.26713931416222847},{phrase -> "marshall", score -> 0.22816702335751904},{phrase -> "swords", score -> 0.17841675237702592}]                 |
==> | 9    | "Belly Full of Turkey"                      | [{phrase -> "ericksen", score -> 0.43145756691027665},{phrase -> "mrs ericksen", score -> 0.1939318283559959},{phrase -> "kendall", score -> 0.1846969793866628}]         |
==> | 10   | "The Pineapple Incident"                    | [{phrase -> "ted", score -> 0.439756993033922},{phrase -> "trudy", score -> 0.36367907631894536},{phrase -> "carl", score -> 0.16413071244131686}]                        |
==> | 11   | "The Limo"                                  | [{phrase -> "moby", score -> 0.48314164479037003},{phrase -> "party number", score -> 0.30458929780262456},{phrase -> "ranjit", score -> 0.1991061739767796}]             |
==> +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

In the cypher version we get one row per episode, whereas with the Python version we get three rows per episode. It might be possible to achieve this effect with pandas too, but I wasn’t sure how to do so.
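For what it’s worth, one way to get the one-row-per-episode shape in pandas looks like this. This is a sketch using sort_values (the modern spelling of sort) and a small hypothetical frame standing in for the real data:

```python
import pandas as pd

# Hypothetical stand-in for the tfidf_scikit.csv data
df = pd.DataFrame({
    "EpisodeId": [1, 1, 1, 1, 2, 2, 2],
    "Phrase": ["ted", "olives", "marshall", "robin", "ted", "carlos", "robin"],
    "Score": [0.26, 0.20, 0.16, 0.13, 0.29, 0.23, 0.20],
})

# Mimic Cypher's COLLECT(...)[..3]: keep the top-3 rows per episode,
# then collect the phrases into one list per episode.
top3 = (df.sort_values(["EpisodeId", "Score"], ascending=[True, False])
          .groupby("EpisodeId")
          .head(3)
          .groupby("EpisodeId")["Phrase"]
          .agg(list)
          .reset_index())
print(top3)
```

The `.agg(list)` step is what collapses the three rows per episode into a single list-valued cell, matching the Cypher output shape.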

Next let’s find the top phrases for a single episode – the type of query that might be part of an episode page on a How I met your mother wiki:

top_words = df[(df["EpisodeId"] == 1)] \
    .sort(["Score"], ascending = False) \
    .head(20)
>>> print(top_words.to_string())
      EpisodeId                Phrase     Score
3976          1                   ted  0.262518
2912          1                olives  0.195714
2441          1              marshall  0.155515
4732          1               yasmine  0.152279
3347          1                 robin  0.130418
209           1                barney  0.124412
2146          1                  lily  0.122925
3637          1                signal  0.103793
1366          1                goanna  0.098138
3524          1                 scene  0.095342
710           1                   cut  0.091734
2720          1              narrator  0.086462
1147          1             flashback  0.078296
1148          1        flashback date  0.070283
3224          1                ranjit  0.069393
4178          1           ted yasmine  0.058569
1149          1  flashback date robin  0.058569
525           1                  carl  0.058210
3714          1           smurf pen1s  0.054365
2048          1              lebanese  0.054365

MATCH (e:Episode {title: "Pilot"})<-[rel:USED_IN_EPISODE]-(phrase)
WITH phrase, rel
ORDER BY rel.tfidfScore DESC
RETURN phrase.value AS phrase, rel.tfidfScore AS score
LIMIT 20
==> +-----------------------------------------------+
==> | phrase                 | score                |
==> +-----------------------------------------------+
==> | "ted"                  | 0.2625177493269755   |
==> | "olives"               | 0.19571419072701732  |
==> | "marshall"             | 0.15551468983363487  |
==> | "yasmine"              | 0.15227880637176266  |
==> | "robin"                | 0.1304175242341549   |
==> | "barney"               | 0.12441175186690791  |
==> | "lily"                 | 0.12292497785945679  |
==> | "signal"               | 0.1037932464656365   |
==> | "goanna"               | 0.09813798750091524  |
==> | "scene"                | 0.09534236041231685  |
==> | "cut"                  | 0.09173366535740156  |
==> | "narrator"             | 0.08646229819848741  |
==> | "flashback"            | 0.07829592155397117  |
==> | "flashback date"       | 0.07028252601773662  |
==> | "ranjit"               | 0.06939276915589167  |
==> | "ted yasmine"          | 0.05856877168144719  |
==> | "flashback date robin" | 0.05856877168144719  |
==> | "carl"                 | 0.058210117288760355 |
==> | "smurf pen1s"          | 0.05436505297972703  |
==> | "lebanese"             | 0.05436505297972703  |
==> +-----------------------------------------------+

Our next query is a negation – find the episodes which don’t mention the phrase ‘robin’. In python we can do some simple set operations to work this out:

all_episodes = set(range(1, 209))
robin_episodes = set(df[(df["Phrase"] == "robin")]["EpisodeId"])
>>> print(set(all_episodes) - set(robin_episodes))
set([145, 198, 143])

In cypher land a query will suffice:

MATCH (episode:Episode), (phrase:Phrase {value: "robin"})
WHERE NOT (episode)<-[:USED_IN_EPISODE]-(phrase)
RETURN episode.id AS id, episode.season AS season, episode.number AS episode

And finally a mini recommendation engine type query – how many of the top phrases in Episode 1 were used in other episodes:

First python:

phrases_used = set(df[(df["EpisodeId"] == 1)] \
    .sort(["Score"], ascending = False) \
    .head(20)["Phrase"])
phrases = df[df["Phrase"].isin(phrases_used)]
print (phrases[phrases["EpisodeId"] != 1] \
    .groupby(["Phrase"]) \
    .size() \
    .order(ascending = False))

Here we’ve pulled it out into a few steps – first we identify the top phrases, then we find out where they occur across the whole data set and finally we filter out the occurrences in the first episode and count the other occurrences.

marshall    207
barney      207
ted         206
lily        206
robin       204
scene        36
signal        4
goanna        3
olives        1
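The same three-step pipeline is easy to run end to end on a small hypothetical frame (with sort_values in place of the now-deprecated sort and order):

```python
import pandas as pd

# Hypothetical stand-in for the full phrase/score data
df = pd.DataFrame({
    "EpisodeId": [1, 1, 2, 2, 3],
    "Phrase": ["ted", "olives", "ted", "robin", "ted"],
    "Score": [0.26, 0.20, 0.29, 0.20, 0.23],
})

# Step 1: the top phrases in episode 1 (head(2) stands in for the post's head(20))
phrases_used = set(df[df["EpisodeId"] == 1]
                   .sort_values("Score", ascending=False)
                   .head(2)["Phrase"])

# Step 2: every occurrence of those phrases across the whole data set
phrases = df[df["Phrase"].isin(phrases_used)]

# Step 3: count the occurrences outside episode 1
counts = (phrases[phrases["EpisodeId"] != 1]
          .groupby("Phrase")
          .size()
          .sort_values(ascending=False))
print(counts)
```

Here "ted" appears in both other episodes while "olives" appears in none, so only "ted" survives into the counts.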

In cypher we can write a query to do this as well:

MATCH (episode:Episode {title: "Pilot"})<-[rel:USED_IN_EPISODE]-(phrase)
WITH phrase, rel, episode
ORDER BY rel.tfidfScore DESC
MATCH (phrase)-[:USED_IN_EPISODE]->(otherEpisode)
WHERE otherEpisode <> episode
RETURN phrase.value AS phrase, COUNT(*) AS numberOfOtherEpisodes
ORDER BY numberOfOtherEpisodes DESC
==> +------------------------------------+
==> | phrase     | numberOfOtherEpisodes |
==> +------------------------------------+
==> | "barney"   | 207                   |
==> | "marshall" | 207                   |
==> | "ted"      | 206                   |
==> | "lily"     | 206                   |
==> | "robin"    | 204                   |
==> | "scene"    | 36                    |
==> | "signal"   | 4                     |
==> | "goanna"   | 3                     |
==> | "olives"   | 1                     |
==> +------------------------------------+

Overall there’s not much in it – for some of the queries I found it easier in cypher and for others easier with pandas. It’s always useful to have multiple tools in the toolbox!


Diamond Kata - Thoughts on Incremental Development

Mistaeks I Hav Made - Nat Pryce - Thu, 02/19/2015 - 01:35
Some more thoughts on my experience doing the Diamond Kata with property-based tests…

When test-driving development with example-based tests, an essential skill to be learned is how to pick each example to be most helpful in driving development forwards in small steps. You want to avoid picking examples that force you to take too big a step (A.K.A. “now draw the rest of the owl”). Conversely, you don’t want to get sidetracked into a boring morass of degenerate cases and error handling when you’ve not yet addressed the core of the problem to be solved.

Property-based tests are similar: the skill is in picking the right next property to let you make useful progress in small steps. But the progress from nothing to solution is different.

Doing TDD with example-based tests, I’ll start with an arbitrary, specific example (arbitrary but carefully chosen to help me make useful progress), and write a specific implementation to support just that one example. I’ll add more examples to “triangulate” the property I want to implement, and generalise the implementation to pass the tests. I continue adding examples, triangulating, and generalising the implementation until I have extensive coverage and a general implementation that meets the required properties.

Example TDD progress

Doing TDD with property-based tests, I’ll start with a very general property, and write a specific but arbitrary implementation that meets the property (arbitrary but carefully chosen to help me make useful progress). I’ll add more specific properties, which force me to generalise the implementation to make it meet all the properties. The properties also double-check one another (testing the tests, so to speak). I continue adding properties and generalising the implementation until I have a general implementation that meets the required properties.

Property TDD progress

I find that property-based tests let me work more easily in larger steps than when testing individual examples.

I am able to go longer without breaking the problem down to avoid duplication and boilerplate in my tests, because the property-based tests have less scope for duplication and boilerplate. For example, if solving the Diamond Kata with example-based tests, my reaction to the “now draw the rest of the owl” problem that Seb identified would be to move the implementation towards a compositional design, so that I could define the overall behaviour declaratively and not need to write integration tests for the whole thing.

For example, when I reached the test for “C”, I might break the problem down into two parts. Firstly, a higher-order mirroredLines function that is passed a function to generate each line, with the type (in Haskell syntax):

    mirroredLines :: (Char -> Char -> String) -> Char -> String

I would test-drive mirroredLines with a stub function to generate fake lines, such as:

    let fakeLine ch maxCh = "line for " ++ [ch]

Then I would write a diamondLine function that calculates the actual lines of the diamond, and declare the diamond function by currying:

    let diamond = mirroredLines diamondLine

I wouldn’t feel the need to write tests for the diamond function given adequate tests of mirroredLines and diamondLine.
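For illustration, that decomposition can be sketched in Python as well. The signatures above are Haskell; the diamond_line details below are my own filling-in for the sake of a runnable example, not code from the kata itself:

```python
def mirrored_lines(line_for, max_ch):
    """Build the figure by generating lines for 'A'..max_ch, then mirroring.

    line_for(ch, max_ch) -> str produces a single line of the figure.
    """
    chars = [chr(c) for c in range(ord("A"), ord(max_ch) + 1)]
    top = [line_for(ch, max_ch) for ch in chars]
    # Mirror everything above the widest line to form the bottom half.
    return "\n".join(top + top[-2::-1])

def diamond_line(ch, max_ch):
    """One line of the diamond: outer padding, the letter(s), inner padding."""
    outer = " " * (ord(max_ch) - ord(ch))
    if ch == "A":
        return outer + "A" + outer
    inner = " " * (2 * (ord(ch) - ord("A")) - 1)
    return outer + ch + inner + ch + outer

# The currying step from the post, spelled as a lambda in Python.
diamond = lambda max_ch: mirrored_lines(diamond_line, max_ch)
print(diamond("C"))
```

As in the post, mirrored_lines can be tested on its own with a stub line function, and diamond_line can be tested line by line, so the composed diamond function needs no integration tests of its own.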

Please Help Me Title Essays on Estimation

Johanna Rothman - Wed, 02/18/2015 - 23:40

I have finished the content for Essays on Estimation. But, I need a new title. The book is more than loosely coupled essays. It reads like a real book, with progression and everything.

I have a number of ideas. They are (in no particular order):

  1. Predicting the Unpredictable: Essays on Estimating Project Costs and Dates
  2. Essays on Estimation: Pragmatic Approaches for Estimating Cost and Schedule
  3. How Much Will This Project Cost or When Will it be Done? Essays on Estimation
  4. Essays on Estimation: How to Predict When Your Project Will be Done
  5. Pragmatic Estimation: How to Create and Update Schedule or Cost Estimates
  6. Practical Approaches to Estimation: Create and Update Your Prediction Without Losing Your Hair
  7. Essays on Estimation: Practical Approaches for Schedule and Cost Estimates

Do you like any of these ideas? Have a better idea? I would like a title that explains what’s in the book.

I numbered these so you could respond easily in the comments with the number, if you like. Or, you can type out the entire title or a new title. I am open to ideas.

Thank you in advance.


Benefits of Pair Programming Revisited

Powers of Two - Rob Myers - Wed, 02/18/2015 - 22:55

Let's take a fresh look at pair programming, with an eye towards systems optimization.

Pair programming is perhaps the most controversial of the many agile engineering practices. It appears inefficient on the surface, and may often measure as such based on code output (the infamous KLOC metric) or number of coding tasks completed. But dismissing it without exploring the impact to your overall value-stream could actually slow down throughput.  We'll take a look at the benefits of pair programming--some subtle, some sublime--so you are well-equipped to evaluate the impact.

Pair programming is the practice of having two people collaborating simultaneously on a coding task. The pair can be physically co-located, as in the picture below, or they can use some form of screen-sharing and video-chat software.  The person currently using the keyboard and mouse is called the "Driver" and the other person is the "Navigator."  I say "currently" because people will often switch roles during a pairing session.

Low-stress, co-located pair programming looks like this: Neither Driver nor Navigator has to lean sideways to see the screen, or to type on the keyboard. The font is large enough so both can read the code comfortably. We're not sitting so close that our chairs collide, but not so far that we need to mirror the displays.

Misconceptions

There are many misconceptions about pair programming, leading people to conclude that "it's two doing the work of one." Here are a few of the more common misapprehensions...

Navigator as Observer

The Navigator is not watching over your shoulder, per se.

The Navigator is an active participant. She discusses design and implementation options where necessary; keeps the overall task, and the bigger design picture, in mind; manages the short list of emerging sub-tasks; selects the next step, or test, or sub-task to work on; and catches many things that the compiler may not catch. (If there is a compiler!)  There isn't really time for boredom.

If she is literally looking over your shoulder, then you're not at a workstation suitable for pairing. Cubes are generally bad for pairing, because the desks are small or curved inwards. Co-located pairing is side-by-side, at a table that has a straight edge or is curved outwards.

Often only one wireless keyboard and one wireless mouse are used.  Wireless devices make switching Drivers much easier.

If the pair is co-located, one screen for the code is sufficient. Other monitors can be used for research, test results, whatever. You may want to avoid having the same image displayed on two adjacent screens.  It may seem easier at first, but eventually one of you will gesture at the screen and the other will have to lean over to see the target of the gesture.
Pairing as Just Sitting Together

Pairing is not two people working on two separate files, even if one file contains the unit tests and the other contains the implementation code.  Both people agree on the intent of the next test, and on the shape of the implementation.  They are collaborating, and sharing critical information with each other.

The Navigator may occasionally turn to use a second, nearby computer to do some related research (e.g., the exact syntax for the needed regular expression). This is always in response to the ongoing conversation and task. It is not "oh, look, I got e-mail from Dr. Oz again...!"
Navigator as Advisor

Pair programming is not a master/apprentice relationship. It's the meeting of two professionals to collaborate on a programming task.  They both bring different talents and experience to the task. Both people tend to learn a lot, because everyone has various levels of experience with various tools and technologies.

In 2003 I was tech lead and XP coach on a growing team.  We had just hired an excellent candidate nearly fresh out of college. He and I sat down to pair on the next programming task.  I described how we were planning to redo the XML-emitting Java code to meet the schema of our biggest client, instead of supporting our long-lived internal schema.  I explained that we expected to have to change quite a bit of code, and perhaps the database schema as well, and that we'd be scheduling it in bits and pieces over the upcoming months.  I reassured him that we had plenty of test coverage to feel confident in our drastic overhaul.

He frowned, and said, "Why not just run the data through an XSLT transformation?!"  (XSLT is a rich programming language written as XML, designed for such transformations. Until this point, I hadn't given it much consideration.)

He saved us months of rework! To my delight, I learned a new technology (new for me anyway). My contribution to the efforts was to show him how we could use JUnit to test-drive the XSLT transformations.  Both parties learned a great deal from each other.

In software development, there are no "juniors" or "seniors," just people with varying degrees of knowledge and experience with a wide variety of technologies and techniques.

Systemic Benefits

Fewer Defects

This is the most-cited benefit of pair programming.  It's relatively easy to measure over time.

In my own experience, it's not clear that this is the main benefit.  I've always combined pair programming with TDD, and TDD catches a broad variety of defects very quickly.  In that productive but scientifically uncontrolled environment, measuring defects caught by pairing becomes much more difficult.

But this is where systems thinking comes in:  Pair programming reduces rework, allowing a task or story that is declared "done" to be done, without having to revisit the code excessively in the future.  Pair programming may be slower, the way quality craftsmanship always appears slower:  The code remains as valuable in the future.

The benefits that follow reflect this.  Pair programming is an investment.

Better Design

I've noticed that even the most experienced developer, when left to himself, will on occasion write code that only one person can quickly understand: himself.  And often even he won't understand it months from now.

But if two people write the code together, it instantly breaks that lone-wolf barrier, resulting in code that is understandable to, and changeable by, many.

Continuous Code Review

Because most (around 80%) of defects and design issues are caught and fixed immediately with pair programming, the need for code reviews diminishes.
Many times I've seen this nasty cycle:

  • All code must be reviewed by the Architect.
  • The Architect is overburdened with code reviews.
  • The Architect rubber-stamps your changes in order to keep up with demand for code reviews.
  • Defects slip past the code-review gate and make their way into production.
  • All code must be reviewed by the Architect.

This shows up in the value-stream (or your Kanban wall) as an overloaded code-review queue.

Also, if the Architect does catch a mistake, the task is sent back to the developers for repair and rework. This shows up in the value-stream as a loop backwards from code-review to development. And rework is waste.  The longer the delay between the time a defect is introduced and the time it is fixed, the greater the waste.

From a Lean, systems, or Theory of Constraints standpoint, the removal of a gated activity (the code review) in favor of a parallel or collaborative activity (pair programming) at the constraint (the most overburdened queue) may improve throughput.

Enhanced Development Skills

The educational value of pair programming is immense, ongoing, and narrowed in scope to actual tasks that the team encounters.

An individual who encounters a challenging technological hurdle may assume he knows the right thing to do when he doesn't, or spend a great deal of time in detailed research, or succumb to feelings of inadequacy and try to find a circuitous, face-saving route around the hurdle.

When a pair encounters a hurdle that neither has seen before, they know immediately that it's a significant challenge rather than a perceived inadequacy, and that they have a number of options. Those options are explored in just enough detail to overcome the hurdle efficiently.

People don't often pair with the same person for an extended period of time, so there's opportunity for a broad, and just-deep-enough, continuous education in relevant technologies, tools, techniques.

Through this ongoing process of shared learning and cross pollination, the whole development team becomes more and more capable.

For example, perhaps your SQL-optimization expert pairs with someone who is interested in the topic today.  Tomorrow, the SQL-optimization expert can go on vacation, without bringing development to a halt, and without having a queue of unfinished work pile up on her desk while she's in Hawai'i.

Not everyone has to be an expert in everything.  The task can be completed sufficiently to complete the story, and perhaps a more specific story or task will bring the tanned-and-rested expert's attention to the mostly-optimized SQL query at a later time.

This is an important risk-mitigation strategy, because having too few people who know how to perform a critical task is asking for trouble.
Improved Flow

Imagine you are the leader of a development team.  You walk in after a nice relaxing weekend and see one of your developers hard at work. "Hey, Travis, how was your weekend?"

Travis gets this frustrated look on his face (generally, developers should not play poker), "Uh...what? Oh.  Fine!" and he waves you away dismissively.  You've pulled him from The Zone.

What if, instead, you had walked in to see Travis and Fred sitting together, conversing quietly, and looking intently at a single computer screen.  Wouldn't you save your greeting for later?

Or, what if you had something important to ask? "Hey guys, are you going to make the client meeting at 3pm today?"

Travis continues to stare intently at the screen, and types something; but Fred spins his chair, "Oh, right!  I'll add that to our calendars." He writes a brief note on a 3x5 card that was already in his hand, and smiles reassuringly, "We'll be there!"

See the difference? Fred has handled the priority interruption without pulling Travis out of The Zone, without forcing the pair to task-switch (another popular form of waste). And Travis will be able to get Fred caught up very quickly, and they'll be on their way to finishing the task.
Mutual Encouragement
"Encouragement" contains the root word "courage."  With two, it's always easier to face our fears, our fatigue, and our foibles.

Even if both Driver and Navigator are fatigued (e.g., after a big team lunch, or a long morning of meetings), together they may muster enough energy and enthusiasm to carry on and complete a task.
Enhanced Self-Control
Have you ever taken a "brief" personal break, only to discover 90 minutes later that you're still involved in that phone call with Mom, Facebook update, or silent daydream?

Don't feel bad. It's natural.

If you and your pair-programming partner agree to a 15-minute break, however, you will be more likely to time your activities to return to the pairing workstation in 15 minutes, and you're more likely to engage in actual restful activities, rather than checking e-mail for 13 minutes before walking to the coffee shop.

Also, while writing code, neither Driver nor Navigator will allow themselves to become repeatedly distracted by e-mail pop-ups or cell phone ringtones.  If it's not urgent, it can wait. Or, either person can call for a break.

Human Systems
We have to remember that humans make up this complex adaptive system we use to build things, and so human nature has an extremely large impact on how we build things.

Pairing helps alleviate distraction, fatigue, brittle code, skill gaps, embarrassment over inadequacy, communication issues, and fear of failure. Pairing thus improves overall throughput by decreasing rework and hidden, "deferred" work.

I find that pair programming is usually faster when measured by task completion over time.  On average, if you give two people two dev tasks, A and B, they will be done with both tasks sooner if they pair up and complete A and B sequentially, rather than if one takes A and the other takes B.

On the surface, this may seem to contradict my earlier systems-related advice about replacing a gate with collaboration.  But there is no gate, explicit or implicit, between these developers or between most software development tasks.

Also, much depends on where a change is applied relative to the actual constraint.  If you optimize upstream from the constraint, you'll likely gum up the system even more.  (You didn't think scaled agility was going to be delivered in a pretty, branded, simple, mass-produced, gift-wrapped box...did you?! ;-)

But if you discover that your current constraint is within the development process, then allowing the team to use pair programming may considerably improve overall throughput of value. (Emphasis on value. "Velocity" isn't value. Lines of code are not necessarily valuable either. Working, useable, deployed code is valuable.)

Try It
I've used pair programming on a number of teams since 1998, and I've always found it beneficial in the ways described above, and many other ways.

All participants in my technical classes, since 2002, pair on all coding labs.  It's a great experience for participants: they often learn a great deal from each other as well as from the labs. It also benefits me, the instructor:  I can tell when a pair is in The Zone, stuck, finished, or not getting along; all by listening carefully to the levels and tone of conversations in the room.

I recommend, as with all new practices, you and your team outline a simple plan (or "test") to determine whether or not the new practice is impacting the working environment for the better.  Then try it out in earnest for 30 days, and re-evaluate at a retrospective.  Pair programming, as with many seemingly counter-intuitive agile practices, may just surprise you.

Categories: Blogs

February Newsletter: Kanban for DevOps, 7 Lean Metrics, New Custom Card Icons, and More

Here’s the February 2015 edition of the LeanKit monthly newsletter. Make sure you catch the next issue in your inbox and subscribe today. Kanban for DevOps: 3 Reasons IT Ops Uses Lean Flow Heard the joke about being “done” and being “done-done?” In part one of this three-part blog series, Dominica DeGrandis explores why using a […]

The post February Newsletter: Kanban for DevOps, 7 Lean Metrics, New Custom Card Icons, and More appeared first on Blog | LeanKit.

Categories: Companies

The Agile Appraisals Manifesto

Growing Agile - Wed, 02/18/2015 - 10:55
Last week (9 – 11 February 2015) we attended the Scrum Coach Retreat in South Africa. Some friends of ours formed a group dedicated to looking at how appraisals are done at large corporates and if they could be made more agile. We loved what they came up with so much we had to blog about it!

The Authors:

Philip (@7ft_phil), Justin (@the_jus), Candice (@candicemesk), Yusuf (@ykaloo)

The Situation

Given: Appraisal must happen in a corporate agile team.
When: Doing agile and doing it right in terms of delivery of software.
We want to: Ensure that the appraisal process supports the agile process we embrace, e.g. adapting to change, more regular reviews, inspect and adapt, teams over individuals, encouraging courageous feedback.

Proposed Elevator Pitch

We’ve all experienced performance appraisals, and we’ve all realised that they violate the agile values that we hold dear. Perhaps subjectivity from the person in power led to an unfair or skewed result; perhaps the infrequency of assessments meant recent events dominated rather than the big picture. Do you truly feel that the feedback received allowed you to improve as much as you can, and do you feel the reward is fair? Imagine guidelines that steer appraisals past the pitfalls and reward an agile-friendly outcome.

The Agile Appraisals Manifesto

Fairness over malice.
Actionable feedback over numbers.
Frequent reality checks over enticing gaming of the process.
Rewards reflective of business value added over rewarding the status quo.

Principles of the Agile Appraisals Manifesto
  • We respect individuals regardless of the power dynamic in the process.
  • Criteria should be visible and the process transparent.
  • The process is fluid and can change as long as the manifesto is applied.
  • Appraisals should be regular enough to ensure no surprises.
  • We embrace, support, and reward changing actions where business reality demands it.
  • We recognise the value of work done.
  • Appraisals should be done by people with context.
  • We assess as frequently as possible, but not so frequently that it becomes disruptive.
  • Metrics should be easy to understand, assessable and achievable, and have a shelf life.
  • We provide frequent actionable feedback to help teams and people improve.

Thank you so much for sharing this with us Phil, Justin, Candice and Yusuf. We think it's a great place to start conversations about improving how appraisals are done, especially for organisations where getting rid of appraisals is too big a step right now.

Categories: Companies

Try, Option or Either?

Xebia Blog - Wed, 02/18/2015 - 10:45

Scala has a lot of different options for handling and reporting errors, which can make it hard to decide which one is best suited for your situation. In Scala and functional programming languages it is common to make the errors that can occur explicit in the function's signature (i.e. return type), in contrast with the common practice in other programming languages, where either special values are used (-1 for a failed lookup, anyone?) or an exception is thrown.

Let's go through the main options you have as a Scala developer and see when to use what!

A special type of error that can occur is the absence of some value. For example when looking up a value in a database or a List you can use the find method. When implementing this in Java the common solution (at least until Java 7) would be to return null when a value cannot be found or to throw some version of the NotFound exception. In Scala you will typically use the Option[T] type, returning Some(value) when the value is found and None when the value is absent.

So instead of having to look at the Javadoc or Scaladoc you only need to look at the type of the function to know how a missing value is represented. Moreover you don't need to litter your code with null checks or try/catch blocks.
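For example, List's find already follows this convention; a quick sketch:

```scala
val xs = List(1, 2, 3)

xs.find(_ > 2)  // Some(3): the first element matching the predicate
xs.find(_ > 9)  // None: no element matches
```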

Another use case is parsing input data: user input, JSON, XML, etc. Instead of throwing an exception for invalid input you simply return a None to indicate parsing failed. The disadvantage of using Option here is that you hide the type of error from the user of your function, which, depending on the use case, may or may not be a problem. If that information is important, keep reading the next sections.

An example that ensures that a name is non-empty:

def validateName(name: String): Option[String] = {
  if (name.isEmpty) None
  else Some(name)
}

You can use the validateName method in several ways in your code:

// Use a default value
validateName(inputName).getOrElse("Default name")

// Apply some other function to the result
validateName(inputName).map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Option[Person]
for {
  name <- validateName(inputName)
  age <- validateAge(inputAge)
} yield Person(name, age)

Option is nice to indicate failure, but if you need to provide some more information about the failure Option is not powerful enough. In that case Either[L,R] can be used. It has 2 implementations, Left and Right. Both can wrap a custom type, respectively type L and type R. By convention Right is right, so it contains the successful result and Left contains the error. Rewriting the validateName method to return an error message would give:

def validateName(name: String): Either[String, String] = {
  if (name.isEmpty) Left("Name cannot be empty")
  else Right(name)
}

Similar to Option, Either can be used in several ways. It differs from Option because you always have to specify the so-called projection you want to work with, via the left or right method:

// Apply some function to the successful result
validateName(inputName).right.map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Either[String, Person]
for {
  name <- validateName(inputName).right
  age <- validateAge(inputAge).right
} yield Person(name, age)

// Handle both the Left and Right case
validateName(inputName).fold(
  error => s"Validation failed: $error",
  result => s"Validation succeeded: $result"
)

// And of course pattern matching also works
validateName(inputName) match {
  case Left(error) => s"Validation failed: $error"
  case Right(result) => s"Validation succeeded: $result"
}

// Convert to an option:
validateName(inputName).right.toOption
This projection is kind of clumsy and can lead to several convoluted compiler error messages in for expressions. See for example the excellent and detailed discussion of the Either type in The Neophyte's Guide to Scala Part 7. Due to these issues several alternative implementations of an Either-like type have been created; the best known are the \/ type in Scalaz and the Or type in Scalactic. Both avoid the projection issues of the Scala Either and, at the same time, add additional functionality for aggregating multiple validation errors into a single result type.
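As an aside, later Scala versions addressed this in the standard library itself: since Scala 2.12, Either is right-biased, so map and flatMap operate on the Right case without any projection. A minimal sketch (validateAge here is a hypothetical companion to the validateName example above):

```scala
// Since Scala 2.12 Either is right-biased: map/flatMap work on the
// Right case directly, so the .right projection is no longer needed.
case class Person(name: String, age: Int)

def validateName(name: String): Either[String, String] =
  if (name.isEmpty) Left("Name cannot be empty") else Right(name)

// Hypothetical companion validation, for illustration only
def validateAge(age: String): Either[String, Int] =
  try Right(age.toInt)
  catch { case _: NumberFormatException => Left(s"Not a number: $age") }

val person = for {
  name <- validateName("Ada")  // no .right needed
  age  <- validateAge("36")
} yield Person(name, age)
// person == Right(Person("Ada", 36))
```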


Try[T] is similar to Either. It also has 2 cases: Success[T] for the successful case and Failure for the failure case. The main difference is that the failure can only contain a Throwable, while Success[T] wraps the result value of type T. You can use it instead of a try/catch block to postpone exception handling. Another way to look at it is to consider it as Scala's version of checked exceptions.

Compare these 2 methods that parse an integer:

// Throws a NumberFormatException when the integer cannot be parsed
def parseIntException(value: String): Int = value.toInt

// Catches the NumberFormatException and returns a Failure containing that exception
// OR returns a Success with the parsed integer value
def parseInt(value: String): Try[Int] = Try(value.toInt)

The first function needs documentation describing that an exception can be thrown. The second function describes in its signature what can be expected and requires the user of the function to take the failure case into account. Try is typically used when exceptions need to be propagated; if the exception itself is not needed, prefer any of the other options discussed.

Try offers similar combinators as Option[T] and Either[L,R]:

// Apply some function to the successful result
parseInt(input).map(_ * 2)

// Combine with other validations, short-circuiting on the first Failure
// returning a new Try[Stats]
for {
  age <- parseInt(inputAge)
  height <- parseDouble(inputHeight)
} yield Stats(age, height)

// Use a default value
parseInt(input).getOrElse(0)

// Convert to an option
parseInt(input).toOption

// And of course pattern matching also works
parseAge(inputAge) match {
  case Failure(exception) => s"Validation failed: ${exception.getMessage}"
  case Success(result) => s"Validation succeeded: $result"
}

Note that Try is not needed when working with Futures! Futures combine asynchronous processing with the Exception handling capabilities of Try! See also Try is free in the Future.
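A small sketch of that interaction, assuming the default global ExecutionContext: an exception thrown inside a Future body completes the Future as a failure, which recover can handle directly, with no explicit Try needed:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// "oops".toInt throws NumberFormatException; the Future captures it
// as a failure, and recover maps it to a fallback value.
val f: Future[Int] = Future("oops".toInt).recover {
  case _: NumberFormatException => -1
}
// Await.result(f, 1.second) == -1
```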

Since Scala runs on the JVM all low-level error handling is still based on exceptions. In Scala you rarely see usage of exceptions and they are typically only used as a last resort. More common is to convert them to any of the types mentioned above. Also note that, contrary to Java, all exceptions in Scala are unchecked. Throwing an exception will break your functional composition and probably result in unexpected behaviour for the caller of your function. So it should be reserved as a method of last resort, for when the other options don’t make sense.
If you are on the receiving end of the exceptions you need to catch them. In Scala syntax:

try {
  // code that may throw
} catch {
  case e: Exception => println("Oops")
} finally {
  // cleanup that always runs
}

What is often done wrong in Scala is that all Throwables are caught, including the Java system errors. You should never catch Errors because they indicate a critical system error like the OutOfMemoryError. So never do this:

try {
  // code that may throw
} catch {
  case _ => println("Oops. Also caught OutOfMemoryError here!")
}

But instead do this:

import scala.util.control.NonFatal

try {
  // code that may throw
} catch {
  case NonFatal(_) => println("Ooops. Much better, only the non fatal exceptions end up here.")
}

To convert exceptions to Option or Either types you can use the methods provided in scala.util.control.Exception (scaladoc):

import scala.util.control.Exception._

val i = 0
val asOption: Option[Int] = catching(classOf[ArithmeticException]) opt { 1 / i }
val asEither: Either[Throwable, Int] = catching(classOf[ArithmeticException]) either { 1 / i }

Finally remember you can always convert an exception into a Try as discussed in the previous section.


  • Option[T], use it when a value can be absent or some validation can fail and you don't care about the exact cause. Typically in data retrieval and validation logic.
  • Either[L,R], similar use case as Option but when you do need to provide some information about the error.
  • Try[T], use when something Exceptional can happen that you cannot handle in the function. This, in general, excludes validation logic and data retrieval failures but can be used to report unexpected failures.
  • Exceptions, use only as a last resort. When catching exceptions use the facility methods Scala provides, and never catch { case _ => }; instead use catch { case NonFatal(_) => }.

One final piece of advice is to read through the Scaladoc for all the types discussed here. There are plenty of useful combinators available that are worth using.
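For instance, the three types convert into one another with one-line combinators (names below are illustrative):

```scala
// Option -> Either: toRight supplies the Left value for the None case
val opt: Option[Int] = Some(42)
val either: Either[String, Int] = opt.toRight("missing")  // Right(42)

// Try -> Option: toOption discards the exception, keeping only success
val tried: scala.util.Try[Int] = scala.util.Try("42".toInt)
val backToOption: Option[Int] = tried.toOption            // Some(42)
```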

Categories: Companies

Definition of DevOps and the Definition of “Done” – Quick Links


I was poking around for a good definition of DevOps and found a thoughtful article written by Ernest Mueller called What is DevOps?  Highly recommended reading as it includes lots of insight about the relationship between Agile and DevOps.  FWIW, I feel that the concept of the Definition of “Done” is Scrum’s own original take on the same class of ideas: breaking down silos in an organization to get stuff into the marketplace faster and faster.  I even talked about operationalizing software development back in 2004 and 2005 as a counterpoint to the project management approach which puts everyone in silos and pushes work through phase gates.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

The Trojan Retrospective – From Crickets to Conversations

Agile Management Blog - VersionOne - Wed, 02/18/2015 - 03:14


This blog is part of a series on “Agile Trojan Horses – Covert Appetizers for Agile Discovery.” This series is intended to help spark conversations that restore focus on agile fundamentals, whet the appetite to discover more about agile, and help apply agile in day-to-day decision-making.

One of the elements that I love about Agile and Scrum is the focus on humility, reflection and continuous inspection and adaptation. One of my favorite Agile Principles is #12…

At regular intervals,
the team reflects on how to become more effective,
then tunes and adjusts its behavior accordingly.

One of my favorite Scrum events is the retrospective. As the Scrum Guide says…
The sprint retrospective is an opportunity for the Scrum team to inspect itself and create a plan for improvements to be enacted during the next sprint…

…The purpose of the sprint retrospective is to:
– Inspect how the last sprint went with regards to people, relationships, process, and tools;
– Identify and order the major items that went well and potential improvements; and,
– Create a plan for implementing improvements to the way the Scrum team does its work.

Some common impediments prevent teams from applying the agile principle of regular reflection and from having effective Scrum retrospectives.

- LACK OF ATTENDANCE – Team members not showing up to the retrospective.

- LACK OF PARTICIPATION – Not hearing anything at the event besides the sound of crickets – team members showing up, but not sharing anything.

- LACK OF TRANSPARENCY & INSPECTION – Meaningless conversations – no transparency and no inspection. Nothing other than whining, finger-pointing and complaints.

- LACK OF ADAPTATION – Meaningful conversations – healthy transparency and inspection but no meaningful adaptation. No follow-up actions to make things better.

Besides creating safety, one of the most important elements needed for a team to have meaningful inspection and adaptation is an ice-breaker. For many team members new to agile, the act of reflecting on the team in a safe setting is awkward and unfamiliar.

I usually begin by putting up some flip charts with the questions…

1.    What worked well?
2.    What could have worked better?
3.    What actions will we take in the coming sprint to make it more effective?

I add a couple of additional questions…

4.    What was the most valuable knowledge we gained in this sprint?
5.    What contributions would we like to acknowledge in this sprint?

In most cases, with a few nudges here or there, the team starts opening up and we have a meaningful conversation. However, if the team is still not talking, you could try this approach…

In this approach, we put up a bunch of assertions in front of the team and ask them to share their responses to the statement. These approaches fall into five major categories that also help raise the team’s awareness of key elements relevant to agility…






For each assertion, ask the team to respond by choosing one of four categories…

HUH…? – We are unaware of or unclear about what this means.
HEALTHY – Our team is healthy in this area. No adaptation needed.
NEEDS SOME ADAPTATION – Our team could use some adaptation here.
BLOCKED! NEEDS URGENT ADAPTATION – Our team needs urgent adaptation here.

If the team is co-located, you could create a grid on flip charts and the team could put up colored post-its.

Encourage each team member to also jot down why they chose a particular post-it, maybe with an example, if they can. If not, that is completely OK; they can just put up a blank post-it.

If the team has specific ideas on how they might adapt the next sprint to be more effective, they can jot them down on a post-it and put it up in the last column.

The grid might look something like this…
If the team is distributed, you could send them an online survey and have them respond either before the retrospective event or during the initial portion of the event.

Either way, once the team has finished responding, you can sum up the responses in a table like this…

You might try jotting down your responses as you read this blog as well, before you take this idea to the team. Now that we have set the stage, let’s start reviewing the assertions…


We begin with a section adapted from the Agile Manifesto. Reflect on how aware your team is of the Manifesto and how effective you all feel you were in applying it to your work. Time-box this section, and review feedback as a group.


The next section encourages reflection on the Agile Principles. Ask the team to consider each principle and share their thoughts on how effectively you all applied them. Discuss as a group.


Let’s start reflecting on one of the frameworks under the agile umbrella – Scrum. At a high level, the Scrum framework has three core objectives. As a team, reflect on whether the processes you used helped accomplish these objectives.


Scrum is not just about rituals or “what” we do as a team. The heart of Scrum is the core Scrum Values that define “how” we work together as a team. Pull up the five Scrum Values and allow the team to share their thoughts on the team culture and how close it was to being true to the Scrum Values. Discuss as a group.


As described in the Scrum Guide, Scrum is based on the three pillars of empirical process control. Ask the team to share their thoughts on how effectively the team applied empiricism in their work. Discuss as a group.


The Scrum Guide clearly lays out the accountability for each role in the framework. Ask the team to reflect on how effectively each role or group delivered their accountability. Discuss as a group…


The Scrum Guide clearly lays out the purpose and desired outcomes for each of the five events in the framework. Ask the team to reflect on how effective they thought each event was. Discuss as a group…


The Scrum Guide clearly lays out the purpose of each of the three artifacts. Ask the team to reflect on how closely each artifact accomplished the purpose defined in the Guide. Discuss as a group…


Scrum can only be as effective as the level of transparency in the organization. Reflect as a team on how transparent you feel the organization is. Discuss as a group…


Mary and Tom Poppendieck have done some amazing work on how the lean principles from manufacturing offer a better approach to software development. These principles are available on their website, and I have taken the liberty to adapt them for our approach.



Reflect as a team on how effective you were in applying the lean mindset and discuss as a group.

By now, hopefully, the silence and crickets at the beginning of the retrospective have been replaced by meaningful conversations about how the team can inspect and adapt its work.

Hopefully, these conversations have dispelled some common myths about what agile is and is not and acted as an appetizer for the team to explore some more about agile software delivery.

When you sum up the responses, the graph might look something like this…

Before you end the session, ask the team to pair up and pick at least one action item to make a thin slice of process improvement in the next sprint. Without adaptation, the transparency and inspection of a retrospective are meaningless. If there are too many options to choose from, use dot-voting to help rank the options and pick the ones that fit the capacity constraints of the team.

Try out this approach even if you begin with baby steps by reflecting as an individual. If you like, you can use this Excel spreadsheet as a starting point.

Either way, please share your thoughts. Keep calm, and Scrum on!

Categories: Companies

Mocking External Services

George Dinwiddie’s blog - Tue, 02/17/2015 - 23:53

Should your tests mock outside services or not? I keep seeing discussions on this topic. On the one hand, mocking lets your tests be in control of the testing context; otherwise it may be very difficult to create a reliable automated test.

  • The external service might return different results at different times.
  • The external service might be slow to respond.
  • Using the external service might require running the test in a particular environment.
  • It may be impossible to generate certain error conditions with the real service.
  • There may be side-effects of using the real service.

On the other hand, our mock of the external service might differ from it in important ways.

  • Our mock service might have a slightly different interface than the real one.
  • Our mock service might accept slightly different parameters than the real one.
  • Our mock service might return slightly different results than the real one.
  • The real service might change behavior, and we won’t notice until we deploy to production.

This leaves us in a quandary. What to do?

Here is a pattern that I’ve found helpful:

1. Isolate my code from the external service using the Adapter Pattern. Less often, I might use the Mediator Pattern. I find this isolation a good idea whether or not I need it for testing. It allows me to wrap the external service with an API that’s custom made for my application, offering just the right affordances in terms that make sense in my context. All of the translation between the terminology of my application and the terminology of the external service is handled within the adapter. Changes in the external service, or changing to another service altogether, should be limited to the confines of the adapter.

2. Test my application using a Mock Adapter. Strictly speaking, it’s not a mock, which implies that it self-validates the usage. Generally it’s a fake or a stub (depending on whose terminology) that provides data to my system, or a spy that captures output data from my system for examination by the test. Testing with the mock adapter demonstrates how my application behaves with the API that is specific to the needs of my application.

3. Test my Adapter with the external system. As I’m building my application, I discover needs that it has of the adapter. For each of those needs, I have a test (or multiple tests) of the application using the mock adapter. When generating those tests, I also write tests that demonstrate how I think the real service, as seen through the real adapter, will behave.

These tests go in a different suite and get run less frequently. They get run often as I’m growing the adapter. They also get run periodically to make sure things still work. And they get run whenever receiving a new version of the external service. This has saved a lot of work when the new version was delivered with regressions in the existing functionality.

Sometimes testing the adapter and service is hard to do, as the external system may have difficult environmental constraints. I’ve had cases where the adapter had to be running in the same application server as the external service in order for things to work. This makes it more difficult to run the tests. The tests either need to be inside the container, too, or they need a proxy inside the container to talk to the adapter. Either is a PITA, but worth the effort if you can’t test the combination of adapter and external service any other way.

What does this look like? Declan Whelan asked this question, and prompted this article.

I’ve got an application that delivers race horse horoscopes to help bettors make logical decisions on race day. The top API of the domain is the CrystalBall class:

public class CrystalBall {

    private HoroscopeProvider horoscopeProvider;

    public CrystalBall(HoroscopeProvider horoscopeProvider) {
        this.horoscopeProvider = horoscopeProvider;
    }

    public static CrystalBall instance() {
        return new CrystalBall(new CachingHoroscopeProvider(
                MumblerHoroscopeProvider.instance()));
    }

    public String requestHoroscope(String horsename, String effectiveDate) {
        return horoscopeProvider.horoscopeFor(horsename, effectiveDate);
    }
}

Note that the factory method, instance(), provides a default HoroscopeProvider. This one happens to cache the results in a Derby database to avoid unnecessary calls to the actual MumblerHoroscopeProvider which is the adapter to the service, Mumbler, which is implemented in a third party library. The adapter looks like this:

public class MumblerHoroscopeProvider implements HoroscopeProvider {

    private static final String DEFAULT_RULES = "{Outlook cloudy, try again later.}";
    private Mumbler mumbler;

    public MumblerHoroscopeProvider(String rules) {
        // build the Mumbler grammar from the given rules string
    }

    private MumblerHoroscopeProvider() {
        mumbler = new Mumbler();
    }

    public MumblerHoroscopeProvider(File file) {
        try {
            // read the grammar rules from the file
        } catch (IOException e) {
            // fall back to DEFAULT_RULES when the file cannot be read
        }
    }

    public MumblerHoroscopeProvider(InputStream stream) {
        try {
            // read the grammar rules from the stream
        } catch (IOException e) {
            // fall back to DEFAULT_RULES when the stream cannot be read
        }
    }

    public String horoscopeFor(String horsename, String effectiveDate) {
        return mumbler.generate();
    }

    public static HoroscopeProvider instance() {
        String resourceName = "/com/gdinwiddie/equinehoroscope/resources/MumblerHoroscopeRules.g";
        ResourceLoader loader = new ResourceLoader();
        InputStream stream = loader.loadResourceStream(resourceName);
        return new MumblerHoroscopeProvider(stream);
    }

    static class ResourceLoader {
        InputStream loadResourceStream(String name) {
            return getClass().getResourceAsStream(name);
        }
    }
}
This adapter is a little complicated because it has the flexibility of providing the necessary grammar file to Mumbler via a String, a File, or an InputStream. In all cases it meets the needs of the service to have a grammar that describes the horoscopes to be generated. It also provides the method horoscopeFor() that translates to the service API call generate().

The unit tests use a mock adapter, MockHoroscopeProvider, that allows the test to add horoscopes and expect them to be returned in that order.

public class MockHoroscopeProvider implements HoroscopeProvider {
    List<String> horoscopes = new ArrayList<String>();

    public void addHoroscope(String horoscope) {
        horoscopes.add(horoscope);
    }

    public String horoscopeFor(String horsename, String effectiveDate) {
        return horoscopes.remove(0);
    }
}


You can see this being used in the test, ensureWeGetHoroscopeFromProvider():

public class EquineHoroscopeTest {
    private static final String CANNED_HOROSCOPE = "The rain in Spain falls mainly on the plain.";
    private MockHoroscopeProvider mockHoroscopeProvider;

    @Before
    public void setUp() {
        mockHoroscopeProvider = new MockHoroscopeProvider();
    }

    @Test
    public void ensureWeGetHoroscopeFromProvider() {
        mockHoroscopeProvider.addHoroscope(CANNED_HOROSCOPE);
        CrystalBall forecaster = new CrystalBall(mockHoroscopeProvider);
        assertEquals(CANNED_HOROSCOPE,
                forecaster.requestHoroscope("Doesn't Matter", "today"));
    }
}

The test queues up a horoscope and then verifies that it is returned by the CrystalBall. In this manner, we check that our system works properly as long as the HoroscopeProvider provides a horoscope when expected. There are other tests for other classes verifying that the horoscope is properly cached when the CachingHoroscopeProvider is used.

What about our real horoscope service? We have tests for our real adapter calling the real service:

public class MumblerHoroscopeProviderTest {
    @Test
    public void canGenerateHoroscopeFromString() {
        HoroscopeProvider provider = new MumblerHoroscopeProvider(
                "{Better go back to bed.}");
        assertEquals("Better go back to bed.",
                provider.horoscopeFor("any horse", "any date"));
    }

    @Ignore // TODO need to handle when run from common build
    @Test
    public void canGenerateHoroscopeFromFile() {
        String fileName = "test/com/gdinwiddie/equinehoroscope/resources/dummyHoroscope.g";
        HoroscopeProvider provider = new MumblerHoroscopeProvider(new File(fileName));
        assertEquals("Simple sentence.",
                provider.horoscopeFor("doesn't matter", "yesterday"));
    }

    @Test
    public void handleMissingRulesFile() {
        HoroscopeProvider provider = new MumblerHoroscopeProvider(new File(
                "noSuchFile.g")); // file name elided in the original; any missing path will do
        assertEquals("Outlook cloudy, try again later.",
                provider.horoscopeFor("any horse", "any date"));
    }

    @Test
    public void canGenerateHoroscopeFromResourceFile() {
        String resourceName = "/com/gdinwiddie/equinehoroscope/resources/dummyHoroscope.g";
        InputStream stream = getClass().getResourceAsStream(resourceName);
        HoroscopeProvider provider = new MumblerHoroscopeProvider(stream);
        assertEquals("Simple sentence.",
                provider.horoscopeFor("doesn't matter", "yesterday"));
    }

    @Test
    public void defaultRulesFile() {
        HoroscopeProvider provider = MumblerHoroscopeProvider.instance();
        assertNotSame("Outlook cloudy, try again later.",
                provider.horoscopeFor("any horse", "any date"));
    }

    @Test
    public void printSomeHoroscopes() {
        HoroscopeProvider provider = MumblerHoroscopeProvider.instance();
        for (int i = 0; i < 10; i++) {
            System.out.println(provider.horoscopeFor("", ""));
        }
    }
}

As I look at these tests, I see that I left them in less-than-perfect condition. The test to generate a horoscope from a rules file is ignored because the build file for a full system that includes this component runs from a different directory, and cannot find the file. This test was never fixed because that system depends on a resource rather than a file. I note now that I’m missing a test for the behavior when the resource is specified but missing. The last test is unusual in that it has no assertion. It merely prints a few horoscopes. This allows visual inspection of the output, which has significant randomness. This is a quick-and-dirty method of testing the “interestingness” of the generated output given the current state of the rules file. I don’t yet know a way to assert “interestingness.”


Isolate your system from an external service using an adapter.

[Figure: system isolated from the external service using an adapter]

Use a mock adapter to test your system. Also test the real adapter and external service to verify your assumptions.

Test in two parts.

Categories: Blogs

Checking Your Scrum Team Health

Scrum Expert - Tue, 02/17/2015 - 21:54
The first value of the Agile Manifesto is to prefer “Individuals and interactions over processes and tools”. But how do you know whether your individuals and teams actually like your current Agile approach? In his article, Henrik Kniberg presents a simple tool to assess the health of your Scrum teams: a visual board that records the opinions (green, orange, red) of the team members on specific dimensions of their work: ease of release, technical quality, value, speed, etc. Coaches organize workshops with the team, facilitating discussions around ...
Categories: Communities

Clean Tests: Isolating Internal State

Jimmy Bogard - Tue, 02/17/2015 - 19:46

Other posts in this series:

One of the more difficult problems with slow tests that touch shared resources is building a clean starting point. In order for tests to be reliable, the environment in which the test executes needs to be in a reliable, consistent starting state. In slow tests, in which I’m accessing out-of-process dependencies, I’m worried about two things:

  • External state is known and consistent
  • Internal state is known and consistent

In order to keep my sanity, I want to put the responsibility of building that known starting point into a Standard Fixture. This fixture is responsible for creating that starting point, and it’s this starting point that ensures the long-term maintainability of my system.

Consistent internal state

Since I’m using AutoFixture for the creation and configuration of my fixture, it will be AutoFixture I use to build out my Standard Fixture. My standard fixture will be a single class that my tests interact with, and because the name “Fixture” is a bit overused in many libraries, I have to name my class somewhat specifically. It starts with building out an isolated sandbox for my internal state:

public class SlowTestFixture
{
    private static IContainer Root = IoC.BuildCompositionRoot();

    public SlowTestFixture()
    {
        Container = Root.CreateChildContainer();
    }

    public IContainer Container { get; }
}

I use a DI container as my composition root in my systems, and this combined with child containers allows me to ensure that I have a unique, isolated sandbox for running my tests. The root container is my blueprint for an execution context, and represents what I do in production. The child container’s configuration, whatever I might do to it, lives only for the context of this one test.
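The child-container idea can be sketched in a few lines of Python using `collections.ChainMap`; this is an analogy for the behaviour described, not the actual container API. Lookups in a child fall through to the root blueprint, while any overrides live only in the child's own layer:

```python
from collections import ChainMap

# Root "container": the production blueprint of service registrations.
root = {"mailer": "SmtpMailer", "clock": "SystemClock"}

def create_child_container(parent):
    # Reads fall through to the parent; writes land in the child's own dict,
    # so stubs registered for one test never leak into another.
    return ChainMap({}, parent)

test_a = create_child_container(root)
test_b = create_child_container(root)

test_a["mailer"] = "FakeMailer"  # stub registered for one test only
```

After the override, `test_a` resolves the fake while `test_b` and the root still resolve the production registration, which is exactly the isolation property the child container provides.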

Throughout the rest of my tests, I can access that container to build components as need be. The next piece I’ll need is to tell AutoFixture about this fixture, and to use it both when someone needs access to the context as well as when someone needs an instance of something.

In AutoFixture, this is done via fixture customizations:

public class SlowTestsCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        var contextFixture = new SlowTestFixture();

        fixture.Register(() => contextFixture);

        fixture.Customizations.Add(new ContainerBuilder(contextFixture.Container));
    }
}

Customizations alter behaviors of AutoFixture’s fixture object, allowing me to effectively add new links in a chain of responsibility pattern. I want two behaviors added:

  • Access to the fixture
  • Building container-supplied instances

The first is simple, I can register individual instances with AutoFixture using the “Register” method. The second, since it depends on the type supplied, needs its own isolated customization:

public class ContainerBuilder : ISpecimenBuilder
{
    private readonly IContainer _container;

    public ContainerBuilder(IContainer container)
    {
        _container = container;
    }

    public object Create(object request, ISpecimenContext context)
    {
        var type = request as Type;

        if (type == null || type.IsPrimitive)
            return new NoSpecimen(request);

        var service = _container.TryGetInstance(type);

        return service ?? new NoSpecimen(request);
    }
}

AutoFixture calls each specimen builder, one at a time, and each specimen builder either builds out an instance or returns a null object, the “NoSpecimen” object.
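That chain-of-responsibility protocol — ask each builder in turn and treat a null object as "pass" — can be sketched in Python. `NO_SPECIMEN` and the builder names are illustrative stand-ins, not AutoFixture's API:

```python
NO_SPECIMEN = object()  # null-object result, like AutoFixture's NoSpecimen

class ContainerBuilder:
    """One link in the chain: try the container, otherwise pass."""

    def __init__(self, container):
        self.container = container

    def create(self, request):
        return self.container.get(request, NO_SPECIMEN)

class IntBuilder:
    """A later link that only knows how to supply ints."""

    def create(self, request):
        return 42 if request is int else NO_SPECIMEN

def resolve(request, builders):
    # The chain: first builder that returns something other than
    # NO_SPECIMEN wins; otherwise keep asking down the line.
    for builder in builders:
        result = builder.create(request)
        if result is not NO_SPECIMEN:
            return result
    raise LookupError(request)

builders = [
    ContainerBuilder({"service": "ContainerSuppliedService"}),
    IntBuilder(),
]
```

A request the container knows about is answered by the first link; anything else falls through to later links, which is why adding a customization never disturbs the builders already registered.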

Ultimately, the goal is to have my tests use a pre-built component, or use the fixture as necessary:

public InvoiceApprovalTests(Invoice invoice,
    SlowTestFixture fixture,
    IInvoiceApprover invoiceApprover)
{
    _invoice = invoice;
    // ... remaining assignments elided in the original
}


The last part I need to fill in is to modify Fixie to use my customizations when building up test instances. This is in my Fixie convention where I had previously configured Fixie to use AutoFixture to instantiate my test classes:

private object CreateFromFixture(Type type)
{
    var fixture = new Fixture();

    new SlowTestsCustomization().Customize(fixture);

    return new SpecimenContext(fixture).Resolve(type);
}

My tests now have an isolated sandbox for internal state, as each child container instance is isolated per fixture. If I need to inject stubs/fakes, I don’t affect any other tests because of how I’ve built the boundaries of my test in Fixie.

In the next post, I’ll look at isolating external state (the database).


Categories: Blogs

Designing a job crafting experience

Alastair Simpson created a Mentor Canvas intended for mentoring UX designers.

I generally like it because it provides a reasonable structure in a collaborative, canvas style.

However, to make it more appealing to me, I'd like to adjust it to generalise to a non-UX designer perspective and also to reflect some slightly different assumptions of what I consider important for developing oneself and others.  Specifically, I prefer a job crafting approach.

I've created a template on Google Drive.

Categories: Blogs

Should Agile Equal Being Happy?

Leading Agile - Mike Cottmeyer - Tue, 02/17/2015 - 15:50

Ever had a conversation with someone about what they thought “being” Agile meant?  I was having that conversation today.  The other guy said he was surprised that he wasn’t happier.  I asked him to help me understand what he meant by that.

An Agile team should be happy

Someone, somewhere, convinced this fellow that the Manifesto for Agile Software Development included life, liberty, and the pursuit of happiness.

The reality is that he was misguided, just like all of those people who think that if you’re on an Agile team you don’t plan, you don’t test, or you don’t document. The idea that Agile is all teddy bears and rainbows has somehow spread to the far reaches of the Agile community.

When asked if Agile makes me happy, my response was simple.


Being an Agile coach, leading Agile transformations, and helping customers reach their potential does not make me happy. It leaves me with a feeling of satisfaction. Much like mowing my lawn every weekend in summer, it doesn’t make me happy; but when I am done with the task at hand, I look at what I have accomplished and I feel satisfied. Isn’t that a more realistic goal, the pursuit of satisfaction as it relates to work? Happiness is an emotional state that I reserve for my personal life, when I combine satisfaction from my work with positive emotions in my off-time.

Is the goal of happiness within an Agile team misguided?

I’m interested in your thoughts.

The post Should Agile Equal Being Happy? appeared first on LeadingAgile.

Categories: Blogs

UPscALE Agile in Medium & Large Enterprises, Stuttgart, Germany, March 11 2015

Scrum Expert - Tue, 02/17/2015 - 11:25
UPscALE Agile in Medium & Large Enterprises is a one-day conference focused on scaling Scrum and other Agile software development approaches. All the talks are in German except for the keynote. In the agenda of UPscALE Agile in Medium & Large Enterprises you will find topics like the “Scrum @ Scale – A Scaling Framework based on My Experiences” keynote delivered by Jeff Sutherland. The other presentations will be given by medium and large enterprises like Volkswagen and SAP about their experience in scaling Agile. Web site: Location for the ...
Categories: Communities

Cancelling $http requests for fun and profit

Xebia Blog - Tue, 02/17/2015 - 10:11

At my current client, we have a large AngularJS application that is configured to show a full-page error whenever one of the $http requests ends up in error. This is implemented with an error interceptor, as you would expect it to be. However, we’re also using some calculation-intense resources that happen to time out once in a while. This combination is tricky: a user triggers a resource request when navigating to a certain page, navigates to a second page, and suddenly ends up with an error message, because the request from the first page triggered a timeout error. This is a particularly unpleasant side effect that I’m going to address in a generic way in this post.

There are of course multiple solutions to this problem. We could create a more resilient implementation in the backend that does not time out, but accepts retries. We could change the full-page error into something less ‘in your face’ (but you would still get some out-of-place error notification). For this post I’m going to fix it using a different approach: cancel any running requests when a user switches to a different location (the route part of the URL). This makes sense; your browser does the same when navigating from one page to another, so why not mimic this behaviour in your Angular app?

I’ve created a pretty verbose implementation to explain how to do this. At the end of this post, you’ll find a link to the code as a packaged bower component that can be dropped in any Angular 1.2+ app.

Angular does not offer many options for cancelling a running request. Under the hood, there are some places where you could hook in, but that won’t be necessary. If we look at the $http usage documentation, the timeout property is mentioned, and it accepts a promise that aborts the underlying call when resolved. Perfect! If we set such a promise on all created requests, and resolve them all at once when the user navigates to another page, we’re (probably) all set.

Let’s write an interceptor to plug in the promise in each request:

  .factory('HttpRequestTimeoutInterceptor', function ($q, HttpPendingRequestsService) {
    return {
      request: function (config) {
        config = config || {};
        if (config.timeout === undefined && !config.noCancelOnRouteChange) {
          config.timeout = HttpPendingRequestsService.newTimeout();
        }
        return config;
      }
    };
  })

The interceptor will not overwrite the timeout property when it is explicitly set. Also, if the noCancelOnRouteChange option is set to true, the request won’t be cancelled. For better separation of concerns, I’ve created a new service (the HttpPendingRequestsService) that hands out new timeout promises and stores references to them.

Let’s have a look at that pending requests service:

  .service('HttpPendingRequestsService', function ($q) {
    var cancelPromises = [];

    function newTimeout() {
      var cancelPromise = $q.defer();
      cancelPromises.push(cancelPromise);
      return cancelPromise.promise;
    }

    function cancelAll() {
      angular.forEach(cancelPromises, function (cancelPromise) {
        cancelPromise.promise.isGloballyCancelled = true;
        cancelPromise.resolve();
      });
      cancelPromises.length = 0;
    }

    return {
      newTimeout: newTimeout,
      cancelAll: cancelAll
    };
  })

So, this service creates new timeout promises that are stored in an array. When the cancelAll function is called, all timeout promises are resolved (thus aborting all requests that were configured with the promise) and the array is cleared. By setting the isGloballyCancelled property on the promise object, a response promise method can check whether it was cancelled or another exception has occurred. I’ll come back to that one in a minute.
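The bookkeeping in this service translates to most languages. Here is a Python sketch of the same idea, with plain dicts standing in for the deferred objects; the names are illustrative, not an Angular API:

```python
class HttpPendingRequestsService:
    """Hand out cancellation tokens and trip them all at once."""

    def __init__(self):
        self._tokens = []

    def new_timeout(self):
        token = {"cancelled": False, "globally_cancelled": False}
        # Keep a reference so cancel_all can reach every outstanding token.
        self._tokens.append(token)
        return token

    def cancel_all(self):
        for token in self._tokens:
            # The extra flag lets response handlers tell a deliberate
            # cancellation apart from a genuine error.
            token["globally_cancelled"] = True
            token["cancelled"] = True
        # Clear the list so requests made after navigation form a fresh batch.
        self._tokens.clear()


service = HttpPendingRequestsService()
first = service.new_timeout()
second = service.new_timeout()
service.cancel_all()
later = service.new_timeout()
```

Tokens handed out before `cancel_all()` are tripped; a token created afterwards starts clean, mirroring how the Angular service only cancels requests that were in flight when the route changed.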

Now we hook up the interceptor and call the cancelAll function at a sensible moment. There are several events triggered on the root scope that are good hook candidates. Eventually I settled for $locationChangeSuccess. It is only fired when the location change is a success (hence the name) and not cancelled by any other event listener.

angular
  .module('angularCancelOnNavigateModule', [])
  .config(function ($httpProvider) {
    $httpProvider.interceptors.push('HttpRequestTimeoutInterceptor');
  })
  .run(function ($rootScope, HttpPendingRequestsService) {
    $rootScope.$on('$locationChangeSuccess', function (event, newUrl, oldUrl) {
      if (newUrl !== oldUrl) {
        HttpPendingRequestsService.cancelAll();
      }
    });
  });

When writing tests for this setup, I found that the $locationChangeSuccess event is triggered at the start of each test, even though the location did not change yet. To circumvent this situation, the function does a simple difference check.

Another problem popped up during testing. When the request is cancelled, Angular creates an empty error response, which in our case still triggers the full-page error. We need to catch and handle those error responses. We can simply add a responseError function in our existing interceptor. And remember the special isGloballyCancelled property we set on the promise? That’s the way to distinguish between cancelled and other responses.

We add the following function to the interceptor:

      responseError: function (response) {
        if (response.config.timeout.isGloballyCancelled) {
          return $q.defer().promise;
        }
        return $q.reject(response);
      }

The responseError function must return a promise that normally re-throws the response as rejected. However, that’s not what we want: neither a success nor a failure callback should be called. We simply return a never-resolving promise for all cancelled requests to get the behaviour we want.
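The never-resolving promise trick translates to most async models. In Python, for example, an unfulfilled `concurrent.futures.Future` plays the same role; this is a sketch under the assumption that callers only react when the future completes:

```python
from concurrent.futures import Future

def response_error(response):
    """Mimic the interceptor: swallow cancellations, re-raise real errors."""
    if response["timeout"].get("globally_cancelled"):
        # A future that nobody ever completes: neither a success nor a
        # failure callback will ever run for a cancelled request.
        return Future()
    # Genuine errors are re-raised so normal error handling still fires.
    raise RuntimeError(response)

cancelled = response_error({"timeout": {"globally_cancelled": True}})
```

The returned future stays pending forever, which is the behaviour we want for cancelled requests.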

That’s all there is to it! To make it easy to reuse this functionality in your Angular application, I’ve packaged this module as a bower component that is fully tested. You can check the module out on this GitHub repo.

Categories: Companies

Python/pandas: Column value in list (ValueError: The truth value of a Series is ambiguous.)

Mark Needham - Mon, 02/16/2015 - 23:39

I’ve been using Python’s pandas library while exploring some CSV files and although for the most part I’ve found it intuitive to use, I had trouble filtering a data frame based on checking whether a column value was in a list.

A subset of one of the CSV files I’ve been working with looks like this:

$ cat foo.csv
Foo
1
2
3
4
5
6
7
8
9
10

Loading it into a pandas data frame is reasonably simple:

import pandas as pd
df = pd.read_csv('foo.csv', index_col=False, header=0)
>>> df
   Foo
0    1
1    2
2    3
3    4
4    5
5    6
6    7
7    8
8    9
9   10

If we want to find the rows which have a value of 1 we’d write the following:

>>> df[df["Foo"] == 1]
   Foo
0    1

Finding the rows with a value less than 7 is as you’d expect too:

>>> df[df["Foo"] < 7]
   Foo
0    1
1    2
2    3
3    4
4    5
5    6

Next I wanted to filter out the rows containing odd numbers which I initially tried to do like this:

odds = [i for i in range(1,10) if i % 2 <> 0]
>>> odds
[1, 3, 5, 7, 9]
>>> df[df["Foo"] in odds]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/markneedham/projects/neo4j-himym/himym/lib/python2.7/site-packages/pandas/core/", line 698, in __nonzero__
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

Unfortunately that doesn’t work and I couldn’t get any of the suggestions from the error message to work either. Luckily pandas has a special isin function for this use case which we can call like this:

>>> df[df["Foo"].isin(odds)]
   Foo
0    1
2    3
4    5
6    7
8    9

Much better!
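The root cause is that `in` (via `__nonzero__` in Python 2, `__bool__` in Python 3) forces the whole Series down to a single boolean, while comparisons like `df["Foo"] < 7` and `isin` are elementwise. A toy class makes the distinction concrete; it mimics the relevant pandas behaviour but is not pandas itself:

```python
class ToySeries:
    """Elementwise comparisons and isin, but no single truth value."""

    def __init__(self, values):
        self.values = list(values)

    def __lt__(self, other):
        # Elementwise comparison returns another "series" of booleans.
        return ToySeries(v < other for v in self.values)

    def __bool__(self):
        # This is what `in` and `if` trip over.
        raise ValueError("The truth value of a Series is ambiguous.")

    def isin(self, collection):
        # Elementwise membership test -- what `x in series` cannot express.
        return ToySeries(v in collection for v in self.values)


s = ToySeries(range(1, 11))
odds = [1, 3, 5, 7, 9]
mask = s.isin(odds)

raised = False
try:
    bool(s < 7)  # forcing a boolean series to one truth value fails
except ValueError:
    raised = True
```

`isin` produces an elementwise boolean mask, while asking for the truth value of a boolean series raises exactly the error seen in the traceback above.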

Categories: Blogs
