
Feed aggregator

Measuring Benefits of Software Development: Traditional vs. Agile

Scott W. Ambler - Agility@Scale - Mon, 04/14/2014 - 20:09

I was recently involved in an online discussion about how to calculate the benefits realized by software development teams.  As with most online discussions, it quickly devolved into camps and the conversation didn't progress much after that.  In this case there was what I would characterize as a traditional project camp and a much smaller agile/lean product camp.  Although each camp made interesting points, what struck me most was the wide cultural and experience gap between the people involved.

The following diagram summarizes the main viewpoints and the differences between them.  The traditional project camp promoted a strategy where the potential return on investment (ROI) for a project would be calculated, a decision would be made to finance the project based (partly) on that ROI, the project would run, the solution would be delivered into production, and then at some point in the future the actual ROI would be calculated.  Everyone was a bit vague on how the actual ROI would be calculated, but they agreed it could be done, although the approach would be driven by the context of the situation.  Of course several people pointed out that it rarely works that way.  Even if the potential ROI was initially calculated, it would likely be based on wishful thinking, and it would be incredibly unlikely that the actual ROI would be calculated once the solution was in production.  Few organizations are actually interested in investing the time to do so, and some would even be afraid to.  Hence the planned and actual versions of the traditional strategy in the diagram.


The agile/lean camp had a very different vision.  Instead of investing in an upfront ROI calculation, which would have required a fair bit of upfront requirements and architectural modelling to get the information, the idea was to focus on a single feature or small change.  If the change made sense to the stakeholders it would be implemented, typically in days or weeks instead of months, and put into production quickly.  If your application is properly instrumented, which is becoming more and more common given the growing adoption of DevOps strategies, you can easily determine whether the new feature or change adds real value.
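What "properly instrumented" can mean at its simplest is sketched below. The feature names and the in-process counter are invented for illustration; a real system would emit these events to a metrics backend rather than hold them in memory:

```python
from collections import Counter
from datetime import datetime, timezone

class FeatureMetrics:
    """Minimal in-process instrumentation: count usage events per feature
    so a newly shipped change can be compared against a baseline."""

    def __init__(self):
        self.counts = Counter()
        self.first_seen = {}

    def record(self, feature: str) -> None:
        """Call this from the code path that serves the feature."""
        self.counts[feature] += 1
        self.first_seen.setdefault(feature, datetime.now(timezone.utc))

    def usage(self, feature: str) -> int:
        return self.counts[feature]

metrics = FeatureMetrics()
for _ in range(3):
    metrics.record("one-click-reorder")  # hypothetical new feature
metrics.record("search")                 # existing baseline feature

print(metrics.usage("one-click-reorder"))  # 3
```

Because the counts come straight from the code paths users exercise, the question "did anyone actually use the change we shipped last week?" is answered by the data itself rather than by an upfront estimate.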

Cultural differences get in your way  

The traditional project camp certainly believed in their process.  In theory it sounded good, and I’m sure you could make it work, but in practice it was very fragile.  The long feedback cycle, potentially months if not years, pretty much doomed the traditional approach to measuring benefits of software development to failure. The initial ROI guesstimate was often a work of fiction and rarely would it be compared to actuals.  The cultural belief in bureaucracy motivated the traditional project camp to ignore the obvious challenges with their chosen approach.

The agile/lean camp also believed in their strategy.  In theory it works very well, and more and more organizations are clearly pulling this off in practice, but it does require great discipline and investment in your environment.  In particular, you need investment in modern development practices such as continuous integration (CI), continuous deployment (CD), and instrumented solutions (all important aspects of a disciplined agile DevOps strategy).  These are very good things to do anyway, it just so happens that they have an interesting side effect of making it easy (and inexpensive) to measure the actual benefits of changes to your software-based solutions.  The cultural belief in short feedback cycles, in taking a series of smaller steps instead of one large one, and in their ability to automate some potentially complex processes motivated the agile/lean camp to see the traditional camp as hopeless and part of the problem.  

Several people in the traditional project camp struggled to understand the agile/lean approach, which is certainly understandable given how different that vision is compared with traditional software development environments.  Sadly a few of the traditionalists chose to malign the agile/lean strategy instead of respectfully considering it.  They missed an excellent opportunity to learn and potentially improve their game.  Similarly the agilists started to tune out, dropping out of the conversation and forgoing the opportunity to help others see their point of view.  In short, each camp suffered from cultural challenges that prevented them from coherently discussing how to measure the benefits of software development efforts.

How Should You Measure the Effectiveness of Software Development?

Your measurement strategy should meet the following criteria:

  1. Measurements should be actioned.  Both the traditional and agile/lean strategies described above meet this criterion in theory.  However, because few organizations appear willing to calculate ROI after deployment as the traditional approach recommends, in practice the traditional strategy rarely meets this criterion.  It is important to note that I used the word actioned, not actionable.  Key difference.
  2. There must be positive value.  The total cost of taking a measure must be less than the total value of the improvement in decision making you gain.  I think that the traditional strategy falls down dramatically here, which is likely why most organizations don't actually follow it in practice.  The agile/lean strategy can also fall down with respect to this criterion, but it is much less likely to because the feedback cycle between creating the feature and measuring it is so short (and thus it is easier to attribute the change in results to the actual change itself).
  3. The measures must reflect the situation you face.  There are many things you can measure that can give you insight into the ROI of your software development efforts: changes in sales levels, how often a given screen or function is invoked, savings incurred from a new way of working, improved timeliness of information (and thereby potentially better decision making), customer retention, customer return rate, and many others.  What questions are important to you right now?  What measures can help provide insight into those questions?  It depends on the situation that you face; there are no hard and fast rules.  For a better understanding of complexity factors faced by teams, see The Software Development Context Framework.
  4. The measures should be difficult to game.  Once again, traditional falls down here.  ROI estimates are notoriously flaky because they require people to make guesses about costs, revenues, time frames, and other issues.  The measurements coming out of your instrumented applications are very difficult to game because they're being generated as a result of doing your day-to-day business.
  5. The strategy must be compatible with your organization.  Once again, this is a “it depends” type of situation.  Can you imagine trying to convince an agile team to adopt the traditional strategy, or vice versa?  Yes, you can choose to improve over time.  

Not surprisingly, I put a lot more faith in the agile/lean approach to measuring value.  Having said that, I do respect the traditional strategy as there are some situations where it may in fact work. Just not as many as traditional protagonists may believe.

Categories: Blogs, Companies

Validation and Uncertainty

George Dinwiddie’s blog - Mon, 04/14/2014 - 16:54

What an extraordinary conversation I had recently on Twitter. It started with Neil Killick's statement that we should not consider our stories truly done until validated by actual use. This is a lovely thing, if we can manage it. While I've not set such a bold declaration of "done," I've certainly advocated for testing the benefit of what we build in actual use. Deriving user stories from a hypothesis of what users will do, and then measuring actual use against that hypothesis, is a very powerful means of producing the right thing—more powerful than anything a Product Owner or Marketing Manager could otherwise dream up.

While I often recommend such an approach as part of “Lean Startup in the Enterprise,” when I hear someone else say it, it’s easier to think of potential problems. Paul Boos says it’s “balanced insight.” I fear it’s me being contrary, but I do like to examine multiple sides of a question when I can. In any event, such conversations help me to think deeper about an issue.

The first situation I considered was the solar calibrator for Landsat VII. When you only get one launch date, it’s a bit hard to work incrementally, validating the value with each increment. Instead, we must validate as best we can prior to our single irrevocable commitment. This involves quite a bit of inference from previous satellites, and from phenomena we can measure on earth. We must also predict future conditions as best we can, so that we can plan for both expected conditions and anomalies that we can envision. This is an extreme situation, and it’s quite possible we’ll utterly fail.

So, the conversation turned to ecommerce systems. Surely we can measure user behavior and know what effect a change makes. We can measure the behavior, all right, but even without making any change, we may notice a lot of variance in that behavior. If a variance of 5% to 10% can be expected week-over-week in some measured behavior, then a change that might produce a 1% to 2% change is very hard to detect.

The obvious answer is to maintain a control group. If we present some users with an unchanged system and others with the change, then we can measure the normal variation in the control group and the normal variation plus the variation specific to the change in the experimental group. Given a sufficient number of users, the normal variation should be equal between the two groups.

Is, however, the specific variation a lasting phenomenon in response to the essence of the change, or is it transient behavior triggered by the fact that there was a change? Even though the Hawthorne Effect is a legend based on poor experimental design, novelty does attract interest.

Another difficulty is that the people whose behavior we are measuring are individuals, and vary considerably from one another. When we measure in aggregate, we are averaging across those individuals, and smoothing the rough bumps off of our data. Unfortunately, much of the data is in those rough bumps. What motivates one person to purchase our product may drive another one off. Can’t we segregate our customers into different segments so that we can at least average over smaller, more-targeted groups? If we’re measuring behavior of customers who are logged into our site and whose background and behavioral history we know, then we certainly can. If they are anonymous users, then no, we can’t. Not unless we can spy on them by tracking them around the web through widespread cookies, surreptitious javascript, or other means.

When we gain in one respect, we lose in another. With either of these methods, there may be errors in what we think we know about them. And any time we segment, we’re reducing the quantity in a study group, increasing the chance that our measurements may be due to random chance rather than the change we are studying.

The use of statistics can help us estimate whether or not a variation is specific or random. It can tell us whether the change is statistically significant, and what is our confidence level that it’s not random. It cannot, however, assure us of our conclusions.
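As a minimal sketch of what such a statistical check looks like for the control-group setup described above, here is a two-proportion z-test using only the standard library (the sample sizes and conversion counts are invented for illustration):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference in conversion rate between
    a control group (a) and an experimental group (b) bigger than normal
    variation alone would explain?  Returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approx.
    return z, p_value

# 5.0% vs 5.5% conversion: a half-point lift from the change under test
_, p_small = two_proportion_z(500, 10_000, 550, 10_000)
_, p_large = two_proportion_z(5_000, 100_000, 5_500, 100_000)
print(p_small > 0.05)  # True: not detectable at 10,000 users per group
print(p_large < 0.05)  # True: detectable at 100,000 users per group
```

Note how the same half-point lift is invisible at one sample size and clear at another, which is exactly the point: the test quantifies confidence, but it cannot rescue a study group that is too small, and it says nothing about causation.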

Statistics also can only alert us to correlation, not to causation. The behavior change we notice may be due to an overlooked difference that accompanies the difference we’re attempting to measure. If we’re not very careful, then there may be systemic bias that we don’t notice, and we make the wrong presumption from the evidence.

In the world of commerce, we rarely have the opportunity to repeat our experiments to see if they’re repeatable. We want to keep progressing, and so we lose the baseline conditions that would allow us to repeat. Also, our system is just a small part of the daily life of a potential customer. It’s just a small part of the larger system made up of all the other small systems they use. Those other systems also have the capability to change the user’s behavior with our system. Perhaps they change how the user perceives our system can be operated, or sets a different standard for what is considered acceptable.

The world is a bit unpredictable. In the end, we seem to be measuring what happened against what we estimate would have happened, otherwise. Sometimes it may seem very clear to us; sometimes murky. Always we have the possibility of fooling ourselves.

Categories: Blogs

Task Boards: Track the Details

LeanKit Task Boards help you break down and manage the finer details of your work. Seeing each granular task as a part of your visual flow of work promotes visibility for the whole team.   Group and Ungroup Work On the Fly For those, “Oops, those should’ve been grouped together…” moments, cards include a ‘Move to Taskboard’ […]

The post Task Boards: Track the Details appeared first on Blog | LeanKit.

Categories: Companies

How To Solve Problems So They Stay Solved

Why is it that as we approach our goals they seem to be more difficult to achieve? Why is it that things progressing so well seem sooner or later to turn sour? And when things turn sour, how is it that they seem to do so in such a rapid fashion? Why is it that […]

The post How To Solve Problems So They Stay Solved appeared first on Blog | LeanKit.

Categories: Companies

Leveraging the Team for Good PDPs (if you need them)

Learn more about our Scrum and Agile training sessions on WorldMindware.com

I am currently working with a relatively new Scrum team (5 Sprints/weeks young) that needs to rewrite their Personal Development Plans (PDPs) in order to better support Scrum and the team.  PDPs are still the deliverables of individuals required by the organization and likely will be for some time.  The organization is still in the early days of Agile adoption (pilots) and they are large.  So, instead of giving them a sermon on why metrics for managed individuals are bad, I am going to help them take the step towards Agility that they are ready to take.

The Plan:

  • Facilitate a team workshop to create an initial Skills Matrix;
  • Work as a team to develop a PDP for each individual team member that directly supports the team’s high-performance Goal (already established)—
    • in other words, when considering an appropriate PDP per individual, the team will start with the team’s performance Goal and build the individual PDP from there;
  • Develop a plan as a team for how the team will support each team member to fulfill his/her individual PDP—
    • in other words, all individual PDPs will be part of the team’s plan for team improvement;
  • Publish the plan internally (share with management).

I’ll follow up with another post to let everyone know how it goes.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information.
Categories: Blogs

Test Automation: Insurance for Your Software!

Scrum 4 You - Mon, 04/14/2014 - 07:25

The starting point for this post was a discussion about test automation and the question of how its advantages and benefits can be communicated simply. Which example from everyday life best compares to test automation for software? The comparison that came to mind: isn't test automation really an insurance policy for the software you develop?

But first things first. By definition, insurance exists to cover the risk of a loss event occurring [1]. Since software today is essential to the value-creation processes of many companies, or even constitutes the value creation itself, including it in the company's risk management is not just sensible but necessary. As part of risk management, possible loss events must be defined, then classified and prioritized by their probability of occurrence and their severity.

Imagine that the company in question earns 50% of its revenue through an online shop, and that a software defect prevents any orders from being placed for several hours. In real life this is a worst-case scenario that strikes even renowned shop operators again and again.

Risk-management theory describes several ways of dealing with a risk: it can be accepted, minimized, transferred to a third party, or avoided altogether. [2]

With software, many companies unknowingly choose the first option, because they have never considered these loss events, let alone quantified them. Only when the damage occurs does a company become aware of the risk it was exposed to. Yet it would be so easy to minimize the risk to an acceptable level with an insurance policy. That insurance is test automation. With countless automated tests, all important functionality of the software is regularly checked for correct operation. These tests thus provide regular feedback on the current risk posed by the software. With statistics and reporting, this data can then be integrated into company-wide risk monitoring.

Test automation thus not only sustainably safeguards the quality and maintainability of the software, it also provides the reassuring feeling of having your own software under control. So let us start coding test driven!
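As a tiny, invented illustration of the insurance idea: an automated regression test over the shop's checkout arithmetic that fails the moment a defect would halt orders (the function names and VAT rate are made up for the example):

```python
# A tiny "insurance policy": an automated regression test that fails
# the moment the shop's order path breaks.  Checkout logic and names
# are invented for the example.

def order_total(items, vat_rate=0.19):
    """Sum (quantity, unit price) line items and add VAT.
    A bug here would halt every order in the shop."""
    net = sum(qty * price for qty, price in items)
    return round(net * (1 + vat_rate), 2)

def test_order_total():
    # 2 x 10.00 + 1 x 5.00 = 25.00 net, 29.75 gross at 19% VAT
    assert order_total([(2, 10.0), (1, 5.0)]) == 29.75
    # zero VAT leaves the net amount unchanged
    assert order_total([(1, 100.0)], vat_rate=0.0) == 100.0

test_order_total()
print("regression suite passed")
```

Run automatically on every change, such a suite is exactly the regular feedback on current risk that the post describes: the premium is the effort of writing the tests, the payout is catching the defect before it reaches the shop.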

[1]  Wiktionary: http://de.wiktionary.org/wiki/Versicherung
[2]  Ahrendts, F.; Martin, A.: IT-Risikomanagement leben! Wirkungsvolle Umsetzung für Projekte in der Softwareentwicklung. Berlin: Springer Verlag, 2008.

Related posts:

  1. Test Driven Development (TDD) und Scrum | Teil 1
  2. Der ScrumMaster ist eine Versicherung (Mike Beedle)
  3. Prioritäten | Product Owners Tools

Categories: Blogs

Why Have a Strategy?

J.D. Meier's Blog - Mon, 04/14/2014 - 06:50

To be able to change it.

Brilliant pithy advice from Professor Jason Davis’ class, Technology Strategy (MIT OpenCourseWare).

Categories: Blogs

Agile Lagging to Leading Metric Path

Even in an Agile environment there is a benefit to applying measures to understand progress.  It can be tempting to apply to Agile projects and initiatives the same iron triangle metrics (based on cost, schedule, and scope) that may have been used in a more traditional mindset.  Instead, I suggest removing all of those metrics and starting with a clean slate.  On the clean slate, consider your outcomes.

An Agile mindset asks that you consider outcome instead of output as a measure of success.  This means we should first start with understanding our desired outcomes for an initiative or project.  Within a business context of building products, one measure of success is an increase in revenue. Having a customer revenue metric helps you understand whether the products being built are increasing revenue upon release. While capturing revenue is a good starting point, it is a “lagging” indicator, meaning you don’t recognize the evidence of revenue movement until after the release is in production and has been in the marketplace for a period of time.
To supplement lagging measures, you need corresponding leading measures and indicators that provide you with visibility during development to gauge if you are moving the product into a position of increased revenue. I call this framework the Lagging to Leading Metric Path.  This visibility is important because it provides input for making decisions as you move forward. Making the right decision leads to improved results. As you consider measures, think about how they help you gain visibility and information for decisions in building a product that helps you lead toward an increase in revenue.
For a hoped-for increase in customer revenue, what leading metrics can we put in place to ensure we are moving in the right direction?  Let’s say in this case that increased revenue is the lagging metric, based on expected customer sales.  Examples of leading measures or indicators toward this lagging metric of increased customer revenue include:
  • Customers attending Sprint Review: a leading metric capturing how many customers actually attend the sprint review and how much feedback they give. This indicates engagement and interest. 
  • Customer satisfaction from Sprint Review: a leading metric capturing customer satisfaction with the functionality shown in the sprint review.  This indicates levels of satisfaction as the product is being built. 
  • Customer satisfaction of product usage: an indicator from the most recent release highlighting the level of satisfaction with usage of the current product, including commentary.

When applying Agile to product development, the outcomes that matter most are often represented by lagging metrics.  Therefore you will need leading indicators to ensure you are moving in the right direction, to provide visibility, and to help you with decision-making.  Within your own context, consider constructing a lagging to leading metric path so that you know you are moving in the right direction during your Agile journey.
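The numbers below are invented, but they sketch how such a path might be tracked sprint over sprint, with the leading indicators checked for an improving trend long before the lagging revenue figure arrives:

```python
# Sketch of a lagging-to-leading metric path (all figures invented):
# per-sprint leading indicators, with revenue as the lagging metric
# that only shows up after release.
sprints = [
    {"sprint": 1, "review_attendees": 3, "satisfaction": 3.1},
    {"sprint": 2, "review_attendees": 6, "satisfaction": 3.8},
    {"sprint": 3, "review_attendees": 9, "satisfaction": 4.4},
]

def trending_up(values):
    """Leading signal: is each successive measurement better than the last?"""
    return all(a < b for a, b in zip(values, values[1:]))

attendance = [s["review_attendees"] for s in sprints]
satisfaction = [s["satisfaction"] for s in sprints]

# Both leading indicators improving suggests (but does not prove)
# that the lagging revenue metric will move in the right direction.
print(trending_up(attendance) and trending_up(satisfaction))  # True
```

A flat or falling trend here is the early warning that lets you change course during development, which is precisely the decision-making visibility the lagging metric alone cannot give.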

------------------------------------------------------------------------
Note: the lagging to leading metric path really isn't specific to Agile; I would suggest applying it to any initiative or project, regardless of the mindset, process, method, or practice used to deliver value.

To read more about establishing an Agile Lagging to Leading Metric Path and Agile Measures of Success, consider reading Chapter 14 of Being Agile.
Categories: Blogs

Neo4j 2.0.0: Query not prepared correctly / Type mismatch: expected Map

Mark Needham - Sun, 04/13/2014 - 19:40

I was playing around with Neo4j’s Cypher last weekend and found myself accidentally running some queries against an earlier version of the Neo4j 2.0 series (2.0.0).

My first query started with a map and I wanted to create a person from an identifier inside the map:

WITH {person: {id: 1}} AS params
MERGE (p:Person {id: params.person.id})
RETURN p

When I ran the query I got this error:

==> SyntaxException: Type mismatch: expected Map but was Boolean, Number, String or Collection<Any> (line 1, column 62)
==> "WITH {person: {id: 1}} AS params MERGE (p:Person {id: params.person.id}) RETURN p"

If we try the same query in 2.0.1 it works as we’d expect:

==> +---------------+
==> | p             |
==> +---------------+
==> | Node[1]{id:1} |
==> +---------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 1
==> Labels added: 1
==> 47 ms

My next query was the following which links topics of interest to a person:

WITH {topics: [{name: "Java"}, {name: "Neo4j"}]} AS params
MERGE (p:Person {id: 2})
FOREACH(t IN params.topics | 
  MERGE (topic:Topic {name: t.name})
  MERGE (p)-[:INTERESTED_IN]->(topic)
)
RETURN p

In 2.0.0 that query fails like so:

==> InternalException: Query not prepared correctly!

but if we try it in 2.0.1 we’ll see that it works as well:

==> +---------------+
==> | p             |
==> +---------------+
==> | Node[4]{id:2} |
==> +---------------+
==> 1 row
==> Nodes created: 1
==> Relationships created: 2
==> Properties set: 1
==> Labels added: 1
==> 53 ms

So if you’re seeing either of those errors then get yourself upgraded to 2.0.1 as well!

Categories: Blogs

Quarterly Product Update Webinar

Join us for a roundup of LeanKit’s newest and upcoming features on Thursday, May 1 at 1pm ET.  Jon Terry, LeanKit’s COO, will give a live update and answer questions. During this webinar, you’ll learn how to:  Visualize your date-driven work with the new Calendar View Get the most out of recent UI enhancements Stay connected with […]

The post Quarterly Product Update Webinar appeared first on Blog | LeanKit.

Categories: Companies

Effective Agile at Scale Webinar Series

Net Objectives gets straight to the point when it comes to describing Agile, reminding us that it’s not about iterations, teams, co-location or any of the ways it may be implemented. Instead, Agile is about the fast delivery of business value in a predictable, repeatable manner. With this perspective in mind, join Al Shalloway, CEO of […]

The post Effective Agile at Scale Webinar Series appeared first on Blog | LeanKit.

Categories: Companies

Social Intelligence and 95 Articles to Give You an Unfair Advantage

J.D. Meier's Blog - Sun, 04/13/2014 - 15:56

Social Intelligence is hot.

I added a new category at Sources of Insight to put the power of Social Intelligence at your fingertips:

Social Intelligence

(Note that you can get to Social Intelligence from the menu under “More Topics …”)

I wanted a simple category to capture and consolidate the wealth of insights around interpersonal communication, relationships, conflict, influence, negotiation, and more.   There are 95 articles in this category, and growing, and it includes everything from forging friendships to dealing with people you can’t stand, to building better relationships with your boss.

According to Wikipedia, “Social intelligence is the capacity to effectively negotiate complex social relationships and environments.”

There's a great book on Social Intelligence by Daniel Goleman:

Social Intelligence, The New Science of Human Relationships

According to Goleman, “We are constantly engaged in a ‘neural ballet’ that connects our brain to the brains of those around us.”

Goleman says:

“Our reactions to others, and theirs to us, have a far-reaching biological impact, sending out cascades of hormones that regulate everything from our hearts to our immune systems, making good relationships act like vitamins—and bad relationships like poisons. We can ‘catch’ other people’s emotions the way we catch a cold, and the consequences of isolation or relentless social stress can be life-shortening. Goleman explains the surprising accuracy of first impressions, the basis of charisma and emotional power, the complexity of sexual attraction, and how we detect lies. He describes the ‘dark side’ of social intelligence, from narcissism to Machiavellianism and psychopathy. He also reveals our astonishing capacity for ‘mindsight,’ as well as the tragedy of those, like autistic children, whose mindsight is impaired.”

According to the Leadership Lab for Corporate Social Innovation by Dr. Claus Otto Scharmer (MIT OpenCourseWare), there is a relational shift:

The Rise of the Network Society

And, of course, Social is taking off as a hot technology in the Enterprise arena.  It’s changing the game, and changing how people innovate, communicate, and collaborate.

Here is a sampling of some of my Social Intelligence articles to get you started:

5 Conversations to Have with Your Boss
6 Styles Under Stress
10 Types of Difficult People
Antiheckler Technique
Ask, Mirror, Paraphrase and Prime
Cooperative Controversy Over Competitive Controversy
Coping with Power-Clutchers, Paranoids and Perfectionists
Dealing with People You Can't Stand
Expectation Management
How To Consistently Build a Winning Team
How To Deal With Criticism
How Do You Choose a Friend?
How To Repair a Broken Work Relationship
Mutual Purpose
Superordinate Goals
The Lens of Human Understanding
The Politically Competent Leader, The Political Analyst, and the Consensus Builder
Work on Me First

If you really want to dive in here, you can browse the full collection at:

Social Intelligence

Enjoy, and may the power of Social Intelligence be with you.

Categories: Blogs

Scrum, XP, SAFe, Kanban: Which Method Is Suitable for My Organization?

Agile World - Venkatesh Krishnamurthy - Sun, 04/13/2014 - 14:12

I have recently seen the SAFe framework criticized by the Scrum founder as well as the Kanban founder (see "unSAFe at Any Speed" and "Kanban -- The Anti-SAFe for Almost a Decade Already"). Method wars are not new, however, and could go on forever. In the face of these discussions, it is important to remember the real intent behind Agile methods.

In this recently published Cutter article, I discuss the importance of understanding Agile as a tool rather than as a goal.  I also propose some ideas from complexity theory and the Cynefin framework to substantiate the need for parallel, safe-to-fail experiments rather than handcuffing organizations with a single framework, method, or process.

Read the complete article on Cutter


Photo courtesy: Flickr

Categories: Blogs

Question: Advice to a beginning ScrumMaster

Agile & Business - Joseph Little - Sun, 04/13/2014 - 00:31

Virginia asks: “I am a beginning ScrumMaster in a tough situation.  I have some ideas, but I am unsure what to do.  And unsure what to do first.  What can you suggest?”

Answer:  I think this is a common problem.

But, in reality, there is no end of things that a ScrumMaster can do to help the team.

First, take what you already know about Scrum, and remind the Team what Scrum is, and why: what the values and principles are.

This can be done in a thousand ways.  One example: Post the Agile Manifesto and the Agile Principles in the Team room.  After each Daily Scrum, ask each member of the Team to take two minutes to explain something about one line or one item from the list.

You will be impressed how well they explain the agile ideas to each other.  And then, you can add an additional insight. Maybe something you think they could improve on.

Read or re-read Agile Project Management with Scrum by Schwaber to get lots of stories and ideas about how Scrum should work.

Second, get a better list of impediments.  Ok, let’s be honest — START a list, a public list.  Yes, of course collect the ones they tell you in the Daily Scrum and in the Retrospective.  Add the ones that you see.  Read this blog, and steal some impediments that apply to you and your team.

You want a list of the top 20 things to improve, broken down into actionable things, where you could see, smell, notice improvement in 2 weeks or less.  Yes, you often start with some epic impediments…but just start there…

An impediment is anything, ANYTHING, that is slowing down the Team. Example: Anything that stops a story, slows down the Team. People issues, technical issues, organizational issues, the weather, I need coffee, I need a dentist, we need a different culture, whatever. Whatever.

Ok, we have to discuss two things that happen universally in the Daily Scrum, at least at first.  They don’t divide the tasks into small pieces, and they talk vaguely about what they worked on, and do not focus on what was DONE (completed).  The tasks must be mostly in the 2-4 hour range.  And they must say whether or not it was completed. If a 4 hour task is not completed in one day, clearly there is some kind of ‘impediment’ (eg, I cannot estimate very well).

Then, they must give their biggest impediment. (What slowed them down the most.)  Time itself is not an impediment.

It might be: “I don’t know this area that well.”  Or: “The servers were down.”  Or: “Well, if the tests were automated, then I could have found the bad code faster.”  Or lots of other things.  Saying: “None” is not an option.  Implying that ‘things are so good around here, that there is no way it could possibly get better’ is also not an option.  Things can always be better.

Also, you must give them a challenge.  Tell them: “We have to double the velocity, in a way that we believe, and not by working any harder.  So, what do we have to change to do that?  And imagine that anything could change. Anything. And that the managers will approve anything, if we can make a good case for it.”

For the Retrospective, see also Agile Retrospectives by Derby and Larsen for more ideas to uncover the real impediments. The team has forgotten lots of impediments because they have become used to them, or they can't imagine that things could be changed.

Third, aggressively attack the impediments.  Every day. Every hour.  Take the top one, and go after it. If you can’t fix it yourself, that is fine.  But get someone to.

I do not mean go crazy. Use some common sense. If the cost is greater than the benefit, then don't solve it that way.  Sometimes you can only mitigate the impact. Etc., etc.  But still, every day and every hour, attack the top impediment.

Fourth, tell them.  Tell the right people what you will do, what you are doing, and what you have done.

Mostly, you tell the Team.

How?  In the Daily Scrum (you answer the questions, and tell them).  In the Retrospective.  And in other ways that make sense.

Why?  Well, not to brag as such.  But they need to know you care.  They want the 'fix' that you will give, are giving, did give them.  Also, once they know things are getting fixed, they will get more creative about talking about things you could fix.

Fifth, keep a list of ‘fixes installed.’  All the things you did, or got done, to make their lives better.

Why? So, when you are discouraged, you can look at the list and get some encouragement.  So, when they wonder why you are not doing 'real work', you can remind them of your value.  So you can show the managers why you deserve a raise.

Track the list, and make a reasonable guess as to how much of the team's improved velocity is attributable to the fixed or mitigated impediments. Typically 100% is effectively attributable to you.  Yes, Scrum itself did some (but you still take credit). Yes, the Team did some things, but honestly it probably would have done very little without you, or would have done it very much later.  It does not matter that you did not do it 'with your own hands'.  You made it happen; you were the key.  It does not matter that some improvements cost 'extra' money. The benefits were huge, and mostly would never have happened without you.

Do not slack for even one day.

The Team and the customers deserve everything you have to give.  And you too will be rewarded in ways hard to explain but very clear…by all the good things you make happen.

Sixth, to help you become a better ScrumMaster faster, start a ScrumMasters club with other SMs in your area.

Learn from each other. Maybe have a brown-bag lunch once a week, where you present ideas and experiences to each other.

That’s a start. There are many places and ways to learn.  As you act, you will discover more ways to learn, and more things to learn about.

Does that help?

 

Categories: Blogs

Five links on Software Design

The real lessons in software development come from production. The more frequently you can deploy – more feedback you can generate

@KentBeck


I don't claim these articles are the best on this subject, but I have enjoyed reading them and they have made my knowledge grow. I recommend you have a look if you are interested in the subject. Happy reading!

Follow me on twitter @hlarsson for more links on a daily basis. You may also subscribe to weekly newsletter of my links. Thanks to writers of these articles and Kirill Klimov for collecting the agile quotes: http://agilequote.info/. Please follow @agilequote for more quotes.


Categories: Blogs

The Industry Life Cycle

J.D. Meier's Blog - Sat, 04/12/2014 - 21:27

I’m a fan of simple models that help you see things you might otherwise miss, or that help explain how things work, or that simply show you a good lens for looking at the world around you.

Here’s a simple Industry Life Cycle model that I found in Professor Jason Davis’ class, Technology Strategy (MIT OpenCourseWare).

image

It’s a simple backdrop and that’s good.  It’s good because there is a lot of complexity in the transitions, and there are many big ideas that build on top of this simple frame.

Sometimes the most important thing to do with a model is to use it as a map.

What stage is your industry in?

Categories: Blogs

Security Update on CVE-2014-0160 Heartbleed

sprint.ly - scrum software - Sat, 04/12/2014 - 05:30

OpenSSL, the open source cryptographic library, reported the Heartbleed vulnerability on April 7, 2014. The vulnerability allows stealing the information protected, under normal conditions, by SSL/TLS encryption.

We have seen no evidence that this vulnerability was used against Sprintly, but we have taken all necessary precautions to ensure the continued safety of your information.

Actions We Have Taken 

  1. Within hours of the official report from OpenSSL, we patched and verified all our servers for CVE-2014-0160. 
  2. We use Amazon’s ELB product for load balancing. They patched our region a few hours before we patched our servers. 
  3. We have re-issued new SSL certificates to all our servers. 
  4. We have rotated all of our SSH, Chef, and AWS API keys throughout our infrastructure. 
  5. We have rotated all 3rd party API keys we use to provide services, such as Transloadit (file processing) and Postmark (email).
  6. We have set up our Chef nodes to re-key themselves every 24 hours. We suggest you do the same.
  7. Friday night we flushed all active sessions. This means you will have to log into Sprintly again when you get back to work Monday. Apologies in advance for any inconvenience.

Additional Precautions 

You may consider taking these additional precautionary measures on your Sprintly account: 

  1. Change your Sprintly password 
  2. Reset your Sprintly API key 

Both settings may be found in the Profile menu under your Gravatar. 

Again, we have had no indication that this vulnerability was used against Sprintly but do feel that it is a good habit to keep your passwords and security keys regularly updated. 

If you have any questions or concerns, please feel free to contact us at support@sprint.ly.

Categories: Companies

LeanKit Services Not Affected by Heartbleed

You may have heard recent reports of a vulnerability, commonly known as “Heartbleed,” that affects the popular open-source library OpenSSL. We have confirmed that the LeanKit application and our internal supporting services are not affected by this vulnerability. This bug, officially referenced as CVE-2014-0160, is not an issue with the design of SSL but is […]

The post LeanKit Services Not Affected by Heartbleed appeared first on Blog | LeanKit.

Categories: Companies

Feature Cards

Agile Game Development - Fri, 04/11/2014 - 17:33
My previous article described Feature Boards, a variation of task boards that some agile game teams find more useful.  The main difference between a by-the-book task board and a Feature Board is that task boards contain task cards, which are markers for individual tasks the team foresees will be needed to complete a set of features,  while a Feature Board contains Feature Cards, which are markers for the features themselves.  This article describes a typical Feature Card.


An example Feature Card
The above card shows some typical elements:

  • Card Color: Using a simple color priority scheme (e.g. red, yellow, green) can help teams decide the order of features to work on based on their priority in the backlog or any dependencies that exist between them.
  • Title:  In this case “Jump”.
  • Description: Very often written in a User Story format.  This briefly describes the feature from the point of view of the user, such as the player.
  • Acceptance criteria: While planning an iteration, teams will capture some unique aspects of the feature that will need to be tested.  For example, it might be important to the product owner that the blending of animations be smooth and without popping.  Someone, usually a tester, will write these criteria on the back of the card to remind the team to test the feature to ensure there is no popping while jumping.
  • Start and End Dates: It’s useful to capture the day when the team starts working on a feature and when it’s done.  As mentioned in the article about Feature Boards, it’s not best to work on too many features in parallel during an iteration.  Using the dates will allow a team to measure how much work in progress is occurring during the iteration.  I’ll describe how this is used in an upcoming article on Feature Board Burndown Charts.
  • Estimates: As a part of iteration planning, each discipline will discuss the plan for implementing their portion of the feature and may want to capture some time estimates.  Some teams use hours, some days, while some teams will skip estimates altogether.  Capturing discipline estimates rather than individual estimates increases collaboration among members of the same discipline.  These estimates are used to track whether an individual discipline is at capacity or not while features are planned.  For example, if a team is finding that the sum of all animation work across all features adds up to more time than their one full-time animator can handle, they’ll find another animator to help them or they’ll change which features they commit to.
  • Tasks: Tasks captured during planning or during the iteration can be annotated on the card with smaller post-its or, by using larger cards, be written on the cards themselves.  If you use post-its, make sure you use the brand name ones with the good glue!
  • Card Size: Feature cards are usually written on a 3x5 or 4x6 index card.  
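To make these elements concrete, here is a minimal sketch of a Feature Card as a data structure. The field names, the sample dates, and the helper method are my own illustration, not something prescribed by the article:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeatureCard:
    color: str                  # priority color, e.g. "red", "yellow", "green"
    title: str                  # e.g. "Jump"
    description: str            # often written as a user story
    acceptance_criteria: list = field(default_factory=list)
    start: date = None          # when the team started the feature
    end: date = None            # when the feature was done
    estimates: dict = field(default_factory=dict)  # discipline -> hours
    tasks: list = field(default_factory=list)

    def days_in_progress(self):
        """Elapsed days between start and end, useful for WIP tracking."""
        if self.start and self.end:
            return (self.end - self.start).days
        return None

jump = FeatureCard(
    color="green",
    title="Jump",
    description="As a player, I can jump so that I can cross gaps.",
    acceptance_criteria=["Animation blending is smooth, with no popping"],
    start=date(2014, 4, 7),
    end=date(2014, 4, 10),
    estimates={"animation": 8, "code": 12},
)
print(jump.days_in_progress())       # 3
print(sum(jump.estimates.values()))  # 20 hours across disciplines
```

Tracking estimates per discipline, as in the `estimates` dictionary, is what lets a team sum animation hours across all cards and spot an over-committed discipline.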

Variations

The sky is the limit on how to format these cards.  For example, some teams record the story points or t-shirt sizes on the cards and don’t use time estimates.  Some teams will break down larger features into sub-features and track the completion of those.  The key is to collect meaningful metrics that are part of an ecosystem to help the team better plan, execute and deliver features at ideal quality.

Next

We’ll look at an optional burndown or throughput tracking chart.


Categories: Blogs

Is Your Sprint Pipeline Running Well?

Agile Management Blog - VersionOne - Fri, 04/11/2014 - 16:55

Most agile teams, when starting out new on their agile transformation road, conduct all sprint related activities in the same timebox, i.e., sprint planning, sprint execution, sprint review and sprint retrospective.  This is illustrated in Figure 1, where each sprint (from Sprint 0 through N+1, represented by each row) is mapped onto a single sprint timebox (Timebox 0 through N+1, represented by each column), and successive sprints are executed in successive timeboxes in a sequential order (Sprint 0 in Timebox 0, Sprint 1 in Timebox 1, and so on).

Figure1

 Figure 1: Sequential execution of sprints

For example, all implementation tasks for Sprint 2 (analysis, design, code development and unit testing, acceptance test case development and acceptance testing, and defect fixing, done concurrently, not as a mini-waterfall, by a self-organized, cross-functional team) are completed in Timebox 2.  Sprint 2 planning is completed (before Sprint 2 starts) in approximately 4 hours for a two-week sprint (or 8 hours for a four-week sprint); and sprint review and retrospectives are completed in approximately 1 hour each for a two-week sprint (or about 2 hours each for a four-week sprint).

Each light pink vertical thin box separator between two successive sprint timeboxes represents a small timebox to complete all these ceremonies, i.e., sprint review and retrospective for the previous sprint as well as sprint planning for the next sprint.   Thus all these tasks for successive sprints are carried out in a sequence of successive timeboxes.  This is a simple and straightforward model where work goes in each timebox sequentially, sprint by sprint, without any overlap.

What are the advantages of and the challenges for the simple model of sequential sprint execution shown in Figure 1?

Advantages of the sequential sprint execution model:

1AS.    Simple to teach, understand and learn – and hence covered by all agile text books, training courses, and certification programs.

2AS.    Conceptually simple to execute — but fraught with challenges as explained below.

Challenges for the sequential sprint execution model:

1CS.     Analysis issues may block the team and reduce throughput: Analysis of backlog items (or stories, as they are often called) must involve intense conversation among the product owner and team members, and confirmation of each story by defining its acceptance criteria.  Stories can be properly understood only through the conversation and confirmation parts of the story metaphor of card, conversation, and confirmation.  The product owner should answer the questions and clarify the issues raised by the team members before design and code development work can start.  This goal – clarifying all stories to reach a collective, common, shared understanding – may become very challenging if all stories are analyzed during the same timebox in which they are scheduled for design, code development, testing, defect fixing and delivery.  This is so because the product owner may not know all the answers to team members’ questions without consulting users, customers and stakeholders, and performing the necessary market and customer research and analysis.  This entire analysis effort is often difficult to compress into the same timebox when design-development-testing-defect fixing work is also going on concurrently, because the actions of the many actors responsible for answering the analysis questions may be beyond the control of the product owner (for example, some stakeholders or customers may not be available to provide specific clarifications within the timebox constraints).  The net effect of this challenge is often reduced team velocity (throughput), as the team is still waiting for stories to be clarified while sprint execution is going on, and team members may even be blocked waiting for clarification.  Finally, the sprint timebox may be over before some planned stories can be completed by the team members and accepted by the product owner.

2CS.     Sprint planning may be less effective and efficient: Without a clear and shared understanding of each story by all team members and the product owner, the team will find it difficult to estimate the work effort and complete all sprint planning tasks in the time allocated for sprint planning (approx. 4 hours for two-week sprints and 8 hours for four-week sprints).  Needless to say, poorly planned sprints contribute to poor (ineffective and inefficient) sprint execution and reduced throughput.

A better model to overcome the challenges of the sequential sprint model is to execute sprints in a pipeline fashion as illustrated in Figure 2, where the Analysis task is performed one timebox ahead of the Design-Code-Test timebox for each sprint.  For example, development and analysis of the Sprint 2 backlog is performed during Timebox 1 (one timebox ahead of Timebox 2), while the actual design, code development and unit testing, acceptance test case development and testing, regression testing and defect fixing tasks for Sprint 2 are performed in Timebox 2.  This same pattern repeats for each sprint, i.e., the work for each sprint proceeds in a pipeline manner, and as a consequence, the work for each sprint is spread over two consecutive timeboxes, overlapping with the next sprint.

Figure2

Figure 2: Sprint pipeline
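The mapping in Figure 2 can be sketched as a toy model (my own illustration, not from the post): for sprint N, analysis lands in timebox N-1 and design-code-test work lands in timebox N.

```python
def timeboxes_for_sprint(n):
    """Map sprint n's activities to timeboxes in the pipelined model.

    In the sequential model everything for sprint n happens in timebox n;
    in the pipelined model, analysis moves one timebox earlier.
    """
    return {
        "analysis": n - 1,        # done ahead, while sprint n-1 executes
        "design_code_test": n,    # the sprint's own timebox
    }

# In any given timebox the team therefore touches two sprints at once:
# design-code-test for the current sprint, plus analysis for the next.
work_in_timebox_2 = [
    (sprint, activity)
    for sprint in range(5)
    for activity, tb in timeboxes_for_sprint(sprint).items()
    if tb == 2
]
print(work_in_timebox_2)  # [(2, 'design_code_test'), (3, 'analysis')]
```

This makes challenge 1CP below visible: every timebox contains work belonging to two different sprints.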

 What are the advantages of and challenges for the pipeline model of sprint execution shown in Figure 2?

Advantages of the pipelined sprint execution model: 

1AP.   Analysis work proceeds smoothly without blocking the team: As analysis work is carried out one timebox ahead of design-development-testing-defect fixing work for each sprint, the stories are substantially clearer by the time the team enters the Sprint Planning Workshop for each sprint.  Note that the product owner has a full sprint timebox (typically 2 to 4 weeks) to consult with the necessary users, customers and stakeholders in order to answer team members’ questions on stories.  And while this clarification of stories for the next sprint is going on, the team members are not held up or blocked as they are implementing the stories for the current sprint.   Effort estimation and other sprint planning work proceeds more smoothly and can be more easily completed during the Sprint Planning Workshop.

2AP.   Team experiences higher throughput with well understood stories and a well-planned sprint: Note that although each sprint’s work is spread over two timeboxes (as shown by the blue oval in Figure 2), throughput is not adversely impacted because the team still delivers working software at the end of each timebox.   In fact, as sprint planning effectiveness and efficiency go up and stories become clearer, there is a lot less uncertainty about stories as the sprint starts, which reduces impediments, improves team morale and dynamics, and improves the team’s throughput compared to sequential execution of sprints.

Challenges for the pipelined sprint execution model: 

1CP.   In each timebox, the team needs to work on two sprints: Although most of the time the team is focused on design-code development-testing-defect fixing work for the current sprint, some of its time must be set aside to analyze the backlog items prepared by the product owner for the next sprint.  This is indicated by the red oval in Figure 2, where the team is spending most of its effort on design-code development-testing-defect fixing for Sprint 2 in Timebox 2, while at the same time spending some effort in analyzing the stories for Sprint 3.   Some teams — especially when they are new to agile — may find working on two sprints in the same timebox challenging.

2CP.   Small risk of doing wasteful work: There is a small risk that a few stories analyzed one timebox ahead of their actual implementation may end up not being part of the next sprint commitment, due to a change in priorities as the next sprint planning is completed.   Some people may even object that doing analysis one timebox ahead of the actual implementation could be somewhat wasteful, and that it goes against the “just-in-time” agile work philosophy.

I will now explain how to deal with both of these challenges, 1CP and 2CP.

Solution to the challenge of working on two sprints in the same timebox (1CP): The ScrumMaster for the team can help manage work on both sprints in the same timebox (i.e., design-development-testing-defect fixing work for the current sprint as well as analysis of backlog items for the next sprint) by establishing and following a Sprint Cadence Calendar as illustrated in Figure 3.

Figure3

 Figure 3: Two-Week Sprint Cadence Calendar

The two-week Sprint Cadence Calendar has 10 work days, with each workday starting at 8:00 AM (0800 hours) and ending at 5:00 PM (1700 hours). The team should allocate about 30 minutes for preparation for and attendance at the Daily Scrum (DS) meeting by each team member.  These DS meetings should be held every day of the sprint at the same time and place.   In Figure 3, these DS meetings (preparation and actual meeting) are shown as starting at 9:00 AM and ending around 9:30 AM every day.  If you are interested in understanding and implementing highly effective and efficient Daily Scrums, please take a look at my Making-daily-scrum-meetings-really-effective-efficient blog on the subject.  The other ceremonies (Sprint Planning, Sprint Review, and Sprint Retrospective) are also illustrated in Figure 3. If you are interested in understanding and implementing really effective Sprint Retrospectives, please take a look at my Making-sprint-retrospectives-really-effective blog on the subject.

As shown in Figure 3, I recommend that the analysis of backlog items for the next sprint should be scheduled on a regular cadence, where a two-hour meeting is held on every Wednesday (3 PM to 5 PM), as an example.  In these two meetings in a two-week sprint (a total of 4 hours or approximately 5% of the time in the two-week timebox), the product owner and the entire team converse about each story and also confirm each story with its acceptance criteria.  If the product owner cannot answer some questions in the meetings, he/she has adequate time left in the two-week sprint timebox to get those questions answered in consultation with users, customers and stakeholders, without blocking any team member.

I also recommend that the product owner groom the product backlog and prepare a draft of backlog items for the next sprint on a regular cadence, shown on every Tuesday (3 PM to 5 PM) in Figure 3.  The cadence or pattern for sprint planning, sprint review, sprint retrospective, analysis for the next sprint, and product backlog grooming (shown in Figure 3) is for illustrative purposes only.  Your team should discuss and experiment with various sprint cadence options to find the cadence that best works for you.  For example, a team may hold the Sprint Review and Retrospective for the previous sprint on Week-1 Monday morning (9 AM to Noon), followed by the Sprint Planning Workshop for the current sprint from 1 PM to 5 PM.  Another team may start the timebox on a Wednesday (instead of Monday), and two weeks later end it on a Tuesday (instead of Friday), i.e., stagger the start and end of the timebox shown in Figure 3 by two work days to avoid Sprint Planning, Sprint Review and Sprint Retrospective falling adjacent to a weekend day (and tempting some team members to miss them due to their long weekend vacation!).

A regular cadence for all the activities mentioned above minimizes coordination, scheduling and transaction costs; increases predictability through a disciplined schedule published ahead of time for several release cycles (6 to 12 months); and helps an agile team become a well-oiled machine.  A regular cadence or pattern also reduces unplanned, unexpected, interrupt-driven work, which is very damaging due to sudden, unplanned context switches and the resulting loss of productivity.

For all these activities shown on a regular cadence in Figure 3, the ScrumMaster working with the team members must set aside adequate capacity: 4 hours for sprint planning, 4.5 hours for 9 Daily Scrum meetings (preparation and attendance), 4 hours for analysis of the next sprint, and 3 hours for Sprint Review, Sprint Retrospective and Celebration – a total of 15.5 hours of capacity is needed from each member for these activities, and that capacity is not available for the design-development-testing-defect fixing work of the current sprint.  If you are interested in calculating the agile capacity of a team after allocating capacity for all these activities, please see my Agile-capacity-calculation blog on the subject.
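The capacity figures above can be checked with a quick calculation. The 80-hour timebox is my assumption of ten 8-hour workdays; the ceremony hours come from the text:

```python
HOURS_PER_TIMEBOX = 10 * 8  # two-week sprint, 8-hour days (assumed)

ceremony_hours = {
    "sprint_planning": 4.0,
    "daily_scrums": 9 * 0.5,          # 9 meetings, ~30 min prep + attendance
    "next_sprint_analysis": 4.0,      # two 2-hour Wednesday meetings
    "review_retro_celebration": 3.0,
}
overhead = sum(ceremony_hours.values())
remaining = HOURS_PER_TIMEBOX - overhead

print(overhead)   # 15.5 hours of ceremonies per member
print(remaining)  # 64.5 hours left for design-development-testing-defect fixing
print(100 * ceremony_hours["next_sprint_analysis"] / HOURS_PER_TIMEBOX)  # 5.0
```

The last line confirms the earlier claim that next-sprint analysis consumes roughly 5% of a two-week timebox.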

Solution to the challenge of managing the risk of doing wasteful work (2CP): An important reason why teams fail to deliver stories or backlog items in a sprint is that the stories were not understood properly when they were committed to the sprint plan during Sprint Planning.  As Dean Leffingwell explains in his Agile Software Requirements book (page 211), from a timing perspective there are three opportunities to do this conversation and confirmation for each story.

  • Prior to the sprint: This is what I have advocated in this blog by stipulating to engage in this conversation and confirmation only one timebox ahead.  Doing it more than one timebox ahead will increase the risk that some of those stories may not get into any sprint commitment as they get trumped by higher priority stories, or they may be removed from the product backlog altogether due to changed business conditions.
  • During the Sprint Planning Workshop:  This workshop is limited to approximately 4 hours for a two-week sprint.  In this short timebox, the team has to not only converse about and confirm the stories, but also do story effort estimation and all other tasks related to sprint planning.  The team may find it difficult to get the time needed to complete the conversation and confirmation for each story, or the product owner simply may not have answers to some of the questions, and most likely there is no time in the workshop timebox (only 4 hours) to get the answers by contacting users, customers and stakeholders.
  • During the actual sprint: If the team feels sufficiently confident that conversation and confirmation about a story can wait until the actual sprint due to uncertainty about the story’s business needs, then the conversation and confirmation for that story may be completed along with its implementation in the actual sprint if the story is of sufficiently lower priority.   However, keep in mind that the story was committed to the sprint during its sprint planning without proper conversation and confirmation (and hence with either no estimate or a very rough estimate).  This situation carries some risk and is not ideal.

I recommend that you complete the analysis of as many high priority stories as possible one timebox ahead of the sprint in which they will be implemented, try to complete as much analysis as possible for some of the lower priority stories during the Sprint Planning Workshop, and leave a very small number of analysis questions to be resolved during the actual sprint if you have to.

In my coaching engagements, I have seen agile teams transition from the sequential sprint execution model to the pipelined sprint execution model after 2 to 4 sequential sprints under their belt, and then, with experience (inspect and adapt), get better at the pipelined model.  This is kaizen (continuous improvement) at work, resulting in higher throughput, improved quality, and increased team morale and work satisfaction.

Have you tried the sprint pipelined model?  What is your experience?
Are you interested in starting your own sprint pipeline?

I would love to hear from you either here or by e-mail (Satish.Thatte@VersionOne.com) or on twitter (@smthatte).

 

Categories: Companies
