Feed aggregator

Scaling Agile Approaches

Scrum Expert - Tue, 04/19/2016 - 14:21
If the basic principles of the Agile Manifesto and the Scrum framework are simple, managing global Agile software development projects in the corporate world is sometimes more complex and complicated. This article lists some of the approaches that have been created to answer the specific questions raised by scaling Agile practices in large organizations. Scaling Agile can be a challenging journey. Craig Larman and Bas Vodde, the authors of the LeSS approach, wrote in their first book: “After working for some years in the domain of large, multisite, and offshore development, we have distilled our experience and advice down to the following: Don’t do it.” In fact, some people argue that you should not try to scale up Agile, but rather learn how to scale down your projects. This is however not the way many large multinational organizations work when they want to quickly develop large software products with multiple teams that may be distributed across distant locations. As the vision of the product and the number of people involved in the project grow, these projects need specific practices to manage them. From the simple initial idea of the Scrum of Scrums to the extension of user stories into features and epics, and the integration of sprints into a release schedule, the Agile project management world has added new concepts to the initial Scrum and Kanban frameworks. This has led to the creation of new frameworks and approaches, often trademarked and registered by consulting companies selling services and certifications, that could be [...]
Categories: Communities

Agile Portugal, Porto, Portugal, 3-4 June 2016

Scrum Expert - Tue, 04/19/2016 - 10:30
Agile Portugal 2016 is a two-day international conference that gathers practitioners from the Agile and Scrum community together with invited leading international experts. In the agenda of the Agile Portugal conference you can find topics like “Coding Dojo Challenge – SOLID design principles”, “Continuous Innovation & Change with PopcornFlow”, “Selling The ‘Fluffy’ Side Of Agile”, “How to improve software project estimates – The #NoEstimates view”, “Coaching Teams Through Change”, “Managing to Innovate – Agile Product Management” and “Learning Canvas – A shot of emergent learning for creative workers”. Web site: http://2016.agilept.org/ Location for the Agile Portugal conference: Porto, Portugal
Categories: Communities

Agile Delivery, London, UK, June 2 2016

Scrum Expert - Tue, 04/19/2016 - 10:17
Agile Delivery is a one-day event taking place in London. It targets tech and business people who focus on software development and software testing automation, Behaviour-Driven Development (BDD) and DevOps for the retail, finance, government and digital sectors. The conference provides both presentations and workshops. In the agenda of Agile Delivery you can find topics like “Culture Before Tooling or Does Tooling Foster Culture?”, “Product versus Craft”, “Collaboration, Hands On BDD for Product Owners, Devs and Testers”, “Zero to Tested with Docker and Ruby” and “Agile Transformation for CIOs”. Web site: http://agile.delivery/ Location for the Agile Delivery conference: Old Street, London, UK
Categories: Communities

R: substr – Getting a vector of positions

Mark Needham - Mon, 04/18/2016 - 21:49

I recently found myself writing an R script to extract parts of a string based on a beginning and end index, which is reasonably easy using the substr function:

> substr("mark loves graphs", 0, 4)
[1] "mark"

But what if we have a vector of start and end positions?

> substr("mark loves graphs", c(0, 6), c(4, 10))
[1] "mark"

Hmmm that didn’t work as I expected! It turns out we actually need to use the substring function instead, which wasn’t initially obvious to me on reading the documentation:

> substring("mark loves graphs", c(0, 6, 12), c(4, 10, 17))
[1] "mark"   "loves"  "graphs"

Easy when you know how!

Categories: Blogs

Agile Contracts and SAFe

Agile Product Owner - Mon, 04/18/2016 - 20:55

As you know, not everything that’s valuable in SAFe can make it to an icon on the Big Picture. One such example is the new Guidance article on Agile Contracts, which we developed as part of 4.0. In the hubbub around the 4.0 release, we didn’t take the time to bring any attention to this new content, so we are doing so now.

Working with customers on a more collaborative basis is a key element of achieving a leaner and more agile enterprise. Such an approach moves the Customer-Supplier relationship (see also the new 4.0 Customer and Supplier articles) to a win-win paradigm, one that improves the overall economic outcomes for both parties. True “partnering” in Lean means exactly that: a supplier engagement model built on long-term customer commitments to key suppliers and on earned trust.

But even in the presence of trust, how does one go about committing substantial investment to an Agile program, where it’s impossible to know exactly, in advance, what the buyer is getting? That’s the interesting topic that this new article addresses. The article highlights a “SAFe Managed Investment Contract” approach, which uses SAFe practices, nomenclature, and PIs as objective milestones for contract governance. Here’s a graphic teaser from the article:

SAFe Managed Investment Contract execution phase

We are placing this blog post in the “Updates” category so it will appear in the Updates field above the Big Picture. That simply serves as a reminder of new content under development, and in this case as a prompt to more readily get your comments on this important topic.

Stay SAFe,

— Dean, Drew and the SAFe Team

Categories: Blogs

How Customer PMs Helped Us Design Tracker Analytics v1

Pivotal Tracker Blog - Mon, 04/18/2016 - 18:32

Of all of our customer requests, improved reporting and analytics is one of the most popular, consistently ranking near the top of the most requested features on our marketing and exit surveys.

Until recently, we hadn’t taken the time to focus on improving our analytics. The biggest reason for this is that we didn’t really use any project health metrics ourselves. Our team was small, we didn’t ever really “report to anyone,” and we didn’t really have hard deadlines. This isn’t to say that we couldn’t benefit from project health metrics; we just hadn’t experienced a strong need for them, so we had difficulty addressing a need we couldn’t identify.

The obvious answer was to speak to customers and Pivotal Labs consultants (aka, “Pivots”). But whom should we speak to, and how would we find them?

We spoke to three groups of PMs

 
1. People who were using our (really) old reporting features

Our first step was to go to customers and Pivots already using our reports. The goal here was not only to discover how they were using reports now and what was valuable to them, but also to learn what other reports they create (with or without Tracker data). While a good starting point, speaking only to current customers can be risky: the concept of a “product death cycle” illustrates that talking just to your current customers does not reveal the needs of those who don’t use the product now.

2. Customers who told us they wanted better reporting

The second group we spoke to included people who mentioned reporting, dashboards, and charts in feature requests and marketing surveys. These represent an underserved group who were not finding value from our current reports. In many cases, they were using other tools (or creating their own tools) to compensate.

3. PMs managing large, ongoing projects

The third group was customers and Pivots who were managing larger teams that didn’t necessarily ask for analytics, but had a depth of experience on various engagements. For this group, we wanted to know which metrics were most important to them for tracking team progress in general. The reason we looked at larger customers is that we found a correlation between larger teams/multiple projects and a greater need for holistic views of projects, as their size makes them difficult to manage directly.

What we learned: experienced PMs consider predictable teams healthy teams

Healthy vs. unhealthy projects

We’ve been hearing for some time that experienced PMs focus less on the speed of the team and more on predictability. This is because predictability is about confidence: it gives someone a better idea of when something might be done, with fewer surprises along the way. When we spoke to experienced PMs—particularly Pivots—a number of recurring factors for measuring predictability kept emerging:

1. Consistent velocity (low volatility)

Velocity and points accepted are often the first (and sometimes only) metrics that PMs pay attention to. But instead of just looking at the velocity numbers in isolation, experienced PMs track trends in velocity over time for signs of peaks and valleys, known as volatility. We have a metric we use for volatility in Tracker, but this number alone might not tell you as much as visualizing these trends on a graph. Ken Mayer wrote about the importance of paying attention to volatility on our blog a few years back.
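
Tracker’s own volatility metric isn’t defined in this post, but as a rough illustration (a minimal sketch, with hypothetical velocities), the coefficient of variation, i.e. the standard deviation of velocity as a percentage of its mean, is one common way to put a single number on volatility:

> volatile <- c(10, 20, 30)   # hypothetical points accepted per iteration
> sd(volatile) / mean(volatile) * 100
[1] 50
> steady <- c(19, 20, 21)
> sd(steady) / mean(steady) * 100
[1] 5

The lower the number, the more consistent the velocity.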

2. A smooth process pipeline and few blockers

This includes any stories that may be blocked, or which are not moving through the delivery process as expected. A common problem is a situation in which the team has delivered a number of stories but those stories have not been accepted/rejected, leading to a situation colloquially known as “Christmas time,” due to the red/green button coloring on the dashboard. Another common issue is having too many stories started without being delivered (aka, “too many balls in the air”). Experienced PMs look for these patterns frequently to get a sense of ongoing project health.

3. Predictable time between story start and acceptance

For planning purposes, Agile discourages pegging time to story work (or points). This is because it skews team estimation of work away from complexity and toward time. But knowing how long it typically takes to get work done is an asset to a PM trying to identify process bottlenecks, or planning several iterations out.
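
As a minimal sketch with hypothetical numbers, tracking the time from story start to acceptance can be as simple as summarizing a vector of cycle times; comparing the median and the mean hints at whether a few slow stories are skewing the picture:

> cycle_days <- c(2, 3, 2, 8, 3, 4)   # hypothetical days from start to acceptance
> median(cycle_days)
[1] 3
> mean(cycle_days)
[1] 3.666667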

Closely tied to completion time is rejection rate. PMs reported that they would look for trends in rejection; too many rejections in a particular iteration were a warning sign. Rejections can mean a number of things that point to a process bottleneck: loosely defined acceptance criteria, steps being missed, or misunderstood story requirements.

PMs were already creating reporting tools with Tracker

Note: All of these report examples were recreated from customer and Pivotal report formats. The project names and content have been changed to protect privacy.

In another post, I wrote about the importance of using artifact research to get to customer pains quickly, tease out needs vs. wants, and demonstrate value to stakeholders. I won’t go into depth about it in this post, but most of our insights about customer reporting needs came from looking at what customers were doing already.

What did we get done? What is still in flight?

customer progress reports

One report we encountered early on was a basic progress report, something that seemed ubiquitous across customers and Pivots. The format of the report was straightforward, showing what the team accomplished in some period of time (usually an iteration) and what was still in progress. These reports sometimes also included some notion of what was coming down the pipeline in the next iteration. In all cases, the progress reports used stories as the basic unit of accomplishment.

Below are two examples of this report: one from a Pivotal Labs PM, another from a customer. Both are creating roundups of stories accepted, with the darker version (a Markdown file) demonstrating what was also in-flight and unstarted.

What is the status of the initiatives I care about? When will they be done?

customer epic reports

Most of our customers (and Pivots) use epics to represent major feature areas. This makes sense, since an epic is essentially a large feature. Many teams will also version their epics to show their place in a bigger roadmap (e.g., “Login screen v1,” “Shopping cart v2,” etc.).

To this end, epic progress reports were among the most common reports we saw. They took many forms, but the most common was a report each iteration listing the recently completed, in-progress, and upcoming epics. The degree of detail varied between reports, but the constant was knowing which epics were completed, which were in progress (and what was left in them), and what was coming down the pipeline.

Did we accomplish what we expected?

Estimated vs. actual
Customers would frequently tell us that they wanted to see what they planned to do vs. what they actually accomplished. This closely follows the theme of predictability, as it gives PMs a sense of what may happen in the future based on what happened in the past. More importantly, it allows PMs to identify problems with their process.

The first version of Tracker Analytics focused on helping teams become more predictable

project report annotations

There was a lot we wanted to do with Tracker Analytics, but for the first version, we kept things simple and focused on the most basic form of the above indicators. We designed Analytics to help teams clear process bottlenecks, stay predictable, and communicate status. We wanted to help PMs answer the questions:

  • What did we work on? How much did we accomplish?
  • Are we predictable? What could we improve on?
  • Are there any bottlenecks in our process?

Lisa Doan wrote a great post on the basic usage of Analytics. In the article, she talks about using Analytics to make teams more predictable, identify bottlenecks, and report on feature work.

How are the new Analytics working for your team? Let us know by using the Send us feedback widget in the top left of Analytics, or email tracker@pivotal.io.

The post How Customer PMs Helped Us Design Tracker Analytics v1 appeared first on Pivotal Tracker.

Categories: Companies

You Can Learn to Experiment

Xebia Blog - Mon, 04/18/2016 - 18:04
Validated learning: learning by executing an initial idea and then measuring the results. This way of experimenting is the primary philosophy behind Lean Startup and much of the Agile mindset as it is applied today. In agile organizations, you have to experiment to be able to keep up with a changing market
Categories: Companies

Call for Papers: Agile / Lean at HICSS (due 15 June 2016)

Leading Agile - Mike Cottmeyer - Mon, 04/18/2016 - 17:19

Are you exploring agile/lean management practices? Submit a draft agile/lean research paper or experience report by June 15, 2016 to the Agile/Lean mini-track at the Hawaii International Conference on System Sciences (HICSS)!

The HICSS conference, sponsored by IEEE, brings together a broad cross-section of researchers in system sciences—including software development, social media, energy transmission, marketing systems, knowledge management and information systems. Agile and lean management practices apply to all of these fields. Influential papers on Scrum patterns, agile metrics, lean forecasting, qualitative grounded inquiry, distributed development and large-company experience reports have appeared in past years. HICSS 50 will be held January 4-7, 2017 at the Hilton Waikoloa Village, Big Island, Hawaii.

In conjunction with, and in celebration of, the 50th HICSS conference, submissions from this mini-track may be selected for fast-track consideration in the Journal of Information Technology Theory and Application (JITTA) and the AIS Transactions on Human-Computer Interaction.

If you are researching or innovating in applying agile and lean principles, we welcome your submission. The full call for papers is here: Agile/Lean HICSS-50 Call for Papers.

Help us extend the agile and lean frontier, by presenting your work at HICSS.

The post Call for Papers: Agile / Lean at HICSS (due 15 June 2016) appeared first on LeadingAgile.

Categories: Blogs

Full Program Announced for Agile on the Beach 2016

Scrum Expert - Mon, 04/18/2016 - 17:17
Agile on the Beach 2016 has announced the full line-up for this edition. The sixth Agile on the Beach conference will be held on 1 & 2 September at the Performance Centre at Tremough Campus in Penryn, Cornwall. The conference will have 5 tracks: Agile Software Delivery, Teams & Practices, Product Design & Management, Business, and a bonus track of double workshops, all running on both days. The two keynote speakers of the Agile on the Beach 2016 conference will be Dr. Rebecca Parsons, ThoughtWorks’ Chief Technology Officer and a director of the Agile Alliance, and Dr. Linda Rising, internationally known for her work in patterns, retrospectives, influence strategies, agile development, and the change process. Super Early Birds can secure tickets at £295 (£100 off the full price) if booked before 31st May. To explore the full conference program and for help with travel and accommodation, please visit www.agileonthebeach.com
Categories: Communities

Introducing Large Scale Scrum (LeSS)

TV Agile - Mon, 04/18/2016 - 16:51
Large Scale Scrum (LeSS) defines itself as “Scrum applied to many teams working together on one product”. LeSS aims at finding a balance between defined elements and empirical process control. LeSS tries to figure out how to apply the principles and elements of Scrum in a large-scale context. Video producer: http://www.adventureswithagile.com
Categories: Blogs

FAQ: What Are My Options For Board User Access?

My name is Ryan MacGillis and I’m a Customer Support Specialist here at LeanKit. A lot of...

The post FAQ: What Are My Options For Board User Access? appeared first on Blog | LeanKit.

Categories: Companies

The High Barrier To Entry For ES2015 (ES6)

Derick Bailey - new ThoughtStream - Mon, 04/18/2016 - 13:30

It’s no secret that I’m a fan of various ES6 features. I use a lot of the new syntax options and methods on various objects whenever I can – in browser-based apps as well as node.js apps.

But I recently stumbled upon a situation that had me wondering if the barrier to entry is really worth the cost right now.

A Brochure On Mountain Climbing

A friend of mine released a validation library. I’ve been needing one in my current client project, so I thought I would take a look.

The syntax looked nice. The library was small, but flexible. It had some built-in validators for common things, and it looked like it would work with just about any JavaScript object. 

Everything looked good and I was set to try it out.

Facing The Actual Mountain

I sat down to install it in a sample project, and learn how it works.

This is what I do whenever I get ready to try a new library of any kind. It frees me from the confines of a large and potentially complex project, and lets me focus on the library in question.

The first thing I noticed in the github readme was “npm install” – ok, great! No problem. I’ll try this out in node and see how it goes.

But the moment after I installed the library, the first line of the usage example stopped me: a named import.

Sure, I know this code – the named import syntax for ES2015. I use this syntax in my browser apps a lot.

… Wait

I thought I just npm installed this.

The Mountain Is Larger Than The Picture

Ok. What do I do, now?

I’ve got a library that was installed via npm just now, but I’m supposed to be using the new ES2015 import syntax with destructuring assignment – two things that node.js doesn’t support yet. 

Oh, wait… is this library meant to be used within a browser?

Ok, sure. I can do that. 

Out Of Breath Before I Started The Climb

I just need to install babel, and browserify…

But I don’t like the command line tools. Having to re-run them on file changes is obnoxious.

Instead, I’ll install grunt and the grunt plugins to automate this.

Wait, if I’m doing this for in-browser code, then I need a basic web server, too.

Sure – just grab express and then configure the output of babel and browserify to go into the public folder, right?

Packing Up And Heading Home

I thought about everything that I had to do before I could sit down and try out this little library.

Then I deleted the sample project folder and went back to hand-writing my own validation in the client project.

Caveat Emptor, ES2015

I honestly don’t know if the library my friend wrote is any good or not.

I never actually had a chance to use it.

The mountain that I had to scale to try it out was far more than I had signed up for.

And I get it – I do. My friend builds react/redux/flux/whatever-the-latest-crazy-name-is apps where he already has all of the precompiler and build tools he needs. I have all of these tools, too, when I’m looking at my client projects. 

But the barrier to entry for using a lot of ES2015 features is still incredibly high, and yet it’s something that I continuously see being glossed over in libraries. 

Explicit Dependencies; Easier Getting Started

In this particular case, there is zero mention of anything ES2015 build related. No mention of browserify or babel or react or any other tools that would be used to turn this beautiful new syntax into something that a JavaScript runtime could actually use. 

I’m fine with using new and better syntax when it makes sense.

But when I want to spend 10 minutes trying out a new library and it will take me an hour to configure a working build and runtime environment, then I might as well be using <insert your most hated, “enterprise” level, bloated language / framework here>.

There has to be an easier way to deal with the large list of build and runtime dependencies while we wait for browsers and node.js to catch up to the syntax we want.

And installing 2, 3 or even 1 tool that requires a ton of configuration for builds isn’t it.

Categories: Blogs

Crunch! The evil that won't die.

Agile Game Development - Sat, 04/16/2016 - 19:24
Crunch is still an issue in the game industry. Game developers still face excessive crunch (death marches), which leads to burnout and an exodus from the industry. I came close to leaving the industry in 2001, after the months of crunch leading up to our PS2 launch titles Midnight Club and Smuggler's Run.

I was shocked to see this article written by an industry veteran who suffered from "extreme exhaustion" in his earlier days and is now promoting crunch with such gems as:

"Don’t be in the game industry if you can’t love all 80 hours/week of it — you’re taking a job from somebody who would really value it."

"If working on a game for 80 hours a week for months at a time seems “strenuous” to you … practice more until you’re better at it."

Even worse, he's shared this philosophy in this horrible presentation (don't be surprised if this link is dead...I'll find another).  Who did he present this to?  How common is this attitude?

I'd write more in response, but Rami Ismail said it best.

"Sustainable Pace"

Teams should find a sustainable pace that they can work over the long term. This means working close to 40 hours per week most of the time with periods of crunch that don't last too long.

How long should crunch last? There have been studies that show the real effectiveness of crunch. For us, the proof came the last time management enforced company-wide overtime, in 2005. We "asked" the teams to work 6 days a week at 10 hours a day to finish Darkwatch. The evidence showed up in the velocity observed in the burndown charts.

The following chart shows the average number of estimated hours the teams burned down per week*:

[Chart: average estimated hours burned down per week, weeks 1-5]

The first week was a normal ~40-hour week. Weeks 2-5 were the 60-hour weeks. The first thing you notice is that the overtime in week 2 caused velocity to greatly increase. More "stuff" was getting done! However, velocity then decreased until, by week 5, it was even lower than in a normal week!

How is this possible? It's simple. People get tired. They make more mistakes. They lose the incentive to create value.

Now, as a programmer, I love being in the "zone". I've worked 80-hour weeks on end on a game I was very passionate about and stayed in that highly productive zone. It never lasted more than several weeks, and when I was done, I rested. The mistake that managers made was treating the effect as the cause. They thought that forcing me to work 80 hours a week would put me in the zone. That's like forcing someone to smile to make them happy. It didn't work.

This is what I love about agile...it creates simple empirical evidence about what works and what doesn't. This is why there is no "rule" about overtime. There doesn't need to be a rule. If your teams are truly using iterations to find the best way to work, they'll quickly discover that after several weeks of overtime, the value is lost and that pace should not continue. It becomes common sense.

For more extensive information on this topic, check out Dan Cook's Presentation.

*The terms "velocity" and "estimation" have bad reputations in agile circles these days, but a lot of it comes from how they are used.


Categories: Blogs

Links for 2016-04-15 [del.icio.us]

Zachariah Young - Sat, 04/16/2016 - 09:00
Categories: Blogs

How to go faster than you can?

Manage Well - Tathagat Varma - Sat, 04/16/2016 - 07:23
Speed is a key skill in today's fast-moving and forever-changing world. However, most companies are not designed for speed; instead, they are designed for efficiency, as they typically need to cover a long distance.
Categories: Blogs

They always laugh at you…

Manage Well - Tathagat Varma - Sat, 04/16/2016 - 07:08
They always laugh at you...
Categories: Blogs

Understanding Cost of Delay and its Use in Kanban

Improving projects with xProcess - Fri, 04/15/2016 - 14:41
Cost of Delay (CoD) is a vital concept to understand in product development. It should be the guide to the ordering of work items, even if - as is often the case - estimating what it will be is difficult. Cost of Delay is important because it focuses on the business value of work items and how that value changes over time. An understanding of Cost of Delay is essential if you want to maximise the flow of value to your customers.

Don Reinertsen in his book Flow [1] has shown that, if you want to deliver the maximum business value with a given size of team, you should give the highest priority not to the most valuable work items in your "pool of ideas," not even to the most urgent items (those whose business value decays at the fastest rate), nor to your smallest items. Rather, you should prioritise the items with the highest urgency (or CoD per week) divided by the time taken to implement them. Reinertsen called this approach Weighted Shortest Job First, or WSJF (sometimes pronounced wizjiff!).
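
As a quick worked example (a minimal sketch; all the figures below are invented for illustration), WSJF is simply urgency divided by duration, and the ranking it produces can differ from ranking by urgency or by size alone:

> urgency  <- c(8, 3, 5)   # hypothetical CoD per week, in $K per week
> duration <- c(4, 1, 5)   # hypothetical weeks to implement
> urgency / duration       # WSJF score: schedule the highest first
[1] 2 3 1

Here the second item is scheduled first despite having the lowest urgency: it is so short that finishing it first delays the other items only briefly while cutting off its cost of delay quickly.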

In this series of articles, of which this is the first, we return to the topic of Cost of Delay (previously addressed 3 years ago in Selecting Backlog Items By Cost of Delay), and how CoD can be applied in Kanban. I'll explain the terminology used in the recently/to-be published book Essential Kanban Condensed [2] - including why this differs slightly from that used by some other authors - and how you can apply this knowledge in Kanban, potentially combining it with the use of Classes of Service.

Here are the links to the articles in this series:

Part 1: Understanding Cost of Delay and its Use in Kanban (this article)
Part 2: Cost of Delay Profiles
Part 3: How to Calculate WSJF
Part 4: Others may follow...
Let's start with some definitions by looking at a particular work item: a proposal for a new feature in a software product. Let's assume that we've already carried out some analysis of this feature and the competitive market in which the product operates. As a result we can forecast the cashflow - in and out - that will result from the implementation and exploitation of the feature.

Here's what the cashflow looks like...

To know what the Cost of Delay is for this feature, we need to estimate what the cashflow would be if we delayed starting this work and instead started in, say, 10 or 20 weeks' time. Here's a comparison of these three different cash flows: no delay, a 10-week delay and a 20-week delay.

The analysis seems to be forecasting that not only will the peak revenue be lower by entering the market later, but the time period for exploiting the feature profitably will also be shorter. To see the effect of this on the overall value of the feature, it is useful to plot the cumulative value, see below...

Now we can see what the value of this feature is if it is implemented without delay - about $420K. We can also see the loss of value - the Cost of Delay - for a 10-week and a 20-week delay.
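
In other words, the Cost of Delay for a given delay is simply the no-delay value minus the delayed value. A minimal sketch (the $420K no-delay figure comes from the article; the delayed totals are hypothetical):

> total_value <- c(none = 420, delay10 = 330, delay20 = 250)   # cumulative value, $K
> total_value["none"] - total_value[c("delay10", "delay20")]   # Cost of Delay, $K
delay10 delay20 
     90     170 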

The next step is to plot the Cost of Delay against the length of the delay. This graph is often referred to as the CoD profile. There are a number of archetypes that different authors have identified that can help us determine the likely profile in given scenarios. We'll look at these in more detail in later articles in this series. Here's the CoD profile for our feature:

This shows that our feature is losing value most rapidly right now! As value is lost, the rate at which it is being lost also diminishes. At a certain point the projected revenue from the feature becomes less than the development cost, so there is no value in implementing the feature and no further Cost of Delay.

We refer to the rate at which value is lost as Urgency (the first derivative of Cost of Delay), but other authors use Cost of Delay Per Week or (unfortunately in my view) sometimes just Cost of Delay. It is therefore important, when reviewing materials on CoD, to clarify whether the term is measured in currency (e.g. $) or in currency per length of delay (e.g. $ per week). Here is the plot of Urgency (CoD per week) for our example:

We can see from this graph that Urgency is diminishing in this case, as the market opportunity is also disappearing. Reinertsen and Preston Smith [3] noted that the sense of urgency in organisations often runs in the opposite direction to the market opportunity. They named this the Urgency Paradox: the "cruel tendency" for the sense of urgency in product development to be highest when the real urgency, as reflected by market opportunity, is lowest.
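
The derivative relationship between the two measures is easy to check numerically: given Cost of Delay sampled at weekly delays, Urgency is just the week-to-week difference. A minimal sketch with hypothetical figures shaped like the profile above:

> cod <- c(0, 40, 75, 105, 130, 150)   # hypothetical CoD ($K) after 0-5 weeks of delay
> diff(cod)                            # Urgency: CoD per week of delay
[1] 40 35 30 25 20

The Urgency values fall week by week, which is exactly the diminishing-urgency shape described above.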

We will see in future articles in this series how different kinds of work item have different CoD and Urgency profiles, and how we can use these, together with WSJF, to schedule work so as to maximise the delivery of business value.

Now read part 2: Cost of Delay Profiles

References

[1] Donald G. Reinertsen. The Principles of Product Development Flow. Celeritas Publishing. (2009)

[2] David J. Anderson and Andy Carmichael. Essential Kanban Condensed. Lean Kanban University Press. (2016)

[3] Preston G. Smith and Donald G. Reinertsen. Developing Products in Half the Time. John Wiley and Sons. (1998)

Categories: Companies

To Split or Not to Split, It’s Really Complicated.

Leading Agile - Mike Cottmeyer - Fri, 04/15/2016 - 14:20

At our most recent sprint review, a team I’m coaching had completed all but three stories. We were trying to decide what to do with those stories: move them into planning or split them. All three stories had outside dependencies. All three had completed tasks, and two of them were still blocked by the dependencies. While there could be a lot of second-guessing as to how we got ourselves into this fix, the burning question at the time was what to do about the points. We looked at three possible scenarios:

1. Moving the story to planning and all the points with it.
2. Splitting the story, leaving a 0-point story with finished tasks, and moving the story to planning with all its points.
3. Splitting the story and re-estimating each part of the split.

Now, I will say that this team is new to Agile and has recently started tracking Velocity Variance and Story Completion Ratio in an effort to understand what impacts their predictability. As with all new metrics, the team is very focused on them. We calculate Velocity Variance as the velocity of this sprint minus the three-sprint average, divided by the three-sprint average, times 100. Story Completion Ratio is calculated as stories completed from sprint planning divided by stories committed at sprint planning. You can see the angst this could cause with three stories affecting both measures.
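
As a minimal sketch with hypothetical sprint figures (all the numbers below are invented for illustration), the two metrics work out like this:

> velocity <- c(18, 22, 20, 26)       # hypothetical velocity for the last four sprints
> avg3 <- mean(velocity[1:3])         # three-sprint average: 20
> (velocity[4] - avg3) / avg3 * 100   # Velocity Variance for the current sprint, in percent
[1] 30
> 7 / 10                              # Story Completion Ratio: 7 completed of 10 committed
[1] 0.7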

Scenario 1 – Move the story, all its tasks and all its points to the next sprint. I have worked on teams that took this approach; we did not mind a few lumps in our velocity trend. For this team, though, there was an objection to inflating the sprint by three stories, one of which was a 5-pointer. They saw the extra points as a potential ding in the Variance calculation. They had also just planned a big sprint about two sprints back and missed the commitment, so they were shy of another really large number. Finally, they felt that carrying the stories with their completed tasks was misleading, as some of the tasks were rather large, so the math just looked funny: on day one of the sprint, a story already shows 15 hours of completed work.

Scenario 2 – Split off the completed tasks into a 0-point story and plan the remaining tasks in a story with all the points. This solved the problem of having too many completed hours in the sprint, but still made the sprint seem inflated. The other benefit of splitting was that, in our tracking tool, there was less scrolling and the team could focus on the tasks left to complete. The velocity would, on average, remain as accurate as possible, though the Variance could still take a hit in two sprints; the Story Completion Ratio would only be affected in the “failed” sprint.

Scenario 3 – Split off the completed tasks into a story, move the story with incomplete tasks to planning, and re-estimate each part of the split. Although not the classic way splitting is taught, it would alleviate the concern over velocity variation. This prompted a lot of discussion about whether the split-off part actually delivered value and could be a “real” story, which would indicate that the original story might not have been small enough. When the team ran a couple of examples, we found that most of the incomplete tasks were either data manipulation or testing. The team felt this indicated that the stories had been held up by the blocking issues, not that they were too large to complete, with data and testing, in the sprint.

The team chose to implement Scenario 2, with the modification that they would not plan blocked stories into a sprint. In the case of the two blocked stories, one was waiting on a data file and the other on wording from the legal team. This led to another best practice for our team: plan stories only if all of their external dependencies have been resolved before sprint planning.

The biggest win was the lively discussion, and the empowerment the team demonstrated in resolving their incomplete-work issue.

The post To Split or Not to Split, It’s Really Complicated. appeared first on LeadingAgile.

Categories: Blogs
