
Feed aggregator

Agile on the Beach, Falmouth, UK, September 3-4 2015

Scrum Expert - Wed, 08/19/2015 - 09:42
The Agile on the Beach conference is a two-day conference focused on Agile project management and software development that will take place in Falmouth, on the beaches of Cornwall. In the agenda of Agile on the Beach you can find topics like “Introduction to Continuous Delivery”, “Being Agile in Business”, “Can Guinness help you estimate?”, “Effective Customer Interviewing”, “The Death of Continuous Integration”, “Value Led Agile Delivery”, “The Geek’s Guide to Leading Teams”, “Making Mad Men more Agile”, “User Story Mapping for Fun & Profit”, “My Kanban Diary”. Web site: Location ...
Categories: Communities

News update 2015/08 – Kanban Training - Peter Hundermark - Wed, 08/19/2015 - 09:13

At a recent SUGSA open space in Johannesburg someone posed the question “where do I find Scrum Masters?”. I hear this question asked repeatedly in different forms by people trying to transition to lean-agile ways of working. I believe such questions are born out of the historic machine model we have of organisations, that people are fungible resources. At least in the world of knowledge workers this is both untrue and damaging.

Moreover, if everyone in the rapidly growing number of companies adopting Scrum chases after the same pool of experienced Scrum Masters, we are not addressing the need. We are just recycling the same group of experienced people and not growing capacity. Let’s examine the capacity requirement a little more. If you’re starting out with Scrum, for every 10 or so development team members (the people who actually do the work) you need one Scrum Master (aka team coach). I’ll save the sermon here on why you need a Scrum Master per one or two teams. You can read Michael James’ Scrum Master Checklist to learn more about that.

My simple response to the original question is “grow a pair!”…..more.


Once again Scrum Sense are teaming up with LEANability to bring foundation and advanced Kanban training to South Africa. Join Dr. Klaus Leopold on a 2-day practical learning journey and deepen your knowledge of Kanban. After completion you will receive certification through the Lean Kanban University.

Kanban is a technique for managing software development processes in a highly efficient way, adapted from the kanban system that underpins Toyota’s “just-in-time” (JIT) production. Kanban provides a way of prioritising workflow and is effective at uncovering workflow and process issues.

Book your place on one of the following courses:

Improving & Scaling Kanban (Advanced) : 05 – 06 Nov 2015 (JHB)
Applying Kanban (Foundation) : 09 – 10 Nov 2015 (CPT)

SPECIAL OFFER: Book for 3 and only pay for 2!

agile42 Newsletter

As hopefully most of you are aware, Scrum Sense is in the process of merging with agile42, a leading global agile coaching company.

We encourage you to sign-up to their newsletter to receive monthly company updates as well as interesting blog posts by their agile coaches.


Upcoming Courses

Certified Scrum Product Owner (JHB)
15-16 Sept 2015

Certified Scrum Master (CPT)
28-29 Sept 2015

Certified Scrum Master (JHB)
06-07 Oct 2015

Improving & Scaling Kanban – Advanced (JHB)
05-06 Nov 2015

Applying Kanban – Foundation (CPT)
09-10 Nov 2015

Course Schedule and Book Online

The post News update 2015/08 – Kanban Training appeared first on ScrumSense.

Categories: Blogs

Agile Innovation

Leading Answers - Mike Griffiths - Wed, 08/19/2015 - 05:15
Psst, this is your conscience. I am here to remind you about something you have thought about, but then hid away in the back of your mind. Lots of this agile stuff is hypocritical: it preaches evolution and change, but... Mike Griffiths
Categories: Blogs

7 Wastes That Impact Business Growth

Several of us among LeanKit’s founders and early employees first learned about Lean in the context of logistics and manufacturing. We wrote and implemented software that helped big companies buy and move and track physical goods. So we learned about the Lean concept of reducing waste in terms of inventory, transportation, motion, etc. It made […]

The post 7 Wastes That Impact Business Growth appeared first on Blog | LeanKit.

Categories: Companies

Release Burn Down Brought to Life

Xebia Blog - Wed, 08/19/2015 - 00:48

Inspired by Mike Cohn's blog post [Coh08] "Improving On Traditional Release Burndown Charts", I created a time-lapse version of it. It also nicely demonstrates that forecasts of "What will be finished?" (at a certain time) get better as the project progresses.

The improved traditional release burn down chart clearly shows (a) what is finished (light green), (b) what will very likely be finished (dark green), (c) what may or may not be finished (orange), and (d) what is almost guaranteed not to be finished (red).

This knowledge supports product owners in ordering the backlog based on the current knowledge.


The result is obtained by running a Monte Carlo simulation of a toy project, using a fixed product backlog of around 100 backlog items of various sizes. The amount of work realised also varies per project day, based on a simple uniform probability distribution.

Forecasting is done using a 'worst' velocity and a 'best' velocity. Both are determined using only the last 3 velocities, i.e. only the last 3 sprints are considered.

The 2 grey lines represent the height of the orange part of the backlog, i.e. the backlog items that might or might not be finished. This also indicates the uncertainty over time of what will actually be delivered by the team at the given time.
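The simulation is easy to reproduce. Here is a minimal sketch of the idea in JavaScript; the function names, the backlog size of 100 points, and the uniform velocity range of 5–15 points per sprint are my own illustrative assumptions, not the post's actual setup:

```javascript
// Toy release burn down simulation: a fixed backlog is burned down with a
// uniformly distributed velocity, and the forecast band is derived from
// the last 3 observed velocities only. All numbers are illustrative.

function simulateSprint(minVelocity, maxVelocity) {
  // Work done in one sprint, drawn from a simple uniform distribution.
  return minVelocity + Math.random() * (maxVelocity - minVelocity);
}

function forecastRange(velocities) {
  // Use only the last 3 observed velocities for the worst/best forecast.
  const recent = velocities.slice(-3);
  return { worst: Math.min(...recent), best: Math.max(...recent) };
}

function runProject(backlogSize, minVelocity, maxVelocity) {
  let remaining = backlogSize;
  const velocities = [];
  const forecasts = [];
  while (remaining > 0) {
    const done = Math.min(remaining, simulateSprint(minVelocity, maxVelocity));
    remaining -= done;
    velocities.push(done);
    if (velocities.length >= 3) {
      forecasts.push({ remaining, ...forecastRange(velocities) });
    }
  }
  return { sprints: velocities.length, forecasts };
}

const result = runProject(100, 5, 15);
console.log(result.sprints, result.forecasts[0]);
```

The spread between `worst` and `best` in each forecast corresponds to the band between the two grey lines in the charts: as more sprints complete, the rolling 3-sprint window tends to stabilise and the band narrows.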


The Making Of...

The movie above has been created using GNU plot [GNU Plot] for drawing the charts; ffmpeg [ffmpeg] has been used to create the time-lapse movie from the set of charts.


Over time the difference between the 2 grey lines gets smaller, a clear indication of improving predictability and reduced risk. The movie also shows that the final set of backlog items done lies well between the 2 grey lines from the start of the project.

This looks very similar to the 'Cone of Uncertainty'. Besides the fact that the shape of the grey lines only remotely resembles a cone, another difference is that the simulation above merely takes statistical variation into account. The fact that the team gains more knowledge and insight over time is not considered in the simulation, whereas it is an important factor in the 'Cone of Uncertainty'.


[Coh08] "Improving On Traditional Release Burndown Charts", Mike Cohn, June 2008,

[GNU Plot] Gnu plot version 5.0, "A portable command-line driven graphing utility",

[ffmpeg] "A complete, cross-platform solution to record, convert and stream audio and video",

Categories: Companies

Cultivating Collaboration via intense partnerships to solve problems.

Agile Complexification Inverter - Tue, 08/18/2015 - 18:02
I'm presenting this workshop at DFW Scrum.

DFW Scrum Meeting Aug. 18th 2015
It’s said that two heads are better than one when it comes to problem solving. We will use Tangram puzzles to simulate this experience and, via structured debriefs of these exercises, discover the powerful behaviors of awesome collaboration and the warning signs of poor collaboration. We will jump right into simulation exercises, so come prepared to have FUN and learn by doing.  No lecture - if you want a lecture… go here:
Here are some of the resources and exercise if you wish to reproduce this workshop or want to dig further into the science behind collaboration.
  • Presentation: Cultivating Collaboration (PDF)
  • References on collaboration (PDF)
  • Jim Tamm's TED Talk on defensiveness (PDF)
Categories: Blogs

LeadingAgile is 332 on the Inc. 500

Leading Agile - Mike Cottmeyer - Tue, 08/18/2015 - 15:04


Kind of a neat milestone for our company. LeadingAgile debuted at number 332 on the Inc. 500 list of fastest growing privately held companies in America. We were number 18 in Georgia. Number 28 in our industry. Our growth rate was 1410% over the past three years. Again… it’s been a hell of a ride.

Thanks for being there with us through the journey.

The post LeadingAgile is 332 on the Inc. 500 appeared first on LeadingAgile.

Categories: Blogs

Interruptions, Support, Business as usual – how do these work in Scrum?

Growing Agile - Tue, 08/18/2015 - 14:58
Most teams starting out with Scrum understand their backlog and what we call planned work. This is work that everyone grooms together and fully understands. Then there is all the other stuff! We tell teams new to Scrum to just track these things for the first sprint. They put up all the things they do outside of […]
Categories: Companies

I Think You Should Be More Explicit Here In Step Two

Leading Agile - Mike Cottmeyer - Tue, 08/18/2015 - 13:00

Have you ever seen the Sidney Harris cartoon with the two scientists at the blackboard? There is a bunch of math on one side, and a bunch of math on the other, with the words “then a miracle occurs” sandwiched in between.

The punchline is “I think you should be more explicit here in step two”.

I think agile folk need to be more explicit here in step two…

Go to Scrum training
Then a miracle occurs
Everyone goes agile

Read the Manifesto
Then a miracle occurs
Everyone goes agile

Stop being command and control
Then a miracle occurs
Everyone goes agile

What if… as a coach, a trainer, or a practitioner… you had to back up everything you said, every recommendation out of your mouth… with a plan for how you’d actually help make that recommendation a reality?

How would that change the conversation around adopting agile?

I think it would force us to meet people where they are. I think it would force us to be more pragmatic in our approach. To my point yesterday, I think it would create more empathy for managers sorting through all our dogma.

I think it would force us to think about how to place small bets. To create transformation strategies that are more iterative and incremental. I think it would force us to work through transition patterns and intermediate states.

Telling people to self-organize is a cop out.

I think we need to be more explicit here in step two.

You can check out Sidney Harris’ website if you want to see the original cartoon.

The post I Think You Should Be More Explicit Here In Step Two appeared first on LeadingAgile.

Categories: Blogs

Increase your agility bottom up

Ben Linders - Tue, 08/18/2015 - 11:45
Agile change programmes are usually carried out top down in organisations. Such programmes cost management and employees a great deal of time and energy, take a long time, and often do not deliver the expected benefits: the organisation does not really become agile. A bottom-up approach, driven by the employees, can help an organisation increase its agility faster and more sustainably. Continue reading →
Categories: Blogs

Grow a pair - Peter Hundermark - Tue, 08/18/2015 - 10:25
At a recent SUGSA open space in Johannesburg someone posed the question “where do I find Scrum Masters?”. I hear this question asked repeatedly in different forms by people trying to transition to lean-agile ways of working. I believe such questions are born out of the historic machine model we have of organisations, that people are fungible resources. At least in the world of knowledge workers this is both untrue and damaging.


Moreover, if everyone in the rapidly growing number of companies adopting Scrum chases after the same pool of experienced Scrum Masters, we are not addressing the need. We are just recycling the same group of experienced people and not growing capacity. Let’s examine the capacity requirement a little more. If you’re starting out with Scrum, for every 10 or so development team members (the people who actually do the work) you need one Scrum Master (aka team coach). I’ll save the sermon here on why you need a Scrum Master per one or two teams. You can read Michael James’ Scrum Master Checklist to learn more about that.


My simple response to the original question is “grow a pair!” I’m not being rude here. I mean that to get the Scrum Masters you want and need, the best way is to invest in growing your own people. In all but the smallest of organisations, you will need two or more of these people. And I have found that Scrum Masters thrive in groups, which is one of the reasons I helped to found SUGSA in 2008.


Another reason you should seriously consider developing your own Scrum Masters is this. A Scrum Master manages via influence, not authority. This requires accumulating a positive balance in your “political capital” bank account. A newly hired Scrum Master, even if experienced in Scrum, begins with zero political capital in your organisation. On the other hand, a savvy employee who has been with you a few years knows the lie of the land and knows who can help her get things done.


Growing your own Scrum Masters is both simple and hard, just like Scrum itself! It’s simple in that you just need to ask the question of your own people: “who thinks they would like to try being a Scrum Master?” Of course it will help if those you ask have had some exposure to Scrum. Preferably Scrum used well. And then you need to give them the opportunity to learn. And that can be harder. Becoming agile (as opposed to “doing Scrum”) is hard. It’s a journey. And an apprentice Scrum Master needs a “master” Scrum Master to mentor her on this journey. That’s where the term “journeyman” comes from after all!


Better still, your apprentice Scrum Master can be supported on her journey in multiple ways. These might include:
  • formal training to gain explicit knowledge
  • practice within Scrum teams to grow tacit knowledge
  • mentoring by a “master” Scrum Master
  • coaching to help them unlock their potential
  • peer support via mentors, conferences and user groups.
I’ve been experimenting with ways to help grow Scrum Masters (and agile coaches) since 2010. Most recently I’ve been talking with some of my clients about an approach to selecting potential agile team coaches and growing them through a formalised mentorship programme. This includes identifying people with potential, yet who have lacked opportunity thus far. If this interests you, please drop me a line and join the conversation.

The post Grow a pair appeared first on ScrumSense.

Categories: Blogs

Russian and Belarusian translations of the Prime Directive

Ben Linders - Tue, 08/18/2015 - 08:45
The Belarusian and Russian translations have been added to the blog post Retrospective Prime Directive in many languages. This post now contains 11 translations of this important statement that is used in Agile Retrospectives all over the world. Continue reading →
Categories: Blogs

Favorite Recruiter Email Subject

I just chuckled when I got the following email from a recruiter:

Are You Down With ATG???

Brilliant targeting, as ATG barely survived the dotcom implosion and limped along before finally being sold off to Oracle. Given that almost any ATG developer would have been working in the late 90s, a reference to Naughty by Nature’s OPP was a great hook. Sadly I have zero interest in returning to work on a failed app server product, but I love the effort.

Categories: Blogs

Name Calling and Ad Hominem Attacks

Leading Agile - Mike Cottmeyer - Mon, 08/17/2015 - 23:09

It’s been a crazy few weeks for me.

Three weeks ago I was in Nicaragua working on water systems for a non-profit my family and I support. Two weeks ago I was in Washington, DC for the Agile2015 conference. Last week I was digging out from the previous two weeks with clients and our ever expanding sales pipeline. This week I’m bouncing around the country talking to prospects while trying to sneak in a quick trip to Gainesville to drop my middle son off for his freshman year of college.

Kinda nuts, but I digress…

In between the week in Nicaragua and the week at Agile2015, I got to spend a day at the Agile Coaches Camp also in DC. One of the sessions I proposed and facilitated was on impediments to large scale agile transformation. Go figure, huh? Of course I have my own ideas around the kinds of things that get in the way of large scale agile transformations, but I wanted to start by talking to the group and getting an idea of what they thought was important.

I was kinda blown away.

It seemed like everyone thought the problems adopting agile in large organizations came down to managers not being open to doing agile. The thinking was that managers were too command and control. That managers were not agile enough. That companies didn’t have an agile culture, that no one was willing to inspect and adapt, or that no one was willing to respond to change. Everything was about how managers were getting in the way.

Here is a question for you…

What if managers were actually open to doing agile? What if managers didn’t want to be command and control? What if managers were agile enough, and were willing to build an agile culture, were willing to inspect and adapt, and desperately wanted to respond to change… but what if they didn’t know how? What if the environment around them had real barriers to adopting agile and what if all they wanted was guidance toward how to remove them?

What would you tell them? Go forth and self-organize?

What is your answer when an organization has contractual obligations committing them to 10 times more work than they can actually do?

What is your answer when formal business processes are hopelessly entangled?

What is your answer when legacy architectures are full of technical debt and there is insufficient automation…or a regular build… let alone continuous integration?

What do you do when there aren’t enough people to staff complete cross functional teams or governance and regulation get in the way?

Stop the ad hominem attacks

I think we need to stop labelling people, stop the ad hominem attacks on management… and if we are serious about helping companies adopt agile in a meaningful way… start figuring out strategies for helping managers solve the real issues affecting real companies with real business problems to solve. It’s the system these folks are living in that is driving the behavior you are seeing in large organizations. We can’t change attitudes unless we fix the systems.

Help me get there

Our industry has an absolute fixation on end-state. We continue to iterate on SAFe and LeSS and DaD and Scrum. We talk about Beyond Budgeting, The Future of Management, and Holacracy. The problem we have right now, with the companies reading these books, isn’t that the end-state isn’t understood… it’s that they can’t see how to take their existing organizations, with their existing models and existing constraints… and find a way to transition to the new model.

They need help understanding what the intermediate states look like.

It’s easy to tell a child who can’t swim to jump in a pool and swim.

It’s easy to tell an overweight teenager to be healthy and lose weight.

It’s easy to point to the goal and call names when someone can’t seem to achieve it.

The hard part is meeting people where they are and helping them craft a strategy for getting where they need to be…even where they want to be… but just can’t seem to find a way to get there. I’m becoming more and more convinced that the end state isn’t the problem… it’s the transition patterns. It’s the change management. It’s helping companies make progress, and still deliver, while they are changing.

Name calling and ad hominem attacks don’t help.

The post Name Calling and Ad Hominem Attacks appeared first on LeadingAgile.

Categories: Blogs

Iterables, Iterators and Generator functions in ES2015

Xebia Blog - Mon, 08/17/2015 - 21:45

ES2015 adds a lot of new features to javascript that make a number of powerful constructs, present in other languages for years, available in the browser (well as soon as support for those features is rolled out of course, but in the meantime we can use these features by using a transpiler such as Babeljs or Traceur).
Some of the more complicated additions are the iterator and iterable protocols and generator functions. In this post I'll explain what they are and what you can use them for.

The Iterable and Iterator protocols

These protocols are analogous to, for example, Java interfaces, and define the contract that an object must adhere to in order to be considered an iterable or an iterator. So instead of new language features they leverage existing constructs by agreeing upon a convention (as javascript does not have a concept like the interface found in other typed languages). Let's have a closer look at these protocols and see how they interact with each other.

The Iterable protocol

This protocol specifies that for an object to be considered iterable (and usable in, for example, a `for ... of` loop) it has to define a function under the special key `Symbol.iterator` that returns an object adhering to the iterator protocol. That is basically the only requirement. For example, say you have a data structure you want to iterate over; in ES2015 you would do that as follows:

class DataStructure {
  constructor(data) {
    this.data = data;
  }

  [Symbol.iterator]() {
    let current = 0;
    let data = this.data;
    return {
      next: function () {
        return {
          value: data[current++],
          done: current > data.length
        };
      }
    };
  }
}

let ds = new DataStructure(['hello', 'world']);

console.log([...ds]) // ["hello","world"]

The big advantage of using the iterable protocol over another construct like `for ... in` is that you have more clearly defined iteration semantics (for example: you do not need explicit hasOwnProperty checking when iterating over an array to filter out properties that are on the array object but not in the array). Another advantage is that when using generator functions you can benefit from lazy evaluation (more on generator functions later).

The iterator protocol

As mentioned before, the only requirement for the iterable protocol is for the object to define a function that returns an iterator. But what defines an iterator?
In order for an object to be considered an iterator it must provide a method named `next` that returns an object with 2 properties:
* `value`: The actual value of the iterable that is being iterated. This is only valid when done is `false`
* `done`: `false` when `value` is an actual value, `true` when the iterator did not produce a new value

Note that when you provide a `value` you can leave out the `done` property, and when the `done` property is `true` you can leave out the `value` property.

The object returned by the function bound to the DataStructure's `Symbol.iterator` property in the previous example does this by returning the entry from the array as the value property and returning `done: false` while there are still entries in the data array.
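To make the contract concrete, the following hand-rolled counter (a minimal example of my own, not from the original post) shows the `{value, done}` objects an iterator produces when you step it manually with `next()`:

```javascript
// A hand-rolled iterator over the range [0, limit), stepped manually.
function makeCounter(limit) {
  let current = 0;
  return {
    next() {
      return current < limit
        ? { value: current++, done: false }  // still producing values
        : { value: undefined, done: true };  // exhausted
    }
  };
}

const it = makeCounter(2);
console.log(it.next()); // { value: 0, done: false }
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: undefined, done: true }
```

Note that this object is only an iterator, not an iterable: to use it in a `for ... of` loop it would also need a `[Symbol.iterator]()` method returning itself.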

So by simply implementing both these protocols you can turn any `Class` (or `object` for that matter) into an object you can iterate over.
A number of built-ins in ES2015 already implement these protocols so you can experiment with the protocol right away. You can already iterate over Strings, Arrays, TypedArrays, Maps and Sets.
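For example, these built-ins can all be consumed with the same `for ... of` loop and spread syntax (a small sketch of my own):

```javascript
// The same iteration constructs work across all iterable built-ins.
const chars = [...'abc'];                        // Strings iterate per character
const unique = [...new Set([1, 1, 2, 3])];       // Sets iterate their members
const entries = [...new Map([['a', 1], ['b', 2]])]; // Maps yield [key, value] pairs

console.log(chars);   // ["a","b","c"]
console.log(unique);  // [1,2,3]
for (const [key, value] of entries) {
  console.log(key, value);
}
```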

Generator functions

As shown in the earlier example, implementing the iterable and iterator protocols manually can be quite a hassle and is error-prone. That is why a language feature was added in ES2015: generator functions. A generator combines both an iterable and an iterator in a single function definition. A generator function is declared by adding an asterisk (`*`) to the function name and using `yield` to return values. A big advantage of this approach is that your generator function returns an iterator that, when its `next()` method is invoked, runs up to the first `yield` statement it encounters and then suspends execution until `next()` is called again (after which it resumes and runs until the next `yield` statement). This allows us to write an iteration that is evaluated lazily instead of all at once.

The following example re-implements the iterable and iterator using a generator function producing the same result, but with a more concise syntax.

class DataStructure {
  constructor(data) {
    this.data = data;
  }

  *[Symbol.iterator]() {
    let data = this.data;
    for (let entry of data) {
      yield entry;
    }
  }
}

let ds = new DataStructure(['hello', 'world']);

console.log([...ds]) // ["hello","world"]
More complex usages of generators

As mentioned earlier, generator functions allow for lazy evaluation of (possibly) infinite iterations, allowing you to use constructs known from more functional languages, such as taking a limited subset from an infinite sequence:

function* generator() {
  let i = 0;
  while (true) {
    yield i++;
  }
}

function* take(number, gen) {
  let current = 0;
  for (let result of gen) {
    yield result;
    if (++current >= number) {
      return;
    }
  }
}
console.log([...take(10, generator())]) // [0,1,2,3,4,5,6,7,8,9]
console.log([...take(10, [1,2,3])]) // [1,2,3]

Delegating generators
Within a generator it is possible to delegate to a second generator making it possible to create recursive iteration structures. The following example demonstrates a simple generator delegating to a sub generator and returning to the main generator.

function* generator() {
  yield 1;
  yield* subGenerator();
  yield 4;
}

function* subGenerator() {
  yield 2;
  yield 3;
}

console.log([...generator()]) // [1,2,3,4]
Categories: Companies

30 Day Sprints for Personal Development: Change Yourself with Skill

J.D. Meier's Blog - Mon, 08/17/2015 - 18:44

"What lies behind us and what lies before us are small matters compared to what lies within us. And when we bring what is within us out into the world, miracles happen." -- Ralph Waldo Emerson

I've written about 30 Day Sprints before, but it's time to talk about them again:

30 Day Sprints help you change yourself with skill.

Once upon a time, I found that when I was learning a new skill, or changing a habit, or trying something new, I wasn't getting over that first hump, or making enough progress to stick with it.

At the same time, I would get distracted by shiny new objects.  Because I like to learn and try new things, I would start something else, and ditch whatever I was trying to work on, to pursue my new interest.  So I was hopping from thing to thing, without much to show for it, and without getting much better.

I decided to stick with something for 30 days to see if it would make a difference.  It was my personal 30 day challenge.  And it worked.   What I found was that sticking with something past two weeks got me past those initial hurdles.  Those dips that sit just in front of where breakthroughs happen.

All I did was spend a little effort each day for 30 days.  I would try to learn a new insight or try something small each day.  Each day, it wasn't much.  But over 30 days, it accumulated.  And over 30 days, the little effort added up to a big victory.

Why 30 Day Sprints Work So Well

Eventually, I realized why 30 Day Sprints work so well.  You effectively stack things in your favor.  By investing in something for a month, you can change how you approach things.  It's a very different mindset when you are looking at your overall gain over 30 days versus worrying about whether today or tomorrow gave you immediate return on your time.  By taking a longer term view, you give yourself more room to experiment and learn in the process.

  1. 30 Day Sprints let you chip away at the stone.  Rather than go big bang or whole hog up front, you can chip away at it.  This takes the pressure off of you.  You don't have to make a breakthrough right away.  You just try to make a little progress and focus on the learning.  When you don't feel like you made progress, you at least can learn something about your approach.
  2. 30 Day Sprints get you over the initial learning curve.  When you are taking in new ideas and learning new concepts, it helps to let things sink in.  If you're only trying something for a week or even two weeks, you'd be amazed at how many insights and breakthroughs are waiting just over that horizon.  Those troughs hold the keys to our triumphs.
  3. 30 Day Sprints help you stay focused.  For 30 days, you stick with it.  Sure you want to try new things, but for 30 days, you keep investing in this one thing that you decided was worth it.  Because you do a little every day, it actually gets easier to remember to do it. But the best part is, when something comes up that you want to learn or try, you can add it to your queue for your next 30 Day Sprint.
  4. 30 Day Sprints help you do things better, faster, easier, and deeper.  For 30 days, you can try different ways.  You can add a little twist.  You can find what works and what doesn't.  You can keep testing your abilities and learning your boundaries.  You push the limits of what you're capable of.  Over the course of 30 days, as you kick the tires on things, you'll find short-cuts and new ways to improve. Effectively, you unleash your learning abilities through practice and performance.
  5. 30 Day Sprints help you forge new habits.  Because you focus for a little bit each day, you actually create new habits.  A habit is much easier to put in place when you do it each day.  Eventually, you don't even have to think about it, because it becomes automatic.  Doing something every other day, or every third day, means you first have to remember when to do it.  We're creatures of habit.  Just repurpose how you already spend a little time each day, on your own behalf.

And that is just the tip of the iceberg.

The real power of 30 Day Sprints is that they help you take action.  They help you get rid of all the excuses and all the distractions so you can start to achieve what you’re fully capable of.

Ways to Make 30 Day Sprints Work Better

When I first started using 30 Day Sprints for personal development, the novelty of doing something more than a day or a week or even two weeks, was enough to get tremendous value.  But eventually, as I started to do more 30 Day Sprints, I wanted to get more out of them.

Here is what I learned:

  1. Start 30 Day Sprints at the beginning of each month.  Sure, you can start 30 Day Sprints whenever you want, but I have found it much easier if the 17th of the month is day 17 of my 30 Day Sprint.  Also, it's a way to get a fresh start each month.  It's like turning the page.  You get a clean slate.  But what about February?  Well, that's when I do a 28 Day Sprint (and one day more when Leap Year comes.)
  2. Same Time, Same Place.  I've found it much easier and more consistent, when I have a consistent time and place to work on my 30 Day Sprint.  Sure, sometimes my schedule won't allow it.  Sure, some things I'm learning require that I do it from different places.  But when I know, for example, that I will work out 6:30 - 7:00 A.M. each day in my living room, that makes things a whole lot easier.  Then I can focus on what I'm trying to learn or improve, and not spend a lot of time just hoping I can find the time each day.  The other benefit is that I start to find efficiencies because I have a stable time and place, already in place.  Now I can just optimize things.
  3. Focus on the learning.  When it's the final inning and the score is tied, and you have runners on base, and you're up at bat, focus is everything.  Don't focus on the score.  Don't focus on what's at stake.  Focus on the pitch.  And swing your best.  And, hit or miss, when it's all over, focus on what you learned.  Don't dwell on what went wrong.  Focus on how to improve.  Don't focus on what went right.  Focus on how to improve.  Don't get entangled by your mini-defeats, and don't get seduced by your mini-successes.  Focus on the little lessons that you sometimes have to dig deeper for.

Obviously, you have to find what works for you, but I've found these ideas to be especially helpful in getting more out of each 30 Day Sprint.  Especially the part about focusing on the learning.  I can't tell you how many times I got too focused on the results, and ended up missing the learning and the insights. 

If you slow down, you speed up, because you connect the dots at a deeper level, and you take the time to really understand nuances that make the difference.

Getting Started

Keep things simple when you start.  Just start.  Pick something, and make it your 30 Day Sprint. 

In fact, if you want to line your 30 Day Sprint up with the start of the month, then just start your 30 Day Sprint now and use it as a warm-up.  Try stuff.  Learn stuff.  Get surprised.  And then, at the start of next month, just start your 30 Day Sprint again.

If you really don't know how to get started, or want to follow a guided 30 Day Sprint, then try 30 Days of Getting Results.  It's where I share my best lessons learned for personal productivity, time management, and work-life balance.  It's a good baseline, because by mastering your productivity, time management, and work-life balance, you will make all of your future 30 Day Sprints more effective.

Boldly Go Where You Have Not Gone Before

But it's really up to you.  Pick something you've been either frustrated by, inspired by, or scared of, and dive in.

Whether you think of it as a 30 Day Challenge, a 30 Day Improvement Sprint, a Monthly Improvement Sprint, or just a 30 Day Sprint, the big idea is to do something small for 30 days.

If you want to go beyond the basics and learn everything you can about mastering personal productivity, then check out Agile Results, introduced in Getting Results the Agile Way.

Who knows what breakthroughs lie within?

May you surprise yourself profoundly.

Categories: Blogs

How to Use Continuous Planning

Johanna Rothman - Mon, 08/17/2015 - 17:17

If you’ve read Reasons for Continuous Planning, you might be wondering, “How can we do this?” Here are some ideas.

You have a couple of preconditions:

  • The teams get to done on features often. I like small stories that the team can finish in a day or so.
  • The teams continuously integrate their features.

Frequent features with continuous integration create an environment in which you know that you have the least amount of work in progress (WIP). Your program also has a steady stream of features flowing into the code base. That means you can make decisions more often about what the teams can work on next.

Now, let’s assume you have small stories. If you can’t imagine how to make a small story, here is an example I used last week that helped someone envision what a small story was:

Imagine you want a feature set called “secure login” for your product. You might have stories in this order:

  1. A person who is already registered can log in with their user id and password. For this, you only need to have a flat file and a not-too-bright parser—maybe even just a lookup in the flat file. You don’t need too many cases in the flat file. You might only have two or three. Yes, this is a minimal story that allows you to write automated tests to verify that it works even when you refactor.
  2. A person who is not yet registered can create a new id and password.
  3. After the person creates a new id and password, that person can log in. You might think of the database schema now. You might not want the entire schema yet. You might want to wait until you see all the negative stories/features. (I’m still thinking flat file here.)
  4. Now, you might add the “parse-all-possible-names” for login. You would refactor Story #2 to use a parser, not copy names and emails into a flat file. You know enough now about what the inputs to your database are, so you can implement the parser.
  5. You want to check for people whom you don’t want to allow to log in. These are three different small stories. You might need a spike to decide which stories you want to do when, or do some investigation.
    1. Are they from particular IP addresses (web) or physical locations?
    2. Do you need all users to follow a specific name format?
    3. Do you want to use a captcha (web) or some other robot-prevention device for login (three tries, etc.)?
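As a rough sketch of how small Story 1 really is, here is one way it might look. This is purely illustrative: the post never specifies a file format or any function names, so the "userId:password" line format and everything else here is an assumption.

```typescript
// Hypothetical sketch of Story 1: authenticate against a small flat file.
// Assumed file format (not from the post): one "userId:password" pair per line.
import * as fs from "fs";

// Parse flat-file content into an id -> password lookup table.
function parseUsers(content: string): Map<string, string> {
  const users = new Map<string, string>();
  for (const line of content.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed) continue;
    const sep = trimmed.indexOf(":");
    users.set(trimmed.slice(0, sep), trimmed.slice(sep + 1));
  }
  return users;
}

// Load the flat file from disk and parse it.
function loadUsers(path: string): Map<string, string> {
  return parseUsers(fs.readFileSync(path, "utf8"));
}

// Story 1: a registered person can log in with their user id and password.
function canLogIn(users: Map<string, string>, id: string, password: string): boolean {
  return users.has(id) && users.get(id) === password;
}
```

Twenty-odd lines is the whole story, and it is already enough to hang automated tests on, which is exactly the point: the later stories (real parser, database schema, lockouts) refactor this without breaking those tests.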

Maybe you have more stories here. I am at the limit of what I know for secure login. Those of you who implement secure login might think I am past my limit.

These five plus stories are a feature set for secure login. You might not need more than stories 1, 2, and 3 the first time you touch this feature set. That’s fine. You have the other stories waiting in the product backlog.

If you are a product owner, you look at the relative value of each feature against each other feature. Maybe you need this team to do the first three stories and then start some revenue stories. Maybe the Accounting team needs help on their backlog, and this feature team can help. Maybe the core-of-the-product team needs help. If you have some kind of login, that’s good enough for now. Maybe it’s not good enough for an external release. It’s good enough for an internal release.

Your ability to change what feature teams do every so often is part of the value of agile and lean product ownership—which helps a program get to done faster.

Your initial one-quarter backlog might look like this:


Start at the top and left.

You see the internal releases across the top. You see the feature sets just under the internal releases. This part is still a wish list.

Under the feature sets are the actual stories in the details. Note how the POs can change what each team does, to create a working skeleton.

The details are in the stories at the bottom.

This is my picture. You might want something different from this.

The idea is to create a Minimum Viable Product for each demo and to continue to improve the walking skeleton as the project teams continue to create the product.

Because you have release criteria for the product as a whole, you can ask as the teams demo, “What do we have to do to accomplish our release criteria?” That question allows and helps you replan for the next iteration (or set of stories in the kanban). Teams can see interdependencies because their stories are small. They can ask each other, “Hey, can you do the file transfer first, before you start to work on the Engine?”

The teams work with their product owners. The product owners (product owner team) work together to develop and replan the next iteration’s plan, which leads to replanning the quarter’s plan. You have continuous planning.

You don’t need a big meeting. The feature team small-world networks help the teams see what they need, in the small. The product owner team small-world network helps the product owners see what they need for the product over the short-term and the slightly longer term. The product manager can meet with the product owner team at least once a quarter to revise the big picture product roadmap.

You can do this if the teams have small stories, pay attention to technical excellence, and use continuous integration.

In a program, you want smallness to go big. Small stories lead to more frequent internal releases (every day is great, at least once a month). More frequent internal releases lead to everyone seeing progress, which helps people accomplish their work.

You don’t need a big planning meeting. You do need product owners who understand the product and work with the teams all the time.

The next post will be about whether you want resilience or prediction in your project/program. Later :-)

Categories: Blogs

Why you might struggle with Progressive Elaboration

Leading Agile - Mike Cottmeyer - Mon, 08/17/2015 - 14:02

Progressive Elaboration is the process of breaking Epics down into User Stories and defining details of those stories over time as the stories move through release planning and get closer and closer to being added to a Sprint. I’ve been exploring why some organizations struggle to put this into practice.

Context: I often work with largish organizations making commitments 2 to 4 months out. They use Scrum, stable teams, stable velocity and some amount of Epic/Feature/Story decomposition to make these commitments. Agile can help them be predictable. Whenever I begin working with these Product Owners and Program Managers, I explain the need to quickly identify all the stories for the release (so that we can create or verify a release plan) while also getting 2-3 sprints’ worth of backlog ready for the teams, because they are going to start sprinting soon. Sometimes this works like a charm. Other times teams have difficulty getting into the progressive elaboration approach. They can’t get beyond defining all the details for all the stories for an Epic in one pass.

When people think about defining requirements, they often have in mind planning a release and/or reducing misunderstanding. Which purpose is primary influences how fast or slow people go and to what level of detail:

Theory of Constraints Evaporating Cloud for Defining Requirements

So it seems that a compromise has to be made — to strike a balance between going fast and going slow, between writing more or writing less, between planning a release quickly or reducing misunderstanding for development.

There are some hidden assumptions underlying this thinking:

  • It is more efficient to completely analyze any feature in just one step.
  • We can’t plan a release without all the details.

On the contrary, we can plan a release with just a list of user stories. Planning a release with a list of features and user stories is a completely separate need from “have all the detail needed to develop and test”. The former is needed early on, and the latter is needed much later. This time dimension is often overlooked.

Further, it’s not more efficient to completely analyze any feature in just one step if some people needed to do the analysis aren’t available when you want them involved. (For example, they may be fully allocated to wrapping up a prior release.) Even if it could be proven most efficient to look at a feature just once, with lean thinking we are less concerned about the efficiency of individual workers and are far more interested in the smooth flow of work. Local optimization sub-optimizes the whole. We get more value by keeping the valuable work flowing than we would by controlling costs. In this case, the work is the work of planning a release and decomposing features into stories and (eventually) coming up with the details for development just in time. If we can quickly come up with a list of user stories for the release plan, we keep the (program) management work flowing. The work of management in this case is ensuring that we can complete the release on time; and doing stakeholder management, risk management, and scope management if not. If we can’t come up with a list of all the stories for the release, then the work of management is blocked.

Now we can decouple these 2 things: release planning versus getting the details for development. We can redefine the problem and draw a new picture:

Objective <— Purpose <— Prerequisite:

  • Plan the release schedule <— identify stories quickly <— write less now
  • Plan sprints later <— define gory detail JIT <— write more later

When I say “identify stories quickly”, I’m usually just talking about the story titles, one-liners. When I say “write less now”, I also mean analyze less now. We need to analyze just enough to come up with a list of what the likely candidate user stories might be.

When I say “write more later”, I also mean to analyze more later. For that, consider the lead time. In the context that I operate (see above), my advice is to have 2 or 3 sprints worth of work ready. What “ready” means and how much detail is needed depends on the organization. Notice that “later” might just be 1 day later, but it’s still later than “now”.

With that said, can your Product Owner Team “now” identify the (likely) stories for the remaining features before coming up with the full details for any of them? It seems to me that a BA or PO paired with someone technical should be able to look at the features on the wish list and come up with 1 to 5 user stories for each. They should be able to do that in a day or less.

P.S. If anyone cares, this approach to resolving a seeming conflict comes from the Theory of Constraints and is known as the Evaporating Cloud method.

The post Why you might struggle with Progressive Elaboration appeared first on LeadingAgile.

Categories: Blogs

Your Application Architecture Is Not A Single, Distinct Thing

Derick Bailey - new ThoughtStream - Mon, 08/17/2015 - 13:30

It’s easy to think of an application as having “an architecture.” We talk this way a lot, in software development. But the more I build similarly-architected systems and new systems with different architectures, the more I realize that architecture is not a single thing. Rather, architecture is a fluid and living thing – it grows, it shrinks, it changes over time.


An application’s architecture is not a single thing, but a collection of things that form the living whole.

A Single Application’s Architecture

I’m building a Backbone.js application for my client these days. It’s a component-based application architecture. There are many individual parts of the screen that are broken down into controllers using the mediator pattern, with views that render the display for each part.

But it’s also a messaging-based architecture where I use pub/sub (an event aggregator), request/response, and commands through an in-memory message broker. This is one of the ways that unrelated components communicate with each other.
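The pub/sub half of that broker can be sketched roughly like this. To be clear, this is a generic event aggregator written for illustration, not Backbone’s actual Events API or the code from this application:

```typescript
// Minimal in-memory event aggregator (pub/sub) sketch.
// Unrelated components talk through it without referencing each other.
type Handler = (payload?: unknown) => void;

class EventAggregator {
  private handlers = new Map<string, Handler[]>();

  // Subscribe a handler to a named event.
  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) || [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  // Publish an event; every subscriber gets the payload.
  trigger(event: string, payload?: unknown): void {
    for (const handler of this.handlers.get(event) || []) {
      handler(payload);
    }
  }
}
```

A cart component might `trigger("cart:updated", { items: 3 })` while a header badge listens with `on("cart:updated", ...)`; neither holds a reference to the other, which is what keeps the components decoupled.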

It also contains a wizard-style workflow where I register steps and views for each step. This is an architecture in and of itself – beyond the mediators, and beyond the broker – to facilitate the flow between high-level steps.

Then, there’s the back-end code on the server – a middleware based stack of request handlers. Various HTTP handlers get called at various times, for authentication, authorization, APIs and other aspects of the web server.

Oh, and there’s a messaging architecture on the back-end as well, with its own broker. And each of the applications sitting on the other side of the queue has its own architecture, still.

But wait, there’s also …

Let It Grow. Let It Find Its Own Shape.

My Backbone application did not start out with this architecture in mind. Sure, I had an idea that I wanted to do component-based UI with Backbone. I set out to implement that because of the complex UI needs. But along the way, I found the need for the wizard. I found the need for unrelated components to communicate with each other. I found the need to coordinate multiple view implementations with a single, higher level workflow … and so much more.

I found the need for most of this architecture, rather than forcing the application into a given architecture. I also know that the architecture that I have next month will not look entirely like what I have today. I will let the application grow and let the features and functionality determine the new and ever-changing shape of the architecture as I build.

I encourage you to do the same. Don’t force an application’s feature set into “the way things are”. Question the current design and look for opportunities to change, improve and grow the application in a meaningful way.

Categories: Blogs

Persistence with Docker containers - Team 1: GlusterFS

Xebia Blog - Mon, 08/17/2015 - 11:05

This is a follow-up blog from KLM innovation day

The goal of Team 1 was to have a GlusterFS cluster running in Docker containers and to expose the distributed file system to a container by ‘mounting’ it through a so-called data container.

Setting up GlusterFS was not that hard, the installation steps are explained here [installing-glusterfs-a-quick-start-guide].

The Dockerfiles we eventually created and used can be found here [docker-glusterfs].
Note: the Dockerfiles still contain some manual steps, because you need to tell GlusterFS about the other nodes so they can find each other. In a real environment this could be done by, for example, Consul.

Although setting up the GlusterFS cluster was not hard, mounting it on CoreOS proved much more complicated. We wanted to mount the folder through a container using the GlusterFS client, but to achieve that the container needs to run in privileged mode or with the ‘SYS_ADMIN’ capability. This has nothing to do with GlusterFS itself; Docker doesn’t allow mounts without these options. Eventually, mounting the remote folder can be achieved, but exposing this mounted folder as a Docker volume is not possible. This is a Docker shortcoming; see the Docker issue here.

Our second - not so preferred - method was mounting the folder in CoreOS itself and then using it in a container. The problem here is that CoreOS does not support the GlusterFS client, but it does have NFS support. So to make this work we exposed GlusterFS through NFS; the steps to enable it can be found here [Using_NFS_with_Gluster]. After enabling NFS on GlusterFS, we mounted the exposed folder in CoreOS and used it in a container, which worked fine.

Mounting GlusterFS through NFS was not what we wanted, but luckily Docker released their experimental volume plugin support. And our luck did not end there, because it turned out David Calavera had already created a volume plugin for GlusterFS. So to test this out we used the experimental Docker version 1.8 and ran the plugin with the necessary settings. This all worked fine, but this is where our luck ran out. When using the experimental Docker daemon in combination with this plugin, we can see in debug mode that the plugin connects to GlusterFS and reports that it is mounting the folder. But unfortunately it then receives an error, seemingly from the server, and unmounts the folder.

The volume plugin above is basically a wrapper around the GlusterFS client. We also found a Go API for GlusterFS. This could be used to create a pure Go implementation of the volume plugin, but unfortunately we ran out of time to actually try this.


Using a distributed file system like GlusterFS or Ceph sounds very promising, especially combined with the Docker volume plugin, which hides the implementation and allows you to decouple the container from the storage of the host.


Between the innovation day and this blog post the world evolved: Docker 1.8 came out, and with it a Ceph Docker volume plugin.

Categories: Companies
