Feed aggregator

Divergent Thinking is Great for Innovation and Agile

Divergence implies that something differs or develops in a different direction.  Divergent thinking applies this concept to help you gain new insights and solutions, and it can be essential to achieving an innovation mindset.  That mindset sharpens your focus on innovation and agility, leading to better business results.
For a company, divergent thinking can be used as a technique that provides an opportunity to create an internal market place of ideas.  These ideas can then be discussed, refined, and evolved into multiple solution options. Divergent thinking provides individuals, teams, and companies with the ability to consider lots of possible ways to satisfy a business need. Once divergent thinking occurs, there is a need to pair this with convergent thinking so that one solution is decided upon and experimented with. 
Unfortunately, divergent thinking isn’t encouraged in many work cultures.  While most companies say they want new and innovative ideas, there is a rush to move quickly to an answer, which typically crowds out sufficient divergent thinking.  Converging too soon reduces options and opportunity.
If you are looking to infuse a mindset where innovation can thrive, explicitly introduce divergent thinking into your organization.  Innovation is often introduced in the form of hack days, which are periodic and event driven.  A more effective approach may be to apply continuous divergent thinking throughout the steps of your idea-to-delivery (aka end-to-end) process.
Some may ask, “What is to keep divergent thinking from distracting or slowing down our work?”  The simple answer is to time-box the divergent thinking.  For example, if you have a business opportunity (aka an idea), allow a period of time for everyone to silently identify all possible solutions.  Place the ideas onto large post-its and post them up.
The key is to conduct the divergent thinking silently since this ensures no negative or anchored prejudice interferes.  Divergent thinking works best when ideas can flow freely without opposing opinions.  Once the time-boxed divergent period is concluded and all of the ideas are collected, then convergence may commence.
Divergent thinking is a great way to gain new insights and ideas for solutions to business problems and products.  If you are looking for ways to infuse innovative thinking into your organization, consider applying divergent thinking.  Divergent thinking may allow you to come up with the next generation idea or the 10x gain which can lead to better business results. 
Categories: Blogs

Resource Efficiency vs. Flow Efficiency, Part 3: Managing Performance

Johanna Rothman - Sun, 09/13/2015 - 21:54

Resource Efficiency vs. Flow Efficiency, Part 1: Seeing Your System explains resource efficiency and flow efficiency. Resource Efficiency vs. Flow Efficiency, Part 2: Effect on People explains why flow efficiency helps you get features done faster. Here, in part 3, I’ll address the performance management question.

New-to-agile (and some experienced) managers ask, “How can I manage performance? How will I know people are accountable for their work?”

These are good questions. Performance management and accountability are two different things in flow efficiency.

Here are some ways to manage performance:

  • Ask for the results you want.
  • Ask the team to work together to produce features.
  • Create communities of practice to help people learn their craftsmanship.
  • Provide the team members with the knowledge of how to provide feedback and coaching to each other.
  • As a manager, you provide meta-coaching and meta-feedback to team members. (The team members provide each other feedback and coaching, managing their daily performance.) (See also Four Tips for Managing Performance in Agile Teams.)

If you do these things, you will discover that people are accountable to each other for their work. The point of a standup is to help people vocalize their accountabilities. If the team works as a swarm or as multiple pairs/triads/whatever, they might not need a standup. They might need a kanban board with WIP (work in progress) limits. If your organization likes iterations because it provides boundaries for decision-making or providing focus, that works. It can work with or without a kanban board.

Here’s a question I like to ask managers, “Have you hired responsible adults?” The managers almost always say, “Yes.” They look at me as if I am nuts. I then ask, “Is there a reason for you to not trust them?”

Now we get to the real issues. If the managers have encouraged/enforced resource efficiency, the people often multitask. Or, they have to wait for other people to finish their work. People have a difficult time finishing their work “on time.” Managing “performance” is a function of the system. The system of resource efficiency requires someone to check up on people, because the expertise bottlenecks can become severe.

Instead, if you manage the system by focusing on what you want—features—instead of tasks, you don’t have to do much performance management. Will you make a mistake and hire someone who doesn’t fit? Maybe. The team can tell you.

What if you hire a superstar? Maybe you’re worried that person won’t have enough to do. My experience is that the team will ask the so-called superstar to help them with other things, making her even more of a superstar. In addition, this superstar can help with everyone learning more.

If you don’t rub people’s noses in the fact that someone might be “better” than they are, they will use that person well. Yes, sometimes, I was the person who learned from the superstar. Sometimes I was the superstar. I never noticed. I noticed I got better when I worked with certain people and asked to work with them more often.

Think about what makes people happy at work. Once you take money off the table by paying people enough, it’s all about mastery, autonomy, and purpose.

As managers, you create the system to provide mastery, autonomy, and purpose. You don’t have to manage what people do all day. If you think you do, why would you want to use agile?

BTW, managing for results isn’t new. Peter Drucker first published Managing for Results in 1964.

In part 4, I’ll address accountability and what it could mean in flow efficiency as opposed to resource efficiency.

Categories: Blogs

Resource Efficiency vs. Flow Efficiency, Part 2: Effect on People

Johanna Rothman - Sun, 09/13/2015 - 21:39

If you haven’t read Resource Efficiency vs. Flow Efficiency, Part 1: Seeing Your System, it explains optimizing for a given person’s work vs. optimizing for features. Some people (including managers) new to agile have questions about working in flow vs. optimizing for a person.

The managers ask:

  • How do I know the work won’t take longer if we move to flow efficiency?
  • How do I do performance management if a single person isn’t responsible for his/her work? (What’s the accountability, etc.?)

This post is about the length of the work and how people feel when they can’t finish work.

When you have experts, as in resource efficiency, the work queues up behind the expert. Let’s say you have three senior people with these separate expertise areas:

  • Cindy has deep knowledge of the internals of the database and how to make things fast. (I think of this as the platform.)
  • Brian has deep knowledge of the transaction layer and how to move data from one place to another in the product. (I think of this as the middleware.)
  • Abe has deep knowledge of how to present data to the customers and how to create a great customer experience. (I think of this as the UI layer.)

You want Features 1 and 2, which have significant UI impact. Abe is a master at iterating with all the necessary people to get the UI just right. In the meantime, Cindy and Brian go on to Features 3, 4, and 5 because they aren’t needed (yet) on Features 1 and 2.

If you measured cumulative flow, you would see that all five features are open for a while, because these three people have started on them and not finished anything.

Abe encounters a problem with the UI. The customer doesn’t respond or the product management people are not available. Something impedes his progress. So, he starts Feature 9, which is the next feature with significant UI design.

Notice that he doesn’t start the next ranked feature. He starts the next one he can work on.

Cindy and Brian also encounter delays because the test automation isn’t there, or the build takes too long (or something). Choose whatever happened to you last week as an impediment.

They need to stay busy, so they see that Feature 6 needs them both. They start on it. They realize there is a need for UI work. They ask Abe if he is available. No, Abe is working on Feature 9, not Feature 6. Now, Cindy and Brian have another feature in progress.

If this sounds like your project, you are not alone. This is a result of resource efficiency.

The human effect of resource efficiency is multitasking, a feeling of impending pressure because you can see the deadline but you’re not getting closer. You wonder if you will ever finish the work.

Instead, imagine if Cindy, Brian, and Abe, along with a tester, took Feature 1. Cindy and Brian might prepare their platform and middleware parts of the work. They might help Abe with the prototype generation and iteration. They might be able to bang on doors if Abe needs to concentrate on something specific. “I’ll be done with this in a couple of hours. Can you reserve a conference room and ask the product manager to be there? She always gives me a hard time on the UI. I want to know what she thinks of it now.” Or, “Can you call Customer A and get ready for a walkthrough in a couple of hours?”

You might think of this reserve-a-room or call-people work as something a project manager should do. In reality, these actions are servant leadership actions. Anyone can do them.

We often have lead-time for some parts of development. Even if we want to work in flow, we might need other people to finish.

Even if Cindy and Brian can’t directly help with the UI, they can make it easier for Abe to succeed. And, if the tester is involved at the beginning, the tester can create automated tests that don’t depend on the GUI. Maybe the developers not working on product code can help with an automated test framework. (I find that testers new to data-driven  or automated testing don’t always know how to start. Developers might be able to help.)

Imagine if Cindy, Brian, and Abe are not the only people on their team. They are the most senior, and there are two or three other developers on the team. What happens when those more junior developers have a question? Cindy, Abe, and Brian have to stop working on their stories to work with the other people. Or, maybe they don’t and the other people are stuck. I see this in teams all the time. I bet you do, too.

When we optimize for resource efficiency, we have people with unfinished, open work. The more work they have not done, the more they have new work queuing up behind them. They work hard all day and don’t feel as if they accomplish anything, because nothing gets to done.

When we optimize for flow efficiency, people finish things, together. They have less work in progress. They feel a sense of accomplishment because they can point to what they have completed.

I can’t guarantee a team can finish faster in flow because I am not an academic. However, you’ve heard, “Many hands make light work.” That’s the idea. When we help each other move a chunk of work to done, we all succeed faster.

Part 3 will talk about what managers perceive as performance management.

Categories: Blogs

Resource Efficiency vs. Flow Efficiency, Part 1: Seeing Your System

Johanna Rothman - Sun, 09/13/2015 - 21:22

I’ve been working with a number of people who want to work in a more agile way. These nice folks have one stumbling block: resource efficiency vs. flow efficiency. This is partly because of how they see the system of work.

If you ever used phases or a waterfall approach, you might have tried to optimize resource efficiency:

[Image: Resource Efficiency] In this image, you see that the work flows from one person to another. What this picture does not show is that there are delays in the workflow.

Each person is a specialist. That means they—and only they—can do their work. The more senior and the more specialized they are, the more they need to do the work and the less capable other people are (for that work). Think of a UI designer or a database admin. I often see teams who don’t have those necessary-for-them roles.

With resource efficiency, you optimize for each person along the way. You get the feature when you get it. Each person is “fully utilized.” This leads to a cost of delay. (See Diving for Hidden Treasures to see more costs of delay.) It also leads to problems such as:

  • “It takes forever to bring people up to speed around here.”
  • “Only Fred can work on that. He’s the only one who knows that code (or whatever).”
  • “You can’t take a vacation. That’s just before we want to ship and you’re the only one who knows that part of the product.”
  • Many features are partly done and too few are complete. (The work in progress is quite high.)

Contrast that with flow efficiency:


In flow efficiency, the team takes the feature. The team might specialize in that feature area (I see this a lot on  programs). If anyone needs to be away from work for a day or a week or two, the team can continue to do the work without that one person. Yes, the team might be a little slower, but they can still release features.

In flow efficiency, it doesn’t matter what each person “knows.” The team optimizes its work to get features done. You can see this when teams limit the backlog coming into an iteration, when they pair, swarm, or mob to finish features. If the team uses kanban and they keep to their work in progress limits, they can see flow efficiency also.

Resource efficiency is about optimizing at the level of the individual. Flow efficiency is about optimizing for the feature.

If you are transitioning to agile, ask this question, “How do we optimize for features? It doesn’t matter if we keep everyone busy. We need to release features.” This is a mindset change and can challenge many people.

Here’s why you should ask this question: Your customers buy features. They don’t buy your busy-ness.

When I tell managers about resource vs. flow efficiency, they often react, “Yes. But how do we know the features won’t take more time?” and “How will we know how to do performance management?” I’ll address that in parts 2 and 3.

Categories: Blogs

Discovering the Value of People

Evolving Excellence - Sun, 09/13/2015 - 04:52

Big news in the business world:

Wal-Mart is famous for keeping costs down, including employee-related costs. In Joplin, the company is testing a new approach: investing in workers through higher wages and training, on the theory that this will pay off all around—for customers, the company and employees.

Yes, at just one of their 4500 stores, Wal-Mart has discovered skills training.  If it works they plan to roll out this innovative program to the other stores.

That isn't a story from 1975 or even 1995.  It's from this past week.  September, 2015.  Good for them, though, even if they did take a few decades to realize the potential value of people.  A concept that many other companies in many other industries have leveraged to create competitive advantage for a long time.

Pretty much every organization has a mission statement, often gathering dust on the wall in a corner of a conference room, that says "our employees are our most valuable asset."  Really?  How is that demonstrated?

I bet Whirlpool had a statement like that as they were laying off thousands of highly experienced people at their Fort Smith plant to chase "cheap" labor at a new facility filled with inexperienced people in Ramos Arizpe, Mexico, all while hiring at their nearby Clyde plant.  Then, only a couple of years later, they started looking for people to refill a plant they had closed.  You can't make that stuff up.

But that's what happens when you run a company based on traditional accounting methods, where labor is purely a cost and there is no offsetting P&L or balance sheet line for the value of people.  There's a benefit to reducing cost, there is no balancing benefit to preserving the value of brains.

It takes a strong manager to realize that those brains are creating value that more than offsets their cost, even if it isn't directly shown on the financial statements, and to buck the questions of their bosses and financial folks.  It takes an even stronger and more capable leader to invest in, develop, and mentor those brains to really tap into the potential value.  Organizations that have such leaders understand the problems with traditional accounting.  As a side note, you can learn more about those problems, and get to know some of those leading organizations, at the Lean Accounting Summit next month.

Truly empowered high-performing people can have an impact far beyond improvements in productivity and quality.  Consider your perception of the Chipotle brand after reading this article about a fatal accident that happened in front of one of their restaurants.

She [Chipotle shift leader] appeared to be in her early 20s - not much older than her direct reports or the victim of the accident. Yet, she acted with the compassion and appropriateness of a far older leader.

The next day, I called the Chipotle restaurant to offer my appreciation to the store's manager. I told the leader how supportive, flexible, and respectful the Chipotle crew was to all in attendance.

As our phone conversation drew to a close, I said, "I know that our presence last night was not what you expected. We no doubt hurt your business."

Before I finished my thought, the store manager responded, "There are a lot more important things in life than making our numbers last night. I'm just glad we were able to be there."

Great people, led by great leaders, create great companies.  As Richard Branson says, "Clients do not come first.  Employees come first.  If you take care of and develop your employees, they will take care of the clients."

I was thinking about a sandwich for lunch, but I think I'll head down to Chipotle for a veggie burrito.

Categories: Blogs

The Agenda for the Geneva Conference is Available

Sonar - Fri, 09/11/2015 - 15:02

The Geneva SonarQube conference is going to take place on the 23rd-24th of September in Geneva, and it is still possible to register.

We want this 2-day conference to be as valuable as possible for participants. It took us a little bit of time to put the agenda together, but we believe we have a great agenda for the conference.

Categories: Open Source

Manage Agile, Berlin, Germany, October 5-8 2015

Scrum Expert - Fri, 09/11/2015 - 13:32
The Manage Agile conference is a four-day event taking place in Berlin that focuses on Agile project management approaches. It is divided into two workshop days (October 5th and 8th) and two conference days. The conference focuses on management topics and is a networking platform where specialists and managers compare notes yearly to establish Agile topics not only in software engineering but also in the ...
Categories: Communities

The Agile Manager: What’s Your Role? [Podcast]

In business today, the role of a manager is at a crossroads. The traditional command-and-control manager, who’s rooted in positional power, exists in most organizations. However, there’s a growing swell of support for a philosophy that embraces the wide sharing of intent so decision making can be opened up to employees at all levels. Listen to […]

The post The Agile Manager: What’s Your Role? [Podcast] appeared first on Blog | LeanKit.

Categories: Companies

Customer Spotlight: Elekta

Rally Agile Blog - Thu, 09/10/2015 - 21:45

Occasional stories about Rally customers who are doing cool and interesting things.

Elekta develops clinical solutions for treating cancer and brain disorders, and yes — the technologies are as powerful and sophisticated as you might expect. The systems require an enormous effort behind the scenes to maintain and integrate, not to mention the constant drive toward innovation.

Until recently, the engineering teams at Elekta were struggling to keep up. It’s a familiar story in the waterfall world: competing priorities, high work in progress (WiP), missed deadlines, and an atmosphere of disappointment and distrust.

In the past two years, the organization has turned these trends around using an agile transformation to gain clearer visibility into capacity and create better alignment between engineering and the business. Leaders are taking decisive actions to limit WiP and communicate priorities clearly across the company. As a result, quality and predictability have improved.

How did Elekta do it? Hear directly from Todd Powell, Executive VP of Elekta Software, and read on for some of the factors contributing to their successful transformation.

Success factor 1: Get the PMO involved

In 2013, Elekta Software launched its agile at scale transformation, aiming to overhaul the entire waterfall approach. The idea was to align the work with business value, restore trust between engineering and the business, and create a more rewarding work environment for employees. It also meant giving up extensively scoped plans and adopting a more agile approach to execution.

This was a big shift, especially for a PMO accustomed to defining and documenting requirements from top to bottom. They quickly realized that a “directionally correct” plan turned out to be far more reliable than one that was rigidly scoped. And it was faster — at the scale of a few weeks rather than three months. The PMO has played a major role in driving the organizational change necessary for Elekta to truly transform its product development lifecycle.

Success factor 2: Use data to drive decisions

In the past, the product and engineering groups didn’t have a clear understanding of their actual capacity, so they couldn’t push back on roadmap commitments. In effect, there was no forcing function for true business prioritization — instead, everything was a high priority.

By adopting Rally’s portfolio scenario planning prototype (productized in June), the PMO can spin up scenarios based on different growth variables across a three-year planning horizon. These scenarios are key to demonstrating capacity to decision makers and showing how different funding levels affect the backlog. They also give the company a lot of data on performance and speed, which enables realistic decisions about the work.

With this data-driven portfolio planning approach, the PMO put the agile transformation on solid footing and convinced leadership to make important changes in how the work is funded, flowed to teams and delivered.

Success factor 3: Think beyond product delivery

Elekta’s agile transformation is creating real value for the business, not just in terms of engineering predictability and quality. It has given the company a framework for aligning engineering with the product organization and beyond, including marketing and sales. This alignment goes a long way toward creating a culture of trust and higher morale. Team members can rely on each other to deliver on schedule, and engineering teams are no longer pulled into “death march” projects.

Listen to the full Elekta PMO story, as presented at RallyON 2015.

Interested in portfolio management capabilities in Rally? Take the product tour.

Learn how agile portfolio management helps you translate strategic goals into realistic execution plans. Read our white paper.

Chase Doelling
Categories: Companies

Managing JavaScript’s this With ES6 Arrow Functions

Derick Bailey - new ThoughtStream - Thu, 09/10/2015 - 13:30

For a long time now, there have been 5 basic rules for managing JavaScript’s “this”. I’ve previously compiled these rules into an email course and ebook that outline how to work with them effectively.

Now, with ES6 (officially known as ES2015) as the latest and greatest standard of the JavaScript language, I’ve updated both the ebook and email course to include the new rule that helps you manage “this”: arrow functions – and I want to share this new chapter with you!

Arrow functions

Fortunately, this rule is not another arcane syntax exception, but a simplification taken from other languages to reduce the code in callback functions and manage “this” more logically. 

ES6 Arrow Functions

With ES6, a callback function can be defined using the new “arrow function” syntax. This syntax allows you to get rid of the word “function” and put a literal “=>” arrow in between the parameter list and function body.

There are several syntax variations, depending on the number of arguments and lines of code in the function. No matter which syntax version you are using, though, they all flow from this basic form:
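The original listing isn't shown here, so as a sketch (the function and parameter names below are invented for illustration), the basic form puts the parameter list, the arrow, and the body in sequence:

```javascript
// Basic form: (parameters) => { function body }
const logEmployee = (employee) => {
  console.log(employee.name);
};

// With a single parameter the parentheses are optional,
// and a single-expression body returns its value implicitly:
const getName = employee => employee.name;

logEmployee({ name: "Dana" }); // prints "Dana"
getName({ name: "Dana" });     // "Dana"
```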

If you were to write the same callback function without the arrow syntax, it would look like this:
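For comparison, a sketch of an equivalent callback written with the function keyword (again, the names are hypothetical):

```javascript
// The same callback as a standard function expression:
// the word "function" is back, and "return" is explicit
const getName = function (employee) {
  return employee.name;
};

getName({ name: "Dana" }); // "Dana"
```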

While it is possible to reduce the arrow syntax further, the difference between arrow functions and standard functions is insignificant when you look at the number of characters to type.

The real value in arrow functions is not in the syntax reduction, then, but in the way it manages “this” for you.

Lexical Scope

When you write an arrow function in JavaScript, the value of “this” is said to be determined through “lexical analysis”. This is a fancy phrase to say that “this” comes from the surrounding code where the function is defined, rather than from how the function is called.

As a simple example, look at the following code and the arrow function within:
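The article's original listing isn't reproduced here, so this is a reconstruction based on the names the text mentions (orgChart, addNewEmployee, selectManager, and a "complete" event); the tiny runWhenComplete helper is a stand-in for whatever async mechanism the real code used:

```javascript
// Stand-in for the async operation behind the original "complete" event;
// here it simply invokes the callback so the example is runnable
function runWhenComplete(callback) {
  callback();
}

const orgChart = {
  selectManager: function (employee) {
    this.manager = employee;
  },

  addNewEmployee: function () {
    const employee = { name: "Dana" };

    // Arrow function callback: "this" inside it is the same "this"
    // as inside addNewEmployee
    runWhenComplete(() => {
      this.selectManager(employee);
    });
  }
};
```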

Note that there is no “that = this” hand-waving… no .bind, .call or .apply, either. There is nothing in this code to explicitly state what the value of “this” will be when “this.selectManager(employee)” is called.

The arrow function syntax uses the knowledge of where the code lives to see that it should use the value of “this” from the parent function. In other words, the value of “this” in the “addNewEmployee” function will also be the value of “this” inside of the arrow function callback for the event handler. 

This has some tremendous benefits, as you can imagine. But this doesn’t solve every single riddle of “this”, and it doesn’t absolve you from the other rules either.

Changing The Surrounding “this” Changes The Arrow Function’s

When working with the code in the above “orgChart” example, the value of “this” within the addNewEmployee function is still managed by the standard rules of “this”. If you call the method in this manner:
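A sketch of that invocation, with a minimal reconstruction of the orgChart object the article describes (the helper and data are made up so the snippet stands alone):

```javascript
function runWhenComplete(callback) { callback(); }

const orgChart = {
  selectManager: function (employee) { this.manager = employee; },

  addNewEmployee: function () {
    const employee = { name: "Dana" };
    // Arrow callback inherits "this" from addNewEmployee
    runWhenComplete(() => this.selectManager(employee));
  }
};

// Method invocation pattern: "this" inside addNewEmployee is orgChart,
// so the arrow callback's "this" is orgChart too
orgChart.addNewEmployee();
```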

Then the value of “this” within the “addNewEmployee” function and within the arrow function callback will be the “orgChart” object. However, if you change the invocation pattern for addNewEmployee, you will also change the value of “this” in both locations.
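A sketch of the broken variant, again with a reconstructed orgChart, using the standard function invocation pattern in strict mode:

```javascript
"use strict";

function runWhenComplete(callback) { callback(); }

const orgChart = {
  selectManager: function (employee) { this.manager = employee; },

  addNewEmployee: function () {
    const employee = { name: "Dana" };
    runWhenComplete(() => this.selectManager(employee));
  }
};

// Standard function invocation: "this" inside addNewEmployee is undefined
// (strict mode), and the arrow callback inherits that broken "this"
const detached = orgChart.addNewEmployee;

try {
  detached();
} catch (err) {
  console.log(err instanceof TypeError); // true: selectManager is unreachable
}
```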

In this example, the standard function invocation pattern is used, leaving the value of “this” to be either the global object or undefined if your code is in strict mode. When the code gets to the callback for the “complete” event handler, the value of “this” will not be what you expect. The callback function will fail because “this” is not pointing at “orgChart”, and the “selectManager” method will not be found.

Worth 10x Its Weight In Deleted Code

The danger in the arrow function’s value of “this” may sound dramatic, but it is no different than the danger of using “this” with “that = this;” or “.apply” or any other method of managing “this” in JavaScript. The benefits of the new arrow function syntax also far outweigh the potential detriments. 

Use the arrow function syntax with your JavaScript code, as soon as you can. This feature is worth the mental weight (and potential tooling weight to run ES6 code today) and the weight of all the code you will delete. You’ll not have to worry about “that = this”, or “.bind”, “.call” and “.apply” when working with arrow functions. You’ll only have to manage the value of “this” in the surrounding code – which is something you are already doing. 

Master The Other 5 Rules

The 5 core rules for managing JavaScript’s “this” still apply, even if you are using ES6 arrow functions, as mentioned already. If you’d like to master the complete set of rules and greatly improve your own JavaScript abilities, check out The Rules To Master JavaScript’s “this” – available in two formats:

The content of both of these formats is the same. Each format includes all 5 core rules for managing “this”, and has been updated to include the above content as email / chapter 6!

Get the ebook or the email course, and master JavaScript’s most notorious keyword.

Categories: Blogs

Learning Agile Software Development the Hard Way

1. Read
Read one or more of the following:
  • (FREE) Explore Don Wells' site: Extreme Programming: A Gentle Introduction.
  • (FREE) Read the Scrum Guide.

2. Participate in the community
Join a local user group.  currently dominates.  I'd suggest also finding or setting up a local Lean Coffee.

Follow Agile people on Twitter.  My handle is @jchyip.  There's an old 2012 list of "The Top 20 Most Influential Agile People".  Probably a reasonable group of people to start with.

Subscribe to Agile blogs.  There's an old 2011 list of "The Top 200 Agile Blogs".  Probably a reasonable place to start.

3. Learn and practice the craft
  1. Learn about User Stories and User Story Mapping
  2. Learn Test Driven Development (TDD)
  3. Practice TDD and Pair  Programming
    • Practice using Code Katas.  Alternatively, look for similar language-specific exercises for your particular programming language.  For example, Ruby Quiz.
    • Find or set up a Coding Dojo.  This seems to have become rarer, so instead...
    • Join or host a CodeRetreat.
  4. Learn about testing in the Agile context
  5. Learn about Continuous Integration and Continuous Delivery
  6. Write about what you are learning
4. Learn the big picture
  1. Read about the Agile Fluency Model
  2. Read Patterns of Enterprise Application Architecture by Martin Fowler
  3. Explore Lean Software Development
  4. Explore Kanban for software development
  5. Explore Lean Startup
  6. Watch Spotify Engineering Culture videos (Part 1) (Part 2)
  7. Attend conferences.  I recommend smaller, local, not vendor-focused conferences rather than the massive ones.  Open Space conferences tend to be good if you get the right crowd.  YOW! / GOTO / QCon tend to be good.  Lean Kanban conferences tends to be good.
5. Explore the lesser known
  1. Read Crystal Clear by Alistair Cockburn
  2. Read Agile Software Development by Alistair Cockburn
  3. Read Lean Product and Process Development by Allen C. Ward
  4. Read The Principles of Product Development Flow by Donald G. Reinertsen
Categories: Blogs

New Scrum / Agile Resource Library

Notes from a Tool User - Mark Levison - Wed, 09/09/2015 - 22:20

If you’ve ever attended Mark’s training, you know that he reads an insane amount of reference and research on Scrum and Agile. He shares those resources – some invaluable, some handy, and some simply provocative and obscure – with course attendees in a long and growing list of links that he believes can help them toward their Scrum/Agile goals.

Scrum Master Resources – A good ScrumMaster needs to know much more than just Scrum applications and exercises. An understanding of supporting tools and methodologies, an arsenal of examples and approaches to solve problems and keep things fun but still productive, and a big picture view of how Agile can be applied throughout an organization as well as other aspects of daily life, are all important keys to being an effective ScrumMaster. These resources will provide helpful ideas and suggestions on these and much more.

Scrum Product Owner Resources – A Scrum Product Owner needs to hone their skills in communicating needs and priorities to the team, keep aware of market demands and changes, and be an effective – but diplomatic and fair – liaison between buyers and builders. These resources will help with practical advice on everything from creating personas and user stories, to backlog grooming, and beyond.

Scrum Developer Resources – References include real stories of Emergent Architecture, examples of how Continuous Deployment can be done and where it is already being done, Legacy code survival strategies, and more.

This carefully curated and organized collection of all things Agile will continue to be updated with treasures that Mark finds. It’s a huge and valuable resource of more than 800 articles, posts, links, books, and more.

If you know of a great resource that we haven’t included and should, tell Mark on Facebook or Twitter!

Categories: Blogs

About More with LeSS: A Decade of Descaling with Large-Scale Scrum

DFW Scrum User Group - Wed, 09/09/2015 - 15:20
We were fortunate in June to have Craig Larman, co-creator of Large-Scale Scrum (LeSS), speak to our group. The main goal of LeSS is not to enable traditional big groups to “meet their commitment” more efficiently—it is to see the ineffectiveness … Continue reading →
Categories: Communities

Private properties in ES2015: the good, bad and ugly

Xebia Blog - Wed, 09/09/2015 - 13:16

This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week!

One of the new features of ECMAScript 2015 is the WeakMap. It has several uses, but one of the most promoted is to store properties that can only be retrieved by an object reference, essentially creating private properties. We'll show several different implementation approaches and compare it in terms of memory usage and performance with a 'public' properties variant.

A classic way

Let's start with an example. We want to create a Rectangle class that is given the width and height of the rectangle when instantiated. The object provides an area() function that returns the area of the rectangle. The example should make sure that the width and height cannot be accessed directly, but both values must still be stored.

First, for comparison, a classic way of defining 'private' properties using the ES2015 class syntax. We simply create properties with an underscore prefix in the class. This of course doesn't hide anything, but a user knows that these values are internal and shouldn't let code depend on them.

class Rectangle {
  constructor(width, height) {
    this._width = width;
    this._height = height;
  }

  area() {
    return this._width * this._height;
  }
}
We'll do a small benchmark. Let's create 100,000 Rectangle objects, access the area() function, and benchmark the memory usage and speed of execution. See the end of this post for how this was benchmarked. In this case, Chrome took ~49ms and used ~8Mb of heap.

WeakMap implementation for each private property

Now, we introduce a WeakMap in the following naive implementation, which uses one WeakMap per private property. The idea is to store a value using the object itself as the key. This way, only code with access to the WeakMap can read the private data, and that should of course be only the class itself. A benefit of the WeakMap is that the private data in the map is garbage-collected when the original object itself is deleted.

const _width = new WeakMap();
const _height = new WeakMap();

class Rectangle {
  constructor(width, height) {
    _width.set(this, width);
    _height.set(this, height);
  }

  area() {
    return _width.get(this) * _height.get(this);
  }
}
To create 100,000 Rectangle objects and access the area() function, Chrome took ~152ms and used ~22Mb of heap on my computer. We can do better.

Faster WeakMap implementation

A better approach would be to store all private data in an object for each Rectangle instance in a single WeakMap. This can reduce lookups if used properly.

const map = new WeakMap();

class Rectangle {
  constructor(width, height) {
    map.set(this, {
      width: width,
      height: height
    });
  }

  area() {
    const hidden = map.get(this);
    return hidden.width * hidden.height;
  }
}
This time, Chrome took ~89ms and used ~21Mb of heap. As expected, the code is faster because it saves one set and one get call. Interestingly, memory usage is more or less the same, even though we're storing fewer object references. Maybe a hint about the internal implementation of a WeakMap in Chrome?
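A quick check, repeating the single-WeakMap Rectangle so the snippet stands alone and using sample values of our own, confirms that the data really is hidden from the outside:

```javascript
const map = new WeakMap();

class Rectangle {
  constructor(width, height) {
    // all private data lives in the WeakMap, keyed by the instance
    map.set(this, { width: width, height: height });
  }

  area() {
    const hidden = map.get(this);
    return hidden.width * hidden.height;
  }
}

const r = new Rectangle(3, 4);
console.log(r.area());        // 12
console.log(r.width);         // undefined: nothing is stored on the object itself
console.log(Object.keys(r));  // []: no own properties to inspect
```

Unlike the underscore convention, there is no way to reach the width or height without a reference to the WeakMap itself.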

WeakMap implementation with helper functions

To improve the readability of the above code, we could create a helper lib that exports two functions, initInternal and internal, in the following fashion:

const map = new WeakMap();

let initInternal = function (object) {
  let data = {};
  map.set(object, data);
  return data;
};

let internal = function (object) {
  return map.get(object);
};
Then, we can initialise and use the private vars in the following fashion:

class Rectangle {
  constructor(width, height) {
    const int = initInternal(this);
    int.width = width;
    int.height = height;
  }

  area() {
    const int = internal(this);
    return int.width * int.height;
  }
}
For the above example, Chrome took ~108ms and used ~23Mb of heap. It is a little bit slower than the direct set/get call approach, but is faster than the separate lookups.


  • The good: real private properties are now possible
  • The bad: it costs more memory and degrades performance
  • The ugly: we need helper functions to make the syntax okay-ish

WeakMap comes at both a performance and a memory usage cost (at least as tested in Chrome). Each lookup for an object reference in the map takes time, and storing data in a separate WeakMap is less efficient than storing it directly in the object itself. A rule of thumb is to do as few lookups as possible. For your project, it will be a tradeoff between real private properties backed by a WeakMap and the lower memory usage and higher performance of plain properties. Make sure to test your project with different implementations, and don't fall into the trap of micro-optimising too early.
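As a sketch of one possible middle ground, not covered in the comparison above: ES2015 Symbol keys avoid the WeakMap lookup cost because the data lives on the object itself, while staying out of normal property enumeration. Note that this is only soft privacy, since Object.getOwnPropertySymbols can still reveal the keys:

```javascript
const _width = Symbol('width');
const _height = Symbol('height');

class Rectangle {
  constructor(width, height) {
    // symbol-keyed properties are stored on the instance: no map lookup needed
    this[_width] = width;
    this[_height] = height;
  }

  area() {
    return this[_width] * this[_height];
  }
}

const r = new Rectangle(3, 4);
console.log(r.area());        // 12
console.log(Object.keys(r));  // []: symbol keys don't appear in normal enumeration
```

Whether soft privacy is acceptable is a per-project decision; if the data must be truly unreachable, the WeakMap approach remains the only ES2015 option.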

Test reference

Make sure to run Chrome with the following parameters: --enable-precise-memory-info --js-flags="--expose-gc" - this enables detailed heap memory information and exposes the gc function to trigger garbage collection.

Then, for each implementation, the following code was run:

const heapUsed = [];
const timeUsed = [];

for (let i = 1; i <= 50; i++) {
  const instances = [];
  const areas = [];

  gc(); // trigger garbage collection for a clean measurement

  const t0 = performance.now();
  const m0 = performance.memory.usedJSHeapSize;

  for (let j = 1; j <= 100000; j++) {
    var rectangle = new Rectangle(i, j);
    instances.push(rectangle); // keep a reference so the object isn't collected
    areas.push(rectangle.area());
  }

  const t1 = performance.now();
  const m1 = performance.memory.usedJSHeapSize;

  heapUsed.push(m1 - m0);
  timeUsed.push(t1 - t0);
}

var sum = function (old, val) {
  return old + val;
};
console.log('heapUsed', heapUsed.reduce(sum, 0) / heapUsed.length);
console.log('timeUsed', timeUsed.reduce(sum, 0) / timeUsed.length);
Categories: Companies

On Epiphany and Apophany - Elizabeth Keogh - Wed, 09/09/2015 - 13:14
We probe, then sense, then respond.

If you’re familiar with Cynefin, you know that we categorize the obvious, analyze the complicated, probe the complex and act in chaos.

You might also know that those approaches to the different domains come with a direction to sense and respond, as well. In the ordered domains – the obvious and complicated, in which cause and effect are correlated – we sense first, then we categorize or analyze, and then we respond.

In the complex and chaotic domains, we either probe or act first, then sense, then respond.

Most people find action in chaos to be intuitive. It’s a transient domain, after all; it resolves itself quickly, and it might not resolve itself in your favour… and is even less likely to do so if you don’t act (the shallow dive into chaos notwithstanding). We don’t sit around asking, “Hm, I wonder what’s causing this fire?” We focus on putting the fire out first, and that makes sense.

But why do we do this in the complex domain? Why isn’t it useful to make sense of what we’re seeing first, before we design our experiments?

As with many questions involving human cognition, the answer is: cognitive bias.

We see patterns which don’t exist.

The term “epiphany” can be loosely defined as that moment when you say, “Oh! I get it!” because you’ve got a sudden sense of understanding something.

The term “apophany” was originally coined as a German word for the same phenomenon in schizophrenic experiences; that moment when a sufferer says, “Oh! I get it!” when they really don’t. But it’s not just schizophrenics who suffer from this. We all have this tendency to some degree. Pareidolia, the tendency to see faces in objects, is probably the best-known type of apophenia, but we see patterns everywhere.

It’s an important part of our survival. If we learn that the berry from that tree with those type of leaves isn’t good for us, or to be careful of that rock because there are often snakes sunning themselves there, or to watch out for the slippery moss, or that the deer come down here to drink and you can catch them more easily, then you have a greater chance of survival. We’re always, always looking out for patterns. In fact, when we find them, it’s so enjoyable that this pattern-learning, and application of patterns in new contexts, forms the heart of video games and is one reason why they’re horribly addictive.

In fact, our brains reward us for almost seeing the pattern, which encourages us to keep trying… and that’s why gambling is also addictive, because a lot of the time, we almost win.

In the complex domain, cause and effect can only be understood in retrospect.

This is pretty much the definition of a complex domain; one in which we can’t understand cause and effect until after we’ve caused the effect. Additionally, if you do the same thing again and again in a complex domain, it will not always have the same effect each time, so we can’t be sure of which cause might give us the effect. Even the act of trying to make sense of the domain can itself have unexpected consequences!

The problem is, we keep thinking we understand the problem. We can see the root causes. “Oh! I get it!”… and off we blithely go to “fix” our systems.

Then we’re surprised when, for instance, complexity reasserts itself and making our entire organization adopt Scrum doesn’t actually enable us to deliver software like we thought it would (though it might cause chaos, which can give us other opportunities… if we survive it).

This is the danger of sensing the problem in the complex domain; our tendency to assume we can see the causes that we need to shift to get the desired effects. And we really can’t.

The best probes are hypothesis-free.

Or rather, the hypothesis is always, “I think this might have a good impact.” Having a reasonable reason for thinking this is called coherence. It’s really hard, though, to avoid tacking on, “…because this will be the outcome.” In the complex domain, you don’t know what the outcome is going to be. It might not be a good outcome. That’s why we spend so much time making sure our probes are safe-to-fail.

I’ve written a fair bit on how to use scenarios to help generate robust experiments, but stories – human tales of what’s happening or has happened – are also a good way to find places that probes might be useful.

Particularly, if you can’t avoid having a hypothesis around outcomes (and you really can’t), one trick you can try is to have multiple outcomes. These can be conflicting, to help you check that you’re not hung up on any one outcome, or even failure outcomes that you can use to make sure your probe really is safe-to-fail.

Having multiple hypotheses means we’re more likely to find other things that we might need to measure, or other things that we need to make safe.

I really love Sensemaker.

Cognitive Edge, founded by Dave Snowden of Cynefin fame, has a really lovely bit of software called Sensemaker that collects narrative fragments – small stories – and allows the people who write those stories to say something about their desirability using Triads and Dyads and Stones.

Because we don’t know whether a story is desirable or not, the Triads and Dyads that Sensemaker uses are designed to allow for ambiguity. They usually consist of either two or three things that are all good, all bad or all neutral.

For instance, if I want to collect stories about pair programming, I might use a Dyad which has “I want to pair-program on absolutely everything!” at one end, and “I don’t want to pair-program on anything, ever,” at the other. Both of those are so extreme that it’s unlikely anyone wants to be right at either end, but they might be close. Or somewhere in the middle.

In CultureScan, Cognitive Edge use the triad, “Attitudes were about: Control, Vulnerability, or Indifference.” You can see more examples of triads, together with how they work, in the demo.

If lots and lots of people add stories, then we start seeing clusters of patterns, and we can start to think of places where experiments might be possible.

A fitness landscape from Cognitive Edge shows loose and tightly-bound clusters, together with possible directions for movement.

In the fitness landscapes revealed by the stories, tightly-bound clusters indicate that the whole system is pretty rigidly set up to provide the stories being seen. We can only move them if there’s something to move them to; for instance, an adjacent cluster. Shifting these will require big changes to the system, which means a higher appetite for risk and failure, for which you need a real sense of urgency.

If you start seeing saddle-points, however, or looser clusters… well, that means there’s support there for something different, and we can make smaller changes that begin to shift the stories.

By looking at what kind of things the stories there talk about, we can think of experiments we might like to perform. The stories, though, have to be given to the people who are actually going to run the experiments. Interpreting them or suggesting experiments is heading into analysis territory, which won’t help! Let the people on the ground try things out, and teach them how to design great experiments.

A good probe can be amplified or dampened, watched for success or failure, and is coherent.

Cognitive Edge have a practice called Ritual Dissent, which is a bit like the “Fly on the Wall” pattern but done in a fairly negative way: the group to whom the experiment is being presented critiques it against the criteria above. I’ve found that testers, with their critical “What about this scenario?” mindsets, can really help to make sure that probes really are good probes. Make sure the person presenting can take the criticism!

There’s a tendency in human beings, though, to analyze their way out of failure; to think of failure scenarios, then stop those happening. Failure feels bad. It tells us that our patterns were wrong! That we were suffering from apophany, not epiphany.

But we don’t need to be afraid of apophany. Instead of avoiding failure, we can make our probes safe-to-fail; perhaps by doing them at a scale where failure is survivable, or with safety nets that turn commitments into options instead (like having roll-back capability when releasing, for instance), or – my favourite – simply avoiding the trap of signalling intent when we didn’t mean to, and instead, communicating to people who might care that it’s an experiment we want to try.

And that it might just make a difference.

Categories: Blogs

Targetprocess 3.7.8: Custom Graphical Reports Improvements

TargetProcess - Edge of Chaos Blog - Wed, 09/09/2015 - 11:44
Custom Graphical Reports Improvements – Parallel Data

Starting from v.3.7.8 you can add several data plots to the same axis and see them at the same time.

For example, you may want to see the average, minimum and maximum Cycle Time trends for your user stories, grouped by Team.

parallel report settings

result parallel report

Fixed Bugs
  • Top settings menu: “Report Issue” renamed to “Email Support”
  • Fixed top Project/Team selector to show programs with 0 dependent projects
  • Fixed problem with board setup not opening after refreshing the page
  • Fixed quick add popup auto closing in the list view
  • Fixed error when adding a test case in a No User Story cell on a board with user stories as lanes and test cases as cards
  • Burn Down Charts fixed to show independent Features and Epics effort
  • Fixed descriptions with CK editor to work in Internet Explorer 10
  • Fixed left menu to preserve group expansion state on a page reload
  • Fixed corrupted emoji in descriptions and comments
  • Fixed SSO settings edit
  • Fixed quick add on double click in a lane
  • Fixed plugin profile deletion
  • Fixed unable to plan Projects from the Timeline view
  • Fixed error when quickly adding an entity with ‘No team’ on a Board with Team/Team State axes
  • Fixed XS card zoom level on a Timeline to show unit title for allocations
Categories: Companies

AgileByExample, Warsaw, Poland, September 28-30, 2015

Scrum Expert - Wed, 09/09/2015 - 09:22
AgileByExample is a Lean and Agile three-day conference taking place in Warsaw, Poland, that helps you learn Agile through live examples. The last day is dedicated to a Lean Agile Dojo. Keynotes, talks and discussions are all in English. In the agenda of the AgileByExample conference you can find topics like “5 Whys Root Cause Analysis”, “Taking Back Agile: Returning to our Core Values and Practices”, ...
Categories: Communities

Secrets from the Experience Report on Kanban

Agile Ottawa - Wed, 09/09/2015 - 06:13
Once again, the community met and shared their stories, their successes and their failures. This time, it was about Kanban. Dag Rowe facilitated an agile 101 at the beginning of the evening with a quick exploration of some Agile Metrics.  While preparing the crowd for … Continue reading →
Categories: Communities

Perfectly Executing the Wrong Plan

TV Agile - Tue, 09/08/2015 - 17:59
App developers ask themselves excellent questions about their users: Do people need my app? Can people use my app? Why do people sign up and then not use my app? However, app developers answer their excellent questions in invalid and unreliable ways. It is shocking to see how much effort app developers put in writing […]
Categories: Blogs

Knowledge Sharing

SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP and Kanban.