
Feed aggregator

Targetprocess v.3.10.8: Custom Request Types

TargetProcess - Edge of Chaos Blog - Mon, 01/09/2017 - 10:46

Custom Request Types

Custom Request types increase the number of scenarios that the Service Desk can be used for. For example, you could add "Project Request" if you're doing Portfolio Management, "IT request" for infrastructure guys, and much more. Alternatively, you can simplify Idea Management by removing all Request types except for "Idea".


Fixed Bugs
  • Requiring a comment for state transfers is now supported in Boards and Lists. If checked, users are requested to input a comment before moving an entity to the selected state.
  • Fixed: Non-admin user could change their team role without 'add/edit team' permissions
  • POP plugin won't create a requester if a Targetprocess User with the same email already exists
  • Fixed email notification duplicates in the case of reply-to comments where the same person is mentioned and addressed
  • REST api/v1: InboundAssignables, OutboundAssignables endpoints with CustomField collection included returned empty value
Categories: Companies

The Story of Tesla, by Elon Musk

Scrum Breakfast - Mon, 01/09/2017 - 10:00
They didn't know it at the time but they created the first Tesla Roadster by taking a working prototype and iterating on the design. By the time the Roadster was announced, they had replaced 96% of the original prototype. "It's amazing what we can do with small teams and tiny budgets." BTW this is part one, you'll want to stay for most of part two. Another video I had to watch to the end!

Categories: Blogs

Playing Whack-A-Mole With Risk

Tyner Blain - Scott Sehlhorst - Mon, 01/09/2017 - 05:44


Assumptions are interesting things – we all make them all the time, and we rarely acknowledge that we’re doing it.  When it comes to developing a product strategy – or even making decisions about how best to create a product, one of these assumptions is likely to be what causes us to fail.  We can, however, reduce the chance of that happening.

Being Wrong

What does it feel like to be wrong?  Watch about 25 seconds of this TED talk from Kathryn Schulz, starting at 4:09.

Go back later and watch her entire talk – it is really worth it.  But stay with me for now.  All you need for this article is the 25 seconds, and the realization that you don’t know you are wrong until you know you’re wrong.

Hidden in Plain Sight

Assumptions are like being wrong, but with an added degree of difficulty. Not only do you not know you’re wrong – you didn’t even realize you were asserting something, and betting on it being right.

Every strategy, every product idea, every design approach, and every planned implementation is built upon a pile of assumptions.  Those assumptions are there, if you just look at them.  But you have to look for them in order to see them.  They are hidden in plain sight.

The only question is whether they are going to cause you any trouble. You might not be wrong about the assumptions that really matter.

Wouldn’t it be nice to know when you are wrong?  Before it’s too late?  Before it’s really expensive?  Before your window of opportunity closes?

Identifying Risky Assumptions

Laura Klein spoke at the Lean Startup Conference about identifying risky assumptions, and her talk was published in Dec 2014.  Laura is also rapidly becoming one of my favorite gurus.  I just wish I’d become aware of her work sooner.

Laura identifies that every product has at least three different classes of assumptions.

  1. Problem Assumptions – we assume there is a market-viable problem worth solving.
  2. Solution Assumptions – we assume our approach to addressing the problem is the right one.
  3. Implementation Assumptions – we assume we can execute to make our solution a reality, such that it solves the problem and we succeed.

Hold onto this thought – I need to segue and dust off a tool I found five years ago, and some work I’ve done with clients over the last couple of years.  We’ll look at how to incorporate some of those ideas with the ones Laura shared.  And eventually, the whack-a-mole reference will make sense.

Hypotheses and Assumptions

With a client last year, I ran a workshop to elicit assumptions on our project.  We were working to develop what Harry Max calls the theory of our product.  Basically, we were working to develop the vision, the value propositions (for a two-sided market problem), the business model that would enable a credible market entry strategy given the company’s current situation, and a viable solution approach.  Essentially, product strategy and product ideation.

My assertion in that workshop was that assumptions and hypotheses, practically speaking, are risks.

assumptions are implicit risks. hypotheses are explicit risks

Product strategy and product design are a formulated plan of action, built upon a set of beliefs – assumptions and hypotheses. The risk is that those beliefs are wrong, and we don’t realize it. Materially, the only difference between an assumption and a hypothesis is that the assumption is something no one has said out loud. It represents an implicit risk. Once you acknowledge the assumption, you can then treat it explicitly – and explicitly decide to do something about it or not.

In the workshop I prompted the participants (senior executives, domain experts, product stakeholders and team members) to identify their assumptions and hypotheses. I started by presenting several hypotheses and assumptions that had been part of conversations prior to the workshop. This helped elicit ideas from the group, but it wasn’t really enough. What did get things moving were some prompts from Harry, such as the suggestion to complete the sentence “It will never work because…” or “The only way it will work is if…” We were able to elicit and then organize (affinity mapping) the inputs into a collection of testable hypotheses.

What To Do With a Pile of Hypotheses?

Now, armed with a list of hypotheses, and limited time and resources to go test them all, we were faced with the challenge of determining which risk to address first. Remember – hypotheses and assumptions are risks. Risks of being wrong (and not knowing it). Risks of product failure.

I’ve historically used potential impact and likelihood of occurrence to manage risks. I first learned to assign a score from 1 to 3 for the likelihood of the risky thing happening, and a score from 1 to 3 for how bad it would be if it did happen. Multiply the two together, and you get a score from 1 to 9 (1, 2, 3, 4, 6, 9). I learned this from PMO-trained people in the late 1990s. Maybe their thinking has evolved since then.
There are two problems with creating a score like this.
  1. Likelihood of occurrence and potential impact are treated as equally important factors.  An unlikely but major impact risk would be “as important” as a likely risk with minimal impact.  Each particular approach to risk management will value these differently.
  2. Combining the two pieces of information into a single number discards useful information.  If I tell you one risk is a “3” and another is a “4”, you cannot know which risk is more important to you.  The “4” is something that reasonably could happen, and would be “bad.”  Would that be more important than understanding an unlikely, but company-ending risk?  Would it be more important than a very likely annoyance – one which may cause death by a thousand cuts for your company if large volumes of support costs absorb profits?
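As a minimal sketch of point 2 (the risk names and numbers here are invented for illustration, not from the workshop):

```python
# The 1-to-3 scoring scheme: multiply likelihood by impact to get a
# single score. Two very different risks can collapse to the same number.
risks = {
    "unlikely-but-company-ending": {"likelihood": 1, "impact": 3},
    "likely-minor-annoyance":      {"likelihood": 3, "impact": 1},
}

scores = {name: r["likelihood"] * r["impact"] for name, r in risks.items()}

# Both risks score 3, so the combined number cannot tell you which one
# deserves attention first - the information was discarded.
```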
That’s why I’ve treated this as a two-dimensional space – visualizing a graph of likelihood vs impact. Laura proposed my now-favorite labels for this graph, relabeling my vertical axis. I’m shamelessly stealing this from Laura. It seemed fitting, as Laura credits part of her presentation to Janice Fraser. Maybe one of the ideas I’m adding to the mix will be stolen by the next person to add to our blog-post conga line.

As a team, you can reach consensus around the relative placement of all of the risks. We then began tracking against our top 10. As Laura would say – you start with the “uppiest and rightiest.” What you are doing is asking the question – what risk is most likely to kill your product, damage your stock price, get your CEO fired, etc.?

There’s another dimension which makes treating risks this way difficult – uncertainty. You don’t actually know that this risky thing is likely to happen. You’re incept-assuming as you make assumptions about your assumptions. The easiest way to think about this is to acknowledge that your impact and likelihood “measurements” are not measurements – they are estimates. They may be calibrated estimates, à la Hubbard’s How to Measure Anything, or they may be guesses based on which way the wind is blowing. Treat them as estimates, and then plot them either as your “most likely” or your “worst case” point of view – that’s a stylistic call, I think.

Removing Risks


The reason you test a hypothesis is to reduce a risk.  I think Laura used the phrase “to de-risk” the risk. To de-risk the risk, the first thing you need to do is remove the uncertainty you have about how bad things could really be.  You need to run an experiment.  In the example above, you would prefer to test hypothesis 7 first if you can – it is the uppiest and rightiest.  You would not be far wrong if you tested 4 or 8 first (assuming it is easier, faster, or cheaper to test one of those).  If you were to first test anything other than 4, 8, or 7, you really should have a good reason.

Once you run your experiment and determine that the risk is not a risk, go back and address the next-most-important risk.  This is a game of whack-a-mole.  You will never run out of testable risks.  You will only eventually reach a point where the economic value of delaying your product to keep testing risks no longer makes sense.

Note that an experiment could result in multiple outcomes and next steps.  Here are a couple:
  • This risk is not as impactful as we thought, we won’t address it with product changes, we will absorb those costs into our profitability model and revisit pricing to assure the business case still holds up.
  • This risk is every bit as likely as we were afraid.  Let’s determine a problem restatement (or solution design approach) where this risk no longer has  the impact or likelihood it did before.  As an example – a risk of users not adopting a product with an inelegant experience may justify rethinking the approach and investing to improve the user experience.
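The test-and-reprioritize loop described above can be sketched as follows. The hypothesis numbers echo the example (4, 7, 8); the scores, the tie-breaking rule, and the economic cutoff are invented for illustration:

```python
# A hypothetical whack-a-mole loop: keep testing the "uppiest and
# rightiest" hypothesis until the remaining risks are too small to
# justify further delay.
hypotheses = [
    {"id": 4, "impact": 3, "likelihood": 2},
    {"id": 7, "impact": 3, "likelihood": 3},
    {"id": 8, "impact": 2, "likelihood": 3},
]

def next_to_test(items):
    # "Uppiest and rightiest": largest combined impact + likelihood,
    # breaking ties in favour of higher impact.
    return max(items, key=lambda h: (h["impact"] + h["likelihood"], h["impact"]))

tested_order = []
while hypotheses:
    h = next_to_test(hypotheses)
    if h["impact"] + h["likelihood"] < 4:  # cutoff: stop testing small risks
        break
    # run_experiment(h) would go here; assume each experiment de-risks it
    tested_order.append(h["id"])
    hypotheses.remove(h)
```

With these numbers, hypothesis 7 is tested first, then 4, then 8 – matching the "anything other than 4, 8, or 7 needs a good reason" ordering above.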

Trying to tackle all the ways you can respond to risks (and de-risked risks) would make this overly long article ridiculously long.

Validation Board


In 2012 I came across the validation board.  At the time, it was free for use by consultants :)  I don’t believe it has gained widespread adoption – at least, people look at me funny when I mention it. Maybe now more people will know about it.

I personally never used it, because something felt not-quite-helpful enough for me, for the problems I was helping my clients to solve.  I could never figure out why, however.  The board has many of the important components.  In hindsight, this is an indicator that the validation board is likely solving a problem I don’t have (as opposed to being a bad solution to a problem I do have). The validation board is structured more for early-startup customer-discovery work – with three categories of hypotheses to track – customer, problem, and solution:
  • How big is the potential market?
  • How valuable is the problem we would solve?
  • Are we able to solve the problem for these people?
The tool was positioned as something to help you pivot as you discover that you have the wrong customers, or problems, or solutions. What I need is to know what hypothesis to test next.  I think that may be best done with a simple graph like the ones Laura and I use – but using her labels.

Whack Some Moles

Instead of debating about implementation details, consider assessing the risks to your product.  Determine if those risks warrant making an investment to reduce them.  Form a measurable hypothesis and validate it. Then go after the next risk.  Until the remaining risks are no longer big enough for you to pursue.
Categories: Blogs

Invitation-based SAFe implementation

Agile Product Owner - Sun, 01/08/2017 - 17:54

Howdy folks:

Yuval Yeret, CTO of AgileSparks, is an SPC and an SPCT candidate.  He is a prolific blogger on the topics of Agile, Lean, Kanban, SAFe and more. Yuval has over 17 years of industry experience and always has an interesting viewpoint and pragmatic advice. AgileSparks is a Scaled Agile gold partner.

Yuval has written a novel guidance article on how an invitation-based approach to implementing SAFe can create a more collaborative organizational change effort. The article describes ways to invite leaders and team members to understand SAFe, while decentralizing the timing and details of the change.  I found the following ideas to be particularly innovative and useful:

  • The implementation workshop
  • Invitation based ART launch
  • Self-selection of teams within an ART

Please let us know what you think about this approach, and feel free to share any similar ideas and techniques that you have successfully implemented, in the comments section below.

Thanks Yuval for sharing this approach with us. Please click here to read the article.

The team at Scaled Agile would also like to wish you a Happy and SAFe New Year!

—Richard and the framework team
SAFe Fellow and Principal Consultant

Categories: Blogs

Clean Disruption

Scrum Breakfast - Sun, 01/08/2017 - 15:13
Why Energy & Transportation will be Obsolete by 2030 by Tony Seba. The horse was displaced by the automobile in just 13 years. Oil, Cars and the Power Grid are about to be transformed in a similar way. What other technologies will be displaced faster than you think, and why?

I don't usually have patience to watch a 45 minute video, but I had to watch this one to the end!
Categories: Blogs

The Simple Leader: Continue to Learn

Evolving Excellence - Sun, 01/08/2017 - 11:16

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen

Learning is not compulsory…neither is survival.
– W. Edwards Deming

Many, if not most people go to school and college, and then, when they are finished, rarely open another book (at least one with big words in it). They may continue to grow their skills and knowledge through experience, but this is the slow boat to improvement.

Over the years, I’ve found that the primary predictor of executive leadership competency is the desire to seek, learn, analyze, distill, and share new knowledge. It doesn’t necessarily have to be within the leader’s current field or competency, nor does it have to be strictly via reading books. There are multiple pathways to new knowledge, including online courses, magazines, and workshops.

Gaining new knowledge can also mean gaining new perspectives. As I discussed earlier, in a world of multiple sources of information, it is very easy to succumb to confirmation bias and only embrace information that fits our existing perspective. In reality, there is almost always some truth in every perspective. Challenge yourself to mindfully look at other perspectives on political, scientific, or social issues in an unbiased manner. You may not change your mind, but you will grow and your positions will be more authentic.

I try to read one fiction and one non-fiction book each month, which is sometimes difficult with my schedule. The non-fiction books, generally business-related, challenge me intellectually. The fiction, often science fiction or action thrillers, challenges my imagination. Each morning, I read The Wall Street Journal on my iPad, forwarding articles I find interesting to friends and family. I purposely try to read articles from different political sources instead of only the ones that agree with my perspectives. I try to continually evaluate my perspectives, think about where bias is setting in, and develop countermeasures to overcome it.

Think about your own pursuit of knowledge. What have you learned recently? What do you want or need to learn this year? How will you do it? What will you do with the new knowledge? How does it fit in with your new self-awareness? How will you encourage and provide opportunities for your team to learn?

Categories: Blogs

Case Study Update: LEGO finds the sweet spot

Agile Product Owner - Fri, 01/06/2017 - 20:30

“ … this has ​improved the motivation​ of the team members. Going to work is more fun when there’s less confusion and less waste. And motivated people do better work, so it’s a positive cycle!

Another impact we’ve seen is that other parts of LEGO visit the meeting, get super inspired, and start exploring how to implement some of these principles and practices in their own department. In fact, agile is spreading like a virus within the company, and the highly visible nature of the PI planning event is like a catalyst.”

About a year ago, the folks from LEGO® shared experiences from the first leg of their SAFe journey. What captured our attention was their innate understanding, right from the start, that every step of the implementation was going to involve discovery and learning and adapting. When something didn’t seem like a good fit, they weren’t afraid to experiment. Taking results from Inspect and Adapt, they tweaked SAFe to their needs with a simple guiding principle, “Keep the stuff that generates energy.”

One year later, Henrik Kniberg and Eik Thyrsted are back with the next chapter of their story. Their 20-team working group, LEGO Digital Solutions, is at the forefront of LEGO’s movement toward adapting to the faster-paced digital world, so the need to get it right is critical as it ultimately impacts the entire 17,000-person organization.

Their nipping and tucking of SAFe for optimal results runs the gamut from large edits to small tweaks. For instance, to keep energy and engagement up, they cut PI Planning from two days to one, and now limit the presentation of the draft plans to four teams doing 7.5 minute presentations. They started doing their program backlog on a physical board with printed cards, but moved that online to their backlog management tool, and projected it on the wall. For reality checks to avoid over-commitment, they use ‘Yesterday’s Weather,’ a feature from Extreme Programming (XP).

While they are sticking with one-day PI Planning, their consensus is that they needed the two-day event in the beginning to help them learn how to do it more effectively. It’s noteworthy that while they reduced the length of PI Planning, they now hold three pre-planning sessions before each boundary to ensure that PI Planning stays effective.

Their determination to make it work has had an impact. They talk about the experience being “surprisingly positive,” and nobody seems to want to go back to how things were before SAFe. This is their latest summary of the outcome:

  • Less duplicated work​. Teams are more in tune with each other, so they waste less time on redundant work.
  • Fewer dependency problems. ​Teams waste less time being blocked waiting for each other. Teams interact more smoothly with other departments and stakeholders.
  • Managers can update priorities and resolve impediments faster​, because they have a better idea of what is actually going on.
  • Client trust has improved​, because they have a better understanding of what the teams are working on and why.
  • Planning is easier and commitments are met more often​, because the teams and portfolio planners learn how much work we can commit to and what our actual capacity is.

We’re glad to see LEGO getting these kinds of results. SAFe is a framework and as such, it is intended to be applied and evolved in context. We don’t care if people modify it, so long as they make it leaner and get the right business results!

Their downloadable 36-page in-depth summary makes for fascinating reading as it’s full of candid commentary and generously describes the thought process behind each decision. It also includes the top four things that helped them get a successful start. Go to the LEGO case study page to get the download. There you will also find the original video from Henrik and Eik discussing the first phase of the implementation.

Thanks, as always, to Henrik (aka ‘Dr. Agile’) and Eik (‘Captain Agile’) for documenting the LEGO journey. It’s a great service to the community and showcases what is possible when people approach new ideas with open minds and a commitment to learn.

Stay SAFe!

Categories: Blogs

TASTE Success with an X-Matrix Template

AvailAgility - Karl Scotland - Fri, 01/06/2017 - 14:24

I’ve put together a new X-Matrix A3 template to go with the Backbriefing and Experiment A3s I published last month. These three templates work well together as part of a Strategy Deployment process, although I should reiterate that the templates alone are not sufficient. A culture of collaboration and learning, as part of Catchball, is also necessary.


While creating the template I decided to change some of the language on it – mainly because I think it better reflects the intent of each section. However, a side benefit is that it nicely creates a new acronym, TASTE, as follows:

  • True North – the orientation which informs what should be done. This is more of a direction and vision than a destination or future state. Decisions should take you towards rather than away from your True North.
  • Aspirations – the results we hope to achieve. These are not targets, but should reflect the size of the ambition and the challenge ahead.
  • Strategies – the guiding policies that enable us. This is the approach to meeting the aspirations by creating enabling constraints.
  • Tactics – the coherent actions we will take. These represent the hypotheses to be tested and the work to be done to implement the strategies in the form of experiments.
  • Evidence – the outcomes that indicate progress. These are the leading indicators which provide quick and frequent feedback on whether the tactics are having an impact on meeting the aspirations.

Hence, working through these sections collaboratively can lead to being able to TASTE success.

Categories: Blogs

DevOps and Legacy Systems

BigVisible Solutions :: An Agile Company - Thu, 01/05/2017 - 19:00

Legacy systems do not have to be the anchor that holds a company back from aggressively competing in today’s high-speed markets. Moving to an automated testing strategy can make the difference in maintaining competitive advantage.

The emphasis of Agile, as demonstrated in the Agile Manifesto, is on a set of values pertaining to how we treat each other as human beings within the context of delivering software which has value to our clients. Respect, integrity, collaboration, and communication are virtues that result in superior outcomes for all concerned. DevOps capitalizes on those values and focuses on continuous delivery of value to production environments. It moves the ball forward from “potentially shippable” to “shipped” software.

In a recent DZone publication entitled DZone’s Guide to Continuous Delivery, I was encouraged to see the importance of quality emphasized as much as speed of delivery. In an article by Andrew Phillips, three points are elaborated:

  • Continuous Delivery is about shipping software better, not just more quickly.
  • As the scope/size of changes in the pipelines get smaller (and pipeline runs more frequent), validating existing behavior becomes at least as important as verifying the new features.
  • We need to go beyond automated functional testing and add stress and performance testing into our pipeline to validate the behavior of an application at scale.

Phillips is clear that, “… pretty much every Continuous Delivery initiative that gets off the ground quickly realizes that automated testing—and, above and beyond that, early automated testing—is essential to building trust and confidence in the delivery pipeline.” Why is that?

There are several reasons why automated testing is essential to continuously delivering value to the customer. The first is that automated tests provide an executable specification of the requirements of the application feature being developed. We know we have written the right code, not just because it compiles, but because it does what the test specifies it will do. This early feedback can be at the unit/module level, the integration level, the end-to-end functional level which is often used as acceptance tests from the user standpoint, as well as the load and performance level.
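As a minimal illustration of a test acting as an executable specification (the `apply_discount` feature and its rules are hypothetical, not from the article):

```python
# A unit test as an executable specification: the assertions state the
# requirement, and passing them shows the code does what the spec says,
# not merely that it compiles.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0   # requirement: 25% off 100 is 75
    assert apply_discount(19.99, 0) == 19.99   # requirement: 0% changes nothing

test_apply_discount()
```

Run early and on every pipeline execution, tests like this validate existing behavior each time the scope of a change shrinks and runs become more frequent.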

If the desired direction for the delivery team is putting new features into production quickly and with high quality, then testing must be automated and a complete set of tests must be created from the very beginning of the development/delivery cycle. What about legacy systems?

With legacy systems, new features can be fully tested at the unit/module level, but there are often challenges with testing legacy code, which is impacted by these new features. Retrofitting legacy systems is addressed in a white paper by Jenny Stuart entitled, “Retrofitting Legacy Systems with Unit Tests”. In it she discusses such topics as:

  • Reasons to retrofit
  • Deciding on an approach to building out unit testing
  • Selecting strategies for incrementally building out coverage
  • Creating the necessary infrastructure
  • Investing in resources
  • Selecting unit-testing tools
  • Integrating unit tests with the build system
  • Training staff in automated unit testing
  • Incrementally improving system testability
  • Incrementally refactoring the system

These are all important factors in improving legacy-system testability and in moving to a Continuous Delivery model. Improving legacy systems through automated testing, as part of a DevOps/Agile initiative, reduces risk and improves the speed of feature delivery.

The post DevOps and Legacy Systems appeared first on SolutionsIQ.

Categories: Companies

Connecting with Humans

Johanna Rothman - Thu, 01/05/2017 - 17:59

I just read Zappos is struggling with Holacracy because humans aren’t designed to operate like software. I’m not surprised. That’s because we are humans who work with other human people. I want to talk with people when I want to talk with them, not when some protocol tells me I must.

It’s the same problem when managers talk about “resources” and “FTEs” (full-time equivalents). I don’t know about you. I work with resourceful humans. I work with people, regardless of how much time they work at work.

If the person I need isn’t there, I have some choices:

  • I can cc the “other” person(s) and create a ton of email
  • I can ask multiple people and run the risk of multiple people doing the same work (and adding to waste)
  • I can do it myself—or try to—and not finish other work I have that’s more important.

There are other options, but those are the options I see most often.

We each have unique skills and capabilities. I am not fond of experts working alone. And, I want to know with whom I can build trust, and who will build trust with me.

We build relationships with humans. (Okay, I do yell at my computer, but that’s a one-sided relationship.) We build relationships because we talk with each other:

  • Just before and just after meetings. This is the “how are the kids? how was the wedding? how was the weekend?” kind of conversation.
  • When we work with each other and explain what we mean.
  • When we extend trust and we provide deliverables to build trust.

When we talk with each other, we build relationships. We build trust. (Some of us prefer to talk with one person at a time, and some of us like to speak with more. But we talk together.) That discussion and trust-building allows us to work together.

This relationship-building is one of the problems of geographically distributed teams not feeling like teams. The feelings might be missing in a collocated team, too. Standups work because they are about micro-commitments to each other. (Not to the work, to each other as humans.)

I’m a Spock kind of person, I admit. I work to build human relationships with colleagues. I work at these relationships because the results are worth it to me. Some of you might start with the people first, and you will build relationships because you like people. I’m okay with that.

Categories: Blogs

Global warming – simplified summary

Henrik Kniberg's blog - Thu, 01/05/2017 - 13:59

OK, here’s a (very) simplified summary of what I’ve learned about global warming after digging deep the past few weeks.

  1. Global warming is a major threat to life as we know it. It’s a LOT worse than most people realize.
  2. Global warming is caused (mostly) by increasing CO2 in the atmosphere.
  3. The CO2 increase comes (mostly) from us burning oil & coal (“fossil fuels”). Adds about 20-30 billion tons of CO2 per year.
  4. So we need to (mostly) stop burning oil & coal.
  5. We burn oil & coal (mostly) for electricity and transport. Coal power plants, car/plane/ship fuel, etc.
  6. We want to keep electricity and transport, but we also want to stop global warming, therefore we need to get electricity and transport without burning oil & coal.
  7. We know how to do that (solar, wind, electric cars, etc). The technology has been figured out, and the prices are at the tipping point where oil & coal can’t compete economically.
  8. So now we just need to hurry up and roll out those solutions! Every single reduced ton of CO2 counts.
  9. Unfortunately shit is going to hit the fan either way (because it’s already launched so to speak), but at least we can slow it down, reduce the impact, and buy us some time.

So pull whatever strings you can to help out – technology, policy, economy, communication, etc. Inform yourselves & each other. People have varying degrees of discretionary time, money, knowledge, voting power, contacts, influence, and motivation. But the more people try to help in one way or another, the more difference it will make as a whole.

More info:


Categories: Blogs

A Qualitative Formula for WSJF?

Improving projects with xProcess - Thu, 01/05/2017 - 10:18
In this series of blogs we have been examining the use of Cost of Delay as a way of understanding how to order work - either from a quantitative approach, using estimates for value, urgency, duration and/or size, or from a qualitative approach, such as the use of Delay Cost Profiles [1], Risk Profiles [2,3] or Value Size matrices [4]. The SAFe definition of WSJF is something of a hybrid, since it uses a formula, but a formula that has only "qualitative" value at best. Here is their definition [5]:

WSJF = (User-Business Value + Time Criticality + Risk Reduction | Opportunity Enablement) / Job Size

The four terms are determined by Planning Poker estimation using a Fibonacci scale (usually 1 to 20), and the work items are ordered according to the formula following an estimation workshop. While this formula cannot provide a true quantitative analysis for ordering items to maximise value, some consultants using it have said that it is a useful technique for the discussion it engenders among stakeholders: once the numbers have been generated and the items ordered, re-ordering to a better order is straightforward because of all the discussion that has preceded that point. Needless to say, I strongly disagree with this. While the discussion is necessary for either a quantitative or a qualitative approach, creating a spurious anchor from numbers which cannot be meaningful will lead to cognitive bias rather than better ordering.

The above formula cannot give meaningful quantities for two reasons:
  1. Dimensionally the formula is inconsistent
  2. The terms are not estimated on a proportional scale
These problems were addressed in Joshua Arnold's proposed modification to the formula [4], which rearranged the terms as follows:

"WSJF" = Time Criticality x (User-Business Value + Risk Reduction | Opportunity Enablement) / Size

Dimensionality is addressed subject to the following assumptions:
  1. Time Criticality, τ, has units of the reciprocal of time (e.g. days⁻¹). In other words, an option expiring in 2 months would have double the τ of one expiring in 4 months.
  2. User-Business Value and Risk Reduction | Opportunity Enablement are measured in consistent units, possibly using a weighting factor to translate the intangible values to the units of the tangible values
  3. Size is proportional to the blocking time caused by implementing the item; in that case it may be used as a proxy for duration measured in units of time. This issue has been addressed earlier in this series (WSJF - Should you divide by Lead Time or Size?), which also identifies additional assumptions required for this to be true.
  4. WSJF itself is used consistently with its intended dimensions, which are value/time²
Proportionality must also be addressed. The scale used in estimating must be proportional (including 0 and not limiting the maximum of the range), not merely ordinal, as it would be if the items with minimum and maximum value were anchored at, say, 1 and 20 respectively. A weighting term is also needed to ensure the tangible and intangible business value terms are appropriately scaled relative to each other; this too is required for proportionality.
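As a tiny numeric sketch (the values are hypothetical, not from the article), anchoring a set of estimates to a 1-to-20 ordinal scale distorts the ratios that a proportional scale would preserve:

```python
# Three items whose true values are 10, 20 and 100 (true ratio 1 : 2 : 10).
true_values = [10, 20, 100]

def ordinal_anchor(values, lo=1.0, hi=20.0):
    """Linearly map min -> lo and max -> hi, as a 1-to-20 anchored scale does."""
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

anchored = ordinal_anchor(true_values)

# The true ratio of the second item to the first is 20/10 = 2.0,
# but the anchored scale reports roughly 3.1 / 1.0, so proportionality is lost
# and any WSJF computed from these numbers inherits the distortion.
print(true_values[1] / true_values[0])   # 2.0
print(anchored[1] / anchored[0])
```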

The modified SAFe formula suggests a more general expression for WSJF using the weighting factors for a set of "business value types", v, and "exchange rates" that convert the values to a common "currency" of value:

WSJF = τ Σ(vₙXₙ) / D

τ (tau) is time criticality, vₙ is the nth business value, Xₙ is the exchange rate for this business value type, and D is duration, for which Size may be a proxy subject to the assumptions discussed above.
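The generalized formula can be computed directly. The sketch below is illustrative only: the item names, value types, exchange rates, and numbers are hypothetical, chosen simply to show the units working out (τ in days⁻¹, D in days, giving WSJF in value/time²):

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    time_criticality: float  # tau, in 1/days (reciprocal of time to expiry)
    business_values: dict    # value-type name -> estimated value v_n
    duration: float          # D, in days (or Size as a proxy, per the assumptions above)

# Hypothetical exchange rates X_n converting each value type to a common "currency"
EXCHANGE_RATES = {"revenue": 1.0, "risk_reduction": 0.5}

def wsjf(item: WorkItem, rates: dict = EXCHANGE_RATES) -> float:
    """WSJF = tau * sum(v_n * X_n) / D  (units: value / time^2)."""
    total_value = sum(v * rates[k] for k, v in item.business_values.items())
    return item.time_criticality * total_value / item.duration

items = [
    WorkItem("A", time_criticality=1/60,
             business_values={"revenue": 10000, "risk_reduction": 4000}, duration=10),
    WorkItem("B", time_criticality=1/120,
             business_values={"revenue": 30000}, duration=20),
]

# Order the backlog highest WSJF first
for item in sorted(items, key=wsjf, reverse=True):
    print(item.name, round(wsjf(item), 1))
```

Note that because τ and D both carry time units, doubling an item's estimated duration halves its WSJF, while halving its time to expiry doubles it, which is the proportional behaviour the formula is meant to capture.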

This blog has not addressed the issue of when quantitative or qualitative approaches should be used. As well as having formulae that are coherent, the work of estimating to provide numbers for the formulae must be worthwhile and comprehensible to the business doing such estimation. In many cases it is not - for example where the domain or context is inherently "non-plannable". The concept of cost of delay is still important there, but we should look for different techniques for ordering work. Further discussion of this must wait for the next article in the series.


[1] David J. Anderson and Andy Carmichael, Essential Kanban Condensed. (United States: Lean Kanban University Press. 2016)
[2] Anderson, David. 2015. ESP: Scaling the benefits of Kanban. Slides 45-49. April 23. (January 5, 2017).
[3] Sawant, Sharvari. 2016. "SwiftKanban help - risk module." (January 5, 2017).
[4] Magennis, Troy. 2016. Better Backlog Prioritization (from random to lifetime cost of delay).
[5] Scaled Agile. 2016. "WSJF – Scaled Agile Framework." wsjf/ (January 5, 2017).

Categories: Companies

Game Play-Throughs During the Sprint

Agile Game Development - Wed, 01/04/2017 - 20:19

Regular team play-throughs of the game can add a lot of value through improved focus on the sprint goal and increased cross-discipline collaboration.
Practice

During the sprint, when the game is in a state where progress can be seen by the team, they hold a play-through of the areas related to the sprint goal. Anyone can take the controls of the game, but usually it's not the Scrum Master. Depending on the state of a feature or mechanic, the developer who is directly working on what is to be played may show it, but it's preferable to have someone less familiar drive the play-through. This reveals areas where the player interface might be confusing. During the play-through, anyone on the team can suggest improvements or identify problems to be fixed.

The duration and frequency of play-throughs can vary. If they are short, they can be done daily; longer ones once or twice a week work too.
Coaching tips
If the team has nothing to show halfway through the sprint, this is a great opportunity to ask them if there is any way to demonstrate progress earlier. Earlier play-throughs create more opportunities to take advantage of emergence and to avoid late-sprint crunch and compromise.

Additionally, you may want to invite a senior designer or art director to listen in. This creates the opportunity for feedback (after the play-through) among the disciplines. Make sure that people outside the team understand that the play-through is not an opportunity to change the sprint goal.

I've always found that play-throughs held just before the daily scrum or at the end of the day are best (for different reasons). Experiment!
Categories: Blogs

Certified Agile Leadership (CAL1) Visual Summary

Agilitrix - Michael Sahota - Wed, 01/04/2017 - 19:54

I am so grateful to Zuzi Šochová for creating this wonderful infographic summarizing what she learned at my Certified Agile Leadership (CAL1) training in California last month. You can see a detailed list of the course contents and learn more about this training on the course description page.

The post Certified Agile Leadership (CAL1) Visual Summary appeared first on Agilitrix - Michael Sahota.

Categories: Blogs

Measuring Agile Team Performance at Spotify

Scrum Expert - Wed, 01/04/2017 - 19:27
How do we actually know if our Agile teams are doing well? Is gut instinct enough? Furthermore, in a rapidly growing organization such as Spotify, how can we ensure some sort of consistency in our baseline level of Agile knowledge across the technology, product, and design organization? This talk discusses techniques we have developed and use at Spotify to benchmark health and performance for our Agile teams and some tactics we use to bring them closer to—and beyond!—being the best teams they can be. The presentation explains frameworks that can be used to give us tangible evidence about how we're doing as teams, as Agile Coaches, and as managers of people and product. Furthermore, this talk tells you about the organization-level methods used at Spotify to share knowledge and maintain alignment of our Agile practices as it scales in order to bring music to people all around the world.
Categories: Communities

#noprojects: If You Start a Project, You've Already Failed

TV Agile - Wed, 01/04/2017 - 19:07
I want to be controversial for a moment and propose an end to IT projects, project management & project managers. I propose that the entire project process is flawed from the start for one simple reason. #noprojects means that if you need to run a project, you’ve already failed. By definition, an IT project is […]
Categories: Blogs

5 Qualities of a Bad ScrumMaster

Leading Agile - Mike Cottmeyer - Wed, 01/04/2017 - 15:02

A ScrumMaster is one of the three key roles of the Scrum Framework. Ken Schwaber and Jeff Sutherland conceived the Scrum process in the early 1990s. With so many years having passed, you'd think organizations would better understand the qualities of a good ScrumMaster. More importantly, they should know the qualities of a bad one.

Because of this, I created a simple infographic focusing on both the good and bad qualities of ScrumMasters. I've noticed that as organizations begin to scale, roles and responsibilities begin to blur, and people may be asked to take on ScrumMaster responsibilities. Do you have the right qualities?

View and download the free infographic:  10 ScrumMaster Qualities

5 qualities of a bad scrummaster

5 Qualities of a Good ScrumMaster

First, a Servant Leader is an empathetic listener and healer; this self-aware steward is committed to the growth of people. Second, a Coach can coach the other team members on how to use Scrum in the most effective manner. Third, the Framework Champion is an expert on how Scrum works and how to apply it. Next, the Problem Solver protects the team from organizational disruptions and internal distractions, or helps remove them. Last, the Facilitator is a neutral participant who helps a group of people understand their common objectives and assists them in achieving those objectives.

5 Qualities of a Bad ScrumMaster

First, the Boss has the ability to hire and fire others. Second, the Taskmaster myopically focuses on assigning and tracking progress against tasks. Third, a Product Manager is responsible for managing the schedule, budget, and scope of the product. Next, the Apathetic ScrumMaster lacks interest in or concern about the emotional, social, or spiritual well-being of others. Last, the Performance Reviewer is responsible for documenting and evaluating job performance.


While you may call yourself a ScrumMaster, be aware that people who understand Scrum are going to have expectations. If you have any of the bad qualities listed above and in the infographic, maybe you should find someone else to do the job.

The post 5 Qualities of a Bad ScrumMaster appeared first on LeadingAgile.

Categories: Blogs

The Simple Leader: Connected Spirit

Evolving Excellence - Wed, 01/04/2017 - 11:27

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen

We are not human beings on a spiritual journey. We are spiritual beings on a human journey.
– Stephen R. Covey

The majority of humans believe in some type of a connection to a greater power, be it truly divine or just universal. Some may believe but simply go through motions drilled into them since birth, never questioning or validating the experience. Some, like myself, affirm the existence of something else—even if we don’t understand what that is.

The scientist in me stares up at the stars, knowing there are countless billions of them, potentially with civilizations vastly older and more developed than ours. Then I contemplate recent advances in fields like quantum mechanics, where entanglement creates instantaneous connections over vast distances, making me wonder if we're starting to see the connection between the physical world and the soul. I see how the evolution of the "internet of things" has already made billions of devices instantly accessible and controllable, and wonder how long it will be before every molecule in our world can be similarly addressed and manipulated.

The curious learner in me has spent years reading and analyzing numerous books on the history of religions, and I am amazed at the remarkable similarities between them. As one religious scholar friend once told me, it’s as if different groups of people were watching the same game from different parts of a stadium—some from the front row, others from high up in the standing-room-only section, still others from behind obstructions where they could only see part of the field. Each group recorded their experience in ways that were then distorted over time.

Episcopalian bishop and theologian John Shelby Spong has written about the impact of perspectives on religious literalism. One example he gives is the many ways ancient peoples described the rise of the sun each morning, from it being a star to being the powerful god Ra. Culture, religion, and knowledge shaped how different groups understood the same event. Other theologians, such as Catholic priest Thomas Merton, have found how seemingly disparate religions, such as Buddhism and Christianity, can be very complementary.

Like many people, I have felt an unequivocal, undeniable force at many times in my life. When dealing with exceptional stress, loss, or difficult decisions, it was there. It’s no longer faith for me—it’s real. I feel it while walking in nature, or even at this very moment, while looking out over the Caribbean while on vacation.

Each person’s experience is unique. But take time, perhaps while surrounded by the beauty of nature, to contemplate your spiritual existence. Being able to draw strength from that will bring peace. Peace will help calm your mind, enabling you to understand who you are.

Categories: Blogs

Using Lean Thinking to Develop a Testing Mindset

What is Lean Thinking?
“Lean thinking defines value as providing benefit to the customer; anything else is waste.”...

The post Using Lean Thinking to Develop a Testing Mindset appeared first on Blog | LeanKit.

Categories: Companies
