
Feed aggregator

How Scrum Created the Greatest Team in the World

Rally Agile Blog - Thu, 11/19/2015 - 18:58

The Scrum approach to delivery has produced the greatest team in the world. And the elements behind the team’s success are repeatable, meaning your team could be next in becoming the greatest team in the world. (That sure has a nice ring to it.)


The team I’m talking about is the All Blacks, which retained its Rugby World Cup crown recently—the first team to do so since the tournament started almost 30 years ago. In case you’re not familiar with the All Blacks: in the past four years the team has lost only three of its 54 matches, and the last time it lost consecutive games was in 2011. In fact, the All Blacks haven’t lost on home turf since 1994, before some of the current players were born. Yet their captain is from tiny Kurow, New Zealand (population 339), the coach is a former policeman, and none of the players has the letters MBA after their names or got on the team through family connections. The team also comes from a country with a much smaller population and lower GDP than its next few rivals, so they’re not the biggest or the richest.

How does a country with more sheep than people produce a team recognized as the world’s greatest, and what does this have to do with Scrum?

Indeed, why are we referencing a sports team when most of us work in teams in urban offices where we chase paper and deadlines rather than an oval-shaped ball? Because high-performing teams, no matter what their domain, share many commonalities. And it’s a happy coincidence that the All Blacks practice the sport that gave the Scrum approach to product delivery its name.  (Very briefly: for those who have just joined us, two Japanese researchers of high-performing teams in 1980s workplaces came up with the term “Scrum” as a good fit for the characteristics associated with high performance, including speed, flexibility and handling uncertainty.)  

Let’s examine a few of the characteristics of the greatest team in the world to better understand how our own teams can become ultra high-performing. Given that rugby in New Zealand/Aotearoa is our metaphor here for high performance, I’ll be introducing a few new words into your vocabulary.

Principle 1:  Be Your Role

People on high-performing teams know their roles inside-out. Knowing your role doesn’t just relate to the bullet points in your job description: It means intrinsically knowing how your skills influence your teammates and how you collectively use these skills to succeed.


For All Blacks captain Richie McCaw, who has an impressive win rate of around 90 percent from his 140 caps, this meant sitting down with his uncle when he was a teenager and identifying on a scrap of paper what he needed to do to become a “G.A.B.”—a Great All Black—then taping it to his bathroom mirror so he could see it every day. In other words, visual management. Sound familiar?  

On your team, what’s the difference between a developer and a great developer, or a product owner and a great product owner? What about for your own role? These probably aren't characteristics that were bulleted in your job description.

Knowing your role means sticking to the basics, even when things are difficult, and trusting your team to do the same. For example, if you’re leading 8–7 in a tense final against your bogey team (France), you should focus on solidly executing the fundamentals, again and again. Nothing fancy, just K.I.S.S. This is where your 10,000 hours of purposeful practice pays off. Several past and present All Blacks sprint fast enough to compete in the early rounds of the Olympic 100 meters. They’ve completed years of purposeful practice in order to deliver effectively under pressure. Keep executing the drills you have practiced in training. Don’t skip your planning sessions or retrospectives because “you’re too busy” or “there’s a deadline.”

Once you know your role, the next step is to elevate it, redefine it and be your role. For example, number 10 for the All Blacks, Dan Carter, has arguably redefined the role of first five-eighth (relax, that’s as technical as we’ll get with rugby). In Japanese martial arts, this aligns to the concept of shu-ha-ri. While many of his contemporaries are content to score points through kicking, Carter is sufficiently cross-functional to play well outside his role and carry the ball across the line, rather than pass it to another player. Given that several players in the All Blacks are similarly cross-functional, that makes for a powerful advantage over mono-functional teams.

Principle 2:  Be Cross-functional

Fundamental to Scrum is the cross-functional team, which means there are multiple paths for work to reach the “done” state. This complements, rather than contradicts, the principle of knowing your role. On a rugby team there are 15 players, each with a specific role. However, a player’s role is superseded by the team’s overall goal of delivering the ball across the try line (or goal posts). This means that a player like Dan Carter, whose primary role is to score points through kicking, has also scored 29 tries (similar to a touchdown) in international rugby because he is T-shaped. Unlike many of his rivals, his jersey is just as dirty at the end of the game as the rest of the team.


When we talk about “T-shaped people” in agile it means people whose deep technical knowledge is complemented by a breadth of skills. This counters situations where a person becomes a single point of failure or a bottleneck, as is so well demonstrated in the agile cult classic book, The Phoenix Project. Teams in the workplace can incrementally increase their cross-functionality over time by creating a simple skills matrix and allocating a few hours per week to upskilling. After three to six months, the team will be quantifiably stronger and have reduced its dependence on a single individual.

As a team member your versatility makes you more valuable to the organization, and that’s something to bear in mind for challenging times.

Cross-functionality goes hand in hand with diversity and high performance. The All Blacks are arguably the most diverse team at the Rugby World Cup, and it’s no accident that in recent decades, as the team has embraced more diversity, it has won more games. Players come from several different cultural, religious and linguistic backgrounds. Harnessed to a strong vision (discussed in the next section), this diversity creates open-mindedness around trying new ideas and synergies that enable team members to be authentic and bring their whole selves to the team.

Research confirms that team diversity leads to increased creativity, better decisions and harder-working teams.

Your team may be more diverse than you realize. Many workplaces lose their human touch and it’s easy to get caught up in “doing” our work at the expense of building high-performing teams and getting to know the person behind the job title. A half-day investment in effective team-building activities a couple of times per year (which needn’t involve blindfolds, high wires or singing Kumbaya) will pay dividends, create “ba” and directly improve team performance.

Principle 3:  Be United Around a Vision

“We are the most dominant team in the history of the world.” Pretty audacious, huh? Perhaps not. The All Blacks created this vision before other teams started calling them “the world’s greatest team.” However audacious it is, it would be much harder to become the world’s most dominant team with a watered-down vision that exudes mediocrity, or with no vision at all. There’s no doubt about what this team wants to achieve.

So, back to our teams in the workplace: what’s your team’s vision?


We’ve already talked about being your role and being cross-functional. Perhaps it’s no surprise how well the All Blacks embody teamwork and uniting around a collective vision, given the strong influence of Maori and Polynesian cultures on the team.

As the cliche goes, “there is no 'I' in team.”

Just as Japanese culture embodies Lean principles, Maori and Polynesian cultures elevate the importance of the team (or family, collective or other group of people) over the importance of the individual. Western cultures, by contrast, often elevate the individual over the team (e.g. individual performance targets) which can lead to local optimization at the expense of the overall system’s performance. Focusing on team performance as the unit of success has a dramatic impact on how the team plays.  

Scrum and SAFe® require a sprint goal and product vision, respectively. Build on that in your workplace and establish a vision for your team. What are its values, goals and outcomes? This fosters a team that is united, focused and aligned, rather than a group of people who “happen” to work together.

Next Steps

Several All Blacks are expected to retire in the coming months. The good news is that there’s now an opportunity for another team to be the world’s greatest. Maybe it will be a team in your organization. How can you elevate your team’s performance so it exhibits the same traits as the world’s greatest team?

  1. Help your team move from doing their role to being their role
  2. Make cross-functionality and T-shaped team members the norm through purposeful practice
  3. Create a compelling vision with your team that encourages high performance

There’s a Maori phrase that concisely sums up these themes:

Ko ahau te kapa, ko te kapa ahau – “I am the team and the team is me.”

Suzanne Nottage
Categories: Companies

Analysis of Visual Studio Solutions with the SonarQube Scanner for MSBuild

Sonar - Thu, 11/19/2015 - 17:19

At the end of April 2015, during the Build Conference, Microsoft and SonarSource announced SonarQube integration with MSBuild and Team Build. Today, half a year later, we’re releasing the SonarQube Scanner for MSBuild 1.0.2. But what exactly is the SonarQube Scanner for MSBuild? Let’s find out!

The SonarQube Scanner for MSBuild is the tool of choice to perform SonarQube analysis of any Visual Studio solution and MSBuild project. From the command line, a project is analyzed in 3 simple steps:

  1. MSBuild.SonarQube.Runner.exe begin /key:project_key /name:project_name /version:project_version

  2. msbuild /t:rebuild

  3. MSBuild.SonarQube.Runner.exe end

The “begin” invocation sets up the SonarQube analysis. Mandatory analysis settings such as the SonarQube project key, name and version must be passed in, as well as any optional settings, such as paths to code coverage reports. During this phase, the scanner fetches the quality profile and settings to be used from the SonarQube server.
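For example, a code coverage report path might be passed as an additional setting with the /d: switch (the property name and file path below are illustrative):

MSBuild.SonarQube.Runner.exe begin /key:project_key /name:project_name /version:project_version /d:sonar.cs.vscoveragexml.reportsPaths=VSCoverage.coveragexml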

Then, you build your project as you would typically do. As the build happens, the SonarQube Scanner for MSBuild gathers the exact set of projects and source files being compiled and analyzes them.

Finally, during the “end” invocation, remaining analysis data, such as Git or TFVC information, is gathered, and the overall results are sent to the SonarQube server.

Using the SonarQube Scanner for MSBuild from Team Foundation Server and Visual Studio Online is even easier: there is no need to install the scanner on build agents, and native build steps corresponding to the “begin” and “end” invocations are available out-of-the-box (see the complete Microsoft ALM Rangers documentation for details).

A similar experience is offered for Jenkins users as well, since version 2.3 of the Jenkins SonarQube plugin.

Compared to analyzing Visual Studio solutions with the sonar-runner and the Visual Studio Bootstrapper plugin, this new SonarQube Scanner for MSBuild offers many advantages:

  1. Having a Visual Studio solution (*.sln) file is no longer a requirement, and customized *.csproj files are now supported! The analysis data is now extracted from MSBuild itself, instead of being retrieved by manually parsing *.sln and *.csproj files. If MSBuild understands it, the SonarQube Scanner for MSBuild will understand it!

  2. For .NET, analyzers can now run as part of the build with Roslyn, which not only speeds up the analysis but also yields better results; instead of analyzing files one by one in isolation, the MSBuild integration enables analyzers to understand the file dependencies. This translates into fewer false positives and more real issues.

  3. Enabling FxCop is now as simple as enabling its rules in the quality profile. There is no longer any need to manually set properties such as “sonar.visualstudio.outputPaths” or “sonar.cs.fxcop.assembly” for every project: All the settings are now deduced by MSBuild.

As a consequence, we are deprecating the use of sonar-runner and the Visual Studio Bootstrapper plugin to analyze Visual Studio solutions, and advise all users to migrate to the SonarQube Scanner for MSBuild instead. Before you begin your migration, here are a few things you need to be aware of:

  1. The analysis must be executed from a Windows machine, with the .NET Framework version 4.5.2+ installed, and the project must be built using MSBuild 12 or 14. Note that the project you analyze can itself target older versions of the .NET Framework, but the SonarQube Scanner for MSBuild itself requires at least version 4.5.2 to run.

  2. Obviously, you now need to be able to build the project you want to analyze!

  3. Most old analysis properties (such as “sonar.cs.fxcop.assembly” and “sonar.dotnet.version”) are no longer used and should be removed. The only useful ones are the unit test result and code coverage report paths.

  4. The “sonar-project.properties” file is no longer used and should be deleted.

Try it out for yourself and get started! Download the SonarQube Scanner for MSBuild, install it, and start to analyze your projects! If you are new to SonarQube, the end-to-end guide produced by the Microsoft ALM Rangers will take you through every step.

Categories: Open Source

How Long Are Your Iterations? Part 1

Johanna Rothman - Thu, 11/19/2015 - 17:08

I spoke with a Scrum Master the other day. He was concerned that the team didn’t finish their work in one 2-week iteration. He was thinking of making the iterations three weeks.

I asked what happened in each iteration. Who wrote the stories and when, when did the developers finish what, and when did the testers finish what? Who (automated tests, testers or customers) reported defects post-iteration?

He is the Scrum Master for three teams, each of whom has a different problem. (The fact that he SMs for more than one team is a problem I’ll address later.)

Team 1 has 6 developers and 2 testers. The Product Owner is remote. The PO generates stories for the team in advance of the iteration. The PO explains the stories in the Sprint Planning meeting. They schedule the planning meeting for 2 hours, and they almost always need 3 hours.

The developers and testers work in a staggered iteration. Because the developers finish their work in the first two-week iteration, they call their iterations two weeks. Even though the testers start their testing in that iteration, the testers don’t finish.

I explained that this iteration duration was at least three weeks. I asked if the testers ever got behind in their testing.

“Oh, yes,” he replied. “They almost always get behind. These days, it takes them almost two weeks to catch up to the developers.”

I explained that the duration that includes development and testing is the duration that counts. Not the first two weeks, but the total time it takes from the start of development to the end of testing.

“Oooh.” He hadn’t realized that.

He also had not realized that they are taking on too much work (here, work in progress, WIP). The fact that they need more time to discuss stories in their planning meeting? A lot of WIP. The fact that the developers finish first? That creates WIP for the testers.

Sequential work makes your iterations longer. What would it take for you to work as a team on stories and reduce the lag time between the time the development is done and the testing is done?

The next post will be about when you have a longer duration based on interdependencies.

Categories: Blogs

The Sunk Cost Fallacy Fallacy

Xebia Blog - Thu, 11/19/2015 - 16:41

Imagine two football fans planning to attend a match 60 miles away. One of them paid for a ticket in advance; the other was just about to buy a ticket when he got one from a friend for free. The night of the game, a blizzard hits. Which fan do you think is more likely to drive through a blizzard to see the game?

You probably (correctly) guessed that the fan who paid for his ticket is more likely to drive through the blizzard. What you may not have realized, though, is that this is an irrational decision, at least economically speaking.

The football fan story is a classic example of the Sunk Cost Fallacy, adapted from Richard Thaler’s “Toward a Positive Theory of Consumer Choice” (1980) in Daniel Kahneman’s excellent book, “Thinking, Fast and Slow” (2011). Many thanks to my colleagues Joshua Appelman, Viktor Clerc and Bulat Yaminov for the recommendations.

The Sunk Cost Fallacy

The Sunk Cost Fallacy is a faulty pattern of behavior in which past investments cloud our judgment on how to move forward. When past investments are irrecoverable (we call them ‘sunk’ costs), they should have no effect on our choices for the future. In practice, however, we find it difficult to cut our losses — even when it’s the rational thing to do.

We see the Sunk Cost Fallacy in action every day when evaluating technical and business decisions. For instance, you may recognize a tendency to become attached to an “elegant” abstraction or invariant, even when evidence is mounting that it does more harm than good to the overall complexity. Perhaps you’ve seen a Product Owner who remains too attached to a particular feature, even after its proven failure to achieve the desired effect. Or the team that sticks to an in-house graphing library even after better ones become available for free, because they are too emotional about throwing out their own code.

This is the Sunk Cost Fallacy in action. It's healthy to take a step back and see if it's time to cut your losses.

Abuse of the Sunk Cost Fallacy

However, the Sunk Cost Fallacy can be abused when it's used as an excuse to freely backtrack on choices with little regard for past costs. I call this the Sunk Cost Fallacy Fallacy.

Should you move from framework A to framework B? If B will help you be more effective in the future, even when you've invested in A, the Sunk Cost Fallacy says you should move to B. However, don't forget to factor in the 'cost of switching': the past investments in framework A may be sunk costs, but switching could introduce a technical debt of code that needs to now be ported. Make sure to compare the expected gain against this cost, and make a rational decision.

You might feel bad about having picked framework A in the first place. The Sunk Cost Fallacy teaches you not to let this emotion cloud your judgment while evaluating framework B. However, it is still a useful emotion that can trigger valuable questions: Could you have seen this coming? Is there something you could have done in the past to make it cheaper to move from framework A to framework B now? Can you learn from this experience and make a better initial choice next time?


An awareness of the Sunk Cost Fallacy can help you make better decisions: cut your losses when it is the rational thing to do. Be careful not to use the Sunk Cost Fallacy as an excuse, and take into account the cost of switching. Most importantly, look for opportunities to learn from your mistakes.

Categories: Companies

Impressions from Gartner Symposium/ITxpo

TargetProcess - Edge of Chaos Blog - Thu, 11/19/2015 - 15:55

Last week I attended the Gartner IT Symposium in Barcelona. My interest was to check what is on the agenda of CIOs these days, as well as to look more closely at what Gartner is saying about the Project Portfolio Management market. Over 2000 CIOs from Europe flew to Barcelona this year, around 45% from companies with 20k+ employees, the majority from the Financial or Government sectors, the UK and Scandinavia.


Here I’ll share some findings, mostly related to the topic of Project Portfolio Management, as this was my focus there.

Where CIOs spend IT budgets

Among all CIO’s priorities, the top list this year looks like this:

  1. BI
  2. Infra/Datacenter
  3. Cloud
  4. ERP

Overall, the digitalisation of businesses is happening rapidly; 60% of all services are expected to digitalise in the next 2-3 years.

One buzzword which I heard almost every half an hour was “bi-modal”.


Simply put, bi-modal means that there are two modes of operation: mode 1 – stable, where priorities are on productivity, safety and optimised cost, and mode 2 – experimental, where the focus is on innovation and agility. The message here is: don’t just operate your IT, innovate and be agile too. Gartner analysts mentioned this buzzword in almost every session I attended. The threatening message to CIOs was that if they don’t innovate, disruptive newcomers, possibly from outside their known market segment, may leave them without jobs.


Agile was mentioned quite often; Gartner stated that 76% of companies do Agile these days, and this certainly sounded like the way to go. Other practices mentioned from Mode 2 were Multi-Disciplinary Teams, Crowdsourcing, Different Metrics (operation, innovation, guardian), and working with Startups.

Agility in the application context was also a big topic. The key recommendation was: reduce your application complexity (Application Agility = 1/Complexity). IT organisations should:

  • benchmark the current complexity of applications (there are tools that evaluate code complexity, etc.)
  • set goals for reducing the complexity
  • measure and refactor or replace complex applications

The agile methodology should be used to deliver value faster and should not be regarded just as an effective way to expedite the development processes. Here is a convincing slide about the direct results from agile delivery focused on speed only:


Talent Management

This fancy term stands for HR, although I certainly like that we talk about talent and not resources. Apart from the necessity of becoming good leaders and creating a proper company culture, getting it right with Talent Management is critical because people are the main asset of any company.

So here Gartner was telling CIOs to “let people work on what they want” and “help them develop their skills”.

This reminded me of an interesting case I heard recently from Jens Korte, an agile coach I had a chance to work with lately. Jens is well versed in both Talent Management and Project Portfolio Management. He consulted a customer who wanted to have a clear and visual way to manage talent in their company. The line of thought is the following:

what jobs do we want to get done? > what skills do we need for that? > which skills do we have? > where is the delta?

In the photo below you see a board where the horizontal lanes are people and the vertical lanes are different skills required for the best results of work. A red post-it means this person has insufficient skills for a particular job, and green means s(h)e is great at it. A map like this helps to visualise business capabilities which are currently underdeveloped in the team (e.g. in the 4th column from the left, if Felix is sick nobody can replace him for this kind of work).

That’s the physical version:


…or the digital version in Targetprocess (of course:)


As a side note, I believe that helping modern companies to do Talent Management/Development is a promising area for a visual management tool like Targetprocess and we shall definitely be looking more into this soon.

PPM: Link between Strategy and Execution

I attended a session by one of the PPM analysts at Gartner – Lars Mieritz – on the subject of Business Outcomes in a Project Portfolio.

The message was rather simple: there is a gap between the goals of a company’s senior management and the execution of the projects. And we need to close this gap by making business benefits more meaningful in the pre-project stage through a set of business outcome performance indicators and metrics that tie back to the business case and its stated benefits.

Example: the CEO and the Board of Directors have a business initiative like “increase client satisfaction with our customer service in 2016”. This generates a portfolio of projects, e.g. trainings for customer service teams, replacement of the old CRM with a more modern one, etc. Problem: when the PMO plans this portfolio, it is hard for them to understand which projects are more important than others (they cannot do all of them due to limited resources) and how to see whether the projects were successful. As a result, they would often just evaluate the projects by their Cost/Benefit ratio, and those projects with the highest ratio are considered to be successful. But this is not always true.

Solution: the CEO/Executive Board define Critical Success Factors (CSF) and metrics to measure those business outcomes. E.g. the CSF = a 30% increase in the satisfaction rate with Customer Service, and the metric = a customer survey at the beginning and the end of the year to benchmark the current and new satisfaction rates. Practically, this happens by providing a Business Case document for each objective, which has a bunch of parameters. The Business Case document needs to be filled in by the business executives, who should indicate expected outcomes and the metrics for measuring the project success.

PMO would then be able to clearly define their project portfolio based on these expected outcomes and related metrics. To summarize it, the correct sequence should be as follows:

  1. Business objectives are defined with Critical Success Factors (CSF)
  2. Business benefits are formulated
  3. Required business changes are defined along with the metrics (they should be directly related to the original business objective vs. being purely technical such as e.g. server availability %)
  4. The defined business changes are enabled
  5. Project portfolio is managed by the CSF and metrics


Another topical problem for many companies nowadays seems to be the necessity to enable fast feedback in both directions:

BUSINESS > IT (e.g. “guys, don’t do this project anymore, the priorities changed last month”)

IT>BUSINESS (e.g. “this project is stuck or this has been already delivered, try this out”)

Gartner recommends to have an agile feedback mechanism between Business and IT, which means to exchange information faster and more often.

(Side thought: Targetprocess customers could solve these problems by

  a) letting company’s executives create initiatives and specify their expected outcomes and metrics as part of the initiative’s attributes, and
  b) letting the PMO/IT Dept quickly feed back the status of projects and stay updated on the changes in strategy.

Our product specialists can help you set up Targetprocess that way.)

PPM Hype Cycle

Another PPM presentation I attended was called “PPM Hype Cycle” by Teresa Jones. A hype cycle can easily be described by analogy with a couple’s relationship: in the beginning there is a lot of excitement, then a lot of disappointment, and later (if the couple did not get divorced in the greyest days) the line goes up again – this is then called a “mature relationship” :)

It looks like this:


So now here’s what Gartner says about the hype cycle of the Project Portfolio Management market. I will go through different stages of its development; I marked the most important ones with numbers on the graph above. I didn’t catch all items, so I will mostly highlight here those that I think are especially interesting:

1)… (not sure what it was)

2) Adaptive Enterprise

3) Business Transformation office

4) Hybrid of Cloud and On-Premise application management skill set

5) Project Collaboration space (=online tool for project management)

6) Adaptive Program Management

7) Collaborative Work Management (knowledge work, not just projects)

8) Integrated IT Portfolio Analysis (this means an alignment of applications, projects and services in the same portfolio)

9) NPD = New Product Development Portfolio (here the Gartner analyst pointed out that this space is new and there are no good tools covering this yet; it is still a niche to be filled and new tools can be expected in this space soon).

10) Agile project management

11) Kanban for programs

12) Application Portfolio Management (the challenge here is to monitor the status of applications in real-time)

13) Cloud-based PPM tools (here Gartner said that the new cloud-based tools are good at Project Management and not yet good enough at Portfolio management)

14) PPM Certification

15) Reporting Enterprise PMO

16) Resource Management. Gartner’s point here was that the current implementation of Resource Management in PPM tools is too detailed and too complex, and needs to be done differently: high-level capacity management is needed rather than overly detailed person-level management.

17) IT PMO

18) Earned Value management

19) IT PPM applications. They said here that old PPM tools are hard to update and a new generation of PPM tools should come around

20) PPM for Professional Services

21) Idea Management (they thought that this area may soon move into another market segment).

An interesting thing is that Integrated PPM, NPD, Agile and Kanban are on top of this hype wave, and these are the areas that agile tools like Targetprocess are most familiar with and powerful at.

I had a chat with Teresa after her session and introduced Targetprocess quickly (she didn’t know of it, of course). We didn’t go into many details, but to my question about Capacity Management she said that current PPM vendors may be doing too much there, and 80% of this functionality should be more than enough. She also said that planning and managing people resources too granularly and too far in advance could be a big mistake, and her suggestion is that there should be a 2-step process:

Step 1: High-level capacity planning. We ask ourselves “Can we do this project next summer?” And if the answer is most probably YES, we put it on the roadmap

Step 2: When it comes closer to the implementation of the project, we plan it in more detail (which skills do we need, do we have people with the required skillset available? Days off and public holidays?)

Capacity Management is a topic we are working on at full speed right now, and it would indeed be interesting to collect more feedback from any of you reading this post and involved with Project Portfolio planning and capacity management in your companies. How long in advance is capacity and resource planning relevant for your business? Do you plan on the personal level or a less granular (team? squad?) level? Please feel free to post in the comments here or contact us directly by mail info(@) if you want to share your ideas.

Categories: Companies

Targetprocess v.3.7.12: minor bug fixing

TargetProcess - Edge of Chaos Blog - Thu, 11/19/2015 - 15:03
Fixed Bugs
  • Fixed: Customize cards: “Progress” unit shows 0% for completed work if it’s estimated at 0 points/hours
  • Fixed: Failed drag-n-drop of a card with custom fields if drop-down custom fields are lanes
  • Fixed: Team iteration split error “[UserStory.Effort]: Effort should be equal or greater than zero.”
  • Fixed: Custom Rule ‘Assign Task to a person who started it’
  • Fixed: Loss of custom field values on a card moved within a multi-select custom field lane
  • Fixed: Avatars of Targetprocess users are not shown in the ‘Owner’ field list on a Request view
  • Improved performance when expanding hierarchical lists


Categories: Companies

ACE! Conference 2016 Call for Speakers

Scrum Expert - Thu, 11/19/2015 - 10:57
The ACE! (Agile Central Europe) conference is the largest event about Agile project management and software development in Central Europe, attracting people from all over the region. It will take place in Krakow, Poland on 14-15 April 2016. The ACE! 2016 conference combines lean/agile and LeanUX/Lean Startup topics in two tracks of 45 minute talks and adds a workshop track for workshops of 1-2 hours in ...
Categories: Communities

Performance Improvements, New Features, and a Peek at the Road Ahead

Pivotal Tracker Blog - Wed, 11/18/2015 - 20:05

If you’re among the folks who have been wondering whether we’ve been trapped under a rock or lost the source code to Tracker, fear not! We are alive and well, and our coding fingers are in fine shape. Many of our efforts over the last few months have been focused on making Tracker rock-solid stable and secure, as well as iterating on brand-new features to improve the overall experience. But it’s one thing for us to say “We’ve been busy!” and hope you take our word for it; we thought it would be more useful to outline what we’ve been working on.

One major change related to the improved stability and performance theme is that Tracker now uses push instead of polling for client synchronization. This means that project changes and notifications should now appear immediately.

We’re also gradually shifting more focus to core usability improvements and new features. One recent example is the ability to attach images to story comments by pasting them from the clipboard. Due to browser limitations, it only works in Chrome, but give it a try! Copy an image to the clipboard (with CMD-CTRL-Shift-4 on a Mac), then paste it to a new story comment.

Also for Chrome users, the one-click copy-to-clipboard feature (for story URL and IDs) no longer depends on Flash.

When copying story or epic IDs to the clipboard, note that we now prefix the ID with a # (or ## for epics). This makes it easier to paste the IDs into story descriptions or comments, and have them turn into automatic story/epic links. You can then mouse over those to see details about the linked item.

We’ve been hard at work refining our new reporting and analytics features. Thanks to all those who provided feedback on these features so far. We’re now just around the corner from releasing these to everyone, so stay tuned!


Our mobile app teams have also been hard at work. We just released a fresh update of our Android app, and a new version of the iOS app is coming up soon.

And if you’re wondering, yes—one aspect of all these new features is that we’re expanding. Check out our Jobs page if you’re interested in getting in on the action.

We’re looking forward to a lot of excitement in 2016. As always, we welcome your input throughout the process. Please send your comments and feedback to

The post Performance Improvements, New Features, and a Peek at the Road Ahead appeared first on Pivotal Tracker.

Categories: Companies

When Do You Need a DevOps Team?

Leading Agile - Mike Cottmeyer - Wed, 11/18/2015 - 18:41


With all the focus on continuous delivery and test automation, the inevitable question arises: “Do I need a DevOps team?” Just as with other Domain-type teams that support software delivery teams, looking at the general question, “Should I form a Domain team?”, will help answer our DevOps question. So let’s back up.

What is a Domain Team?

A domain team is a cross-functional software delivery team with a special skill set. This skill set is needed by some or all of the delivery teams in an organization. An example of a special skill set is ETL Developers. While the database structure and queries can be developed within a delivery team, the data loading and associated file maintenance is a small portion of the entire functionality. To bring in an ETL specialist for one or two sprints and then send them off again defeats the purpose of keeping the team together.

Collecting the ETL new development work with the ETL maintenance work into a single backlog presents the opportunity to staff an ETL team that can support multiple delivery teams.  Now here is the tough question in Agile Software delivery terms. Is this an Agile team? They have a defined backlog of work. They are cross-functional if we include a Business Analyst to help with story analysis and Quality Analyst to help with testing. What they may not have is a product or project and probably not a release as thought of in the Java world. The kicker is whether or not they would have a product owner. In most cases they will not have a product owner or a product team, but will need to gather stories from other teams.

Justifying the Need for a Domain Team

The answer to the question “How do I know if I need a domain team?” is to collect the numbers. Is the capacity of the delivery team impacted by the need to “occasionally” perform with specialty skills? Do we have a quality issue with the work completed? Are there other teams that need this skill set? Will there be a maintenance load that will need this skill set?

Answering these questions and collecting data will not only help to define the size of the team needed but will provide numbers for the justification of the team. We all underestimate the cost in productivity of context switching. Having members of the team stop functional delivery for operational maintenance or to use a special tool can drag down the entire team. Use velocity variance, escaped defects and hangover points to build the case for having a specialty team. Collect the same data from other teams to strengthen your case.

How Do I Know if I Need a DevOps Team?

First define the work of the team. The team could be responsible for only building and deploying software in different environments. It could be responsible for building metrics, analyzing regression suite failures, and running security and performance tests.

Once you draw the line on the responsibilities, collect your data. If your team, in combination with other teams in the organization, is spending more than one person’s effort on these activities every sprint, make the business case for creating the team.

Create the policies for this team to accept work. These policies and their work-flow must be explicit. Without clear guidance, the other delivery teams may see your new DevOps team as an opportunity to offload every task even remotely related to DevOps. Defining how the software will be deployed and writing the deployment scripts still lies with the delivery teams. Making sure the deployments are fast and accurate and report results is the responsibility of the DevOps team.

Finally, expect some hiccups along the way. Whenever new teams and new processes are introduced there will be the need for education as well as trial and error. Make sure you have strategies that allow for some mistakes. Whether it is training, communication plans, user guides or on-boarding practices, everyone needs to know what the new team does and how to work with them.

You need a DevOps team if you can justify the need with data, define the responsibilities, and define a strategy for communicating who is in the team and what the new processes are.

The post When Do You Need a DevOps Team? appeared first on LeadingAgile.

Categories: Blogs

Teach the World Your Experience in a Mobile-First, Cloud-First World

J.D. Meier's Blog - Wed, 11/18/2015 - 18:10

“Be steady and well-ordered in your life so that you can be fierce and original in your work.”  -- Gustave Flaubert

An important aspect of personal effectiveness and career development is learning business skills for a technology-centric world.

I know a lot of developers figuring out how to share their expertise in a mobile-first, cloud-first world.  Some are creating software services, some are selling online courses, some are selling books, and some are building digital products.    It’s how they are sharing and scaling their expertise with the world, while doing what they love. 

In each case, the underlying pattern is the same:

"Write once, share many." 

It’s how you scale.  It’s how you amplify your impact.  It’s a simple way to combine passion + purpose + profit.

With our mobile-first, cloud-first world, and so much technology at your fingertips to help with automation, it’s time to learn better business skills and how to stay relevant in an ever-changing market.

But the challenge is, how do you actually start?

On the consumer side ...
In a mobile-first, cloud-first world, users want the ability to consume information anywhere, anytime, from any device.

On the producer side ...
Producers want the ability to easily create digital products that they can share with the world -- and automate the process as much as possible. 

I’ve researched and tested a lot of ways to share your experience in a way that works in a mobile-first, cloud-first world. I’ve gone through a lot of people, programs, processes, and tools. Ultimately, the proven practice for building high-end digital products is building courses. And teaching courses is the easiest way to get started. And Dr. Cha~zay is one of the best in the world at teaching people how to teach the world what they love.

I have a brilliant and deep guest post by Dr. Cha~zay on how to teach courses in a mobile-first, cloud-first world:

Teach the World What You Love

You could very much change your future, or your kid’s future, or your friend’s future, or whoever you know that needs to figure out new ways to teach in a mobile first, cloud-first world.

The sooner you start doing, testing, and experimenting, the sooner you start figuring out what working in a Digital Economy could mean to you, your family, and your friends, in a mobile-first, cloud-first world.

The world changes. 

Do you?

Categories: Blogs

New Blog Entry at

NetObjectives - Wed, 11/18/2015 - 15:55
We’ve got another new entry over at the Sustainable Test-Driven Development blog, which is all about the way unit tests can be structured to make them better specifications. We welcome comments at the Sustainable Test-Driven Development blog site. Thanks!

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Lead Time Metrics: Why Weekends Matter

At the Agile2015 conference last August, I overheard a tool vendor demonstrating metrics reports to an interested visitor....

The post Lead Time Metrics: Why Weekends Matter appeared first on Blog | LeanKit.

Categories: Companies

Finitely Iterating Infinite Data With ES6 Generators

Derick Bailey - new ThoughtStream - Wed, 11/18/2015 - 14:30

ES6 generators provide a new way of working with data that should be iterated. In fact, generators produce an iterator when you initially call the generator function.

Infinite iterator

One of the things that you can do with a generator, that might not be obvious, is iterate over a never-ending series of data items while still allowing the iteration to be halted when you want it to be. 

Finite Iteration

Normally, iteration takes place over a finite set of items. There are a large number of ways to do this, including for loops, forEach loops and more:
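A minimal sketch of that kind of finite loop (the array and its contents here are illustrative):

var items = ["a", "b", "c"];

// classic for loop over a fixed-length array
for (var i = 0; i < items.length; i++){
  console.log(items[i]);
}

// forEach with a callback per item
items.forEach(function(item){
  console.log(item);
});

console.log("done iterating");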

In this example, there is a finite number of items in the list. Iteration happens and then the code continues on.

With generators, though, you can have an infinite set of data over which you can iterate.

Infinite Data

The methods of generating an infinite amount of data are numerous, but a simple example would be to have a counter that increments and then resets on a regular basis (to avoid number overflow, etc).

Once you have an infinite amount of data, you can modify the above code to use a generator and yield it.
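A sketch of what such a generator might look like (the reset threshold is an arbitrary choice):

function* numbers(){
  var i = 0;

  while (true){
    yield i;
    i += 1;

    // reset on a regular basis, to avoid number overflow
    if (i > 10000){ i = 0; }
  }
}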

From here, you can iterate over the data using a generator iterator.

Infinite Iteration

A generator function returns an iterator object when you execute it. The easiest way to iterate through the data in the iterator is with the new “for of” loop in ES6.
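Using the numbers generator sketched above, that might look like this:

// calling the generator function produces an iterator
var it = numbers();

// "for of" pulls values from the iterator until it reports
// that it is done - which, in this case, is never
for (var n of it){
  console.log(n);
}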

When you run this code, you will get an infinite loop of CPU-burning console.logs.

Be sure to have your ctrl-c fingers ready, because it won’t stop otherwise!

Finitely Iterating An Infinite Data Set

There are some real possible uses for this beyond burning your CPU up. For example, you may have an “infinite scroll” web app where you continuously load data as the user scrolls.

Data could be loaded in chunks, and then iterated to display a set number of items. Once the user scrolls to a certain spot, continue iterating the existing data instead of having to request more items. 

To do this, you would need to skip “for of” loops, however. That method of iteration will literally run forever if you let it. Instead, look at using the iterator API directly. 
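A sketch of that manual iteration, reusing the numbers generator from above (renderItem is a hypothetical stand-in for whatever displays a single item):

var iterator = numbers();

var i = 0;
var item = iterator.next();

// stop after 10 items, or when the iterator reports it is done
while (i < 10 && !item.done){
  renderItem(item.value); // render the current item
  i += 1;                 // increment the counter
  item = iterator.next(); // move the iterator to the next item
}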

In this example, the code is manually iterated using the “.next()” method on the iterator returned from the generator function. Once that happens, the current iteration number is checked, and a check is made to ensure iteration is not yet complete. Inside of the while loop, the item is rendered, the “i” counter is incremented and the iterator is moved to the next item. 

This loop repeats until the iterator runs out of items, or the “i” counter hits 10. To have it resume iteration on a scroll event or otherwise, a little more encapsulation will need to happen.

Resuming The Iteration

Whether you want to use a “more” button, a “next page” button or a scroll event doesn’t really matter. To have the rendering of items resume, you need to resume iteration. Fortunately, this is exactly what generators allow you to do!

By wrapping the generator code in a slightly nicer abstraction (function set), you can easily resume iteration when you want to.
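One possible shape for that abstraction, with the iterator and the current item kept outside the function so that every call picks up where the previous one stopped:

var iterator = numbers();
var item = iterator.next();

function continueRendering(){
  var i = 0;

  // render up to 10 more items, starting from wherever
  // the previous call left off
  while (i < 10 && !item.done){
    renderItem(item.value);
    i += 1;
    item = iterator.next();
  }
}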

With this setup, you can “continueRendering” whenever you want!
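For example (the “#more” selector is illustrative):

// render the first batch when the page loads, and the
// next batch on every click of a "more" button
$(function(){
  continueRendering();

  $("#more").on("click", continueRendering);
});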

In this case, the first rendering happens when the page loads (using jQuery) and then when a “more” button is clicked.

Easy Iteration, Many Options

The idea of infinite scrolling, or continuously loading / rendering new items with a “more” button click, is certainly not new. Plenty of websites have been doing this for a very long time. With generators, though, you have more options for pre-loading the data and only rendering a certain number of items.

With the generator version of this kind of code, iteration takes place when you want it to – when you call the “.next()” method on the iterator. This allows you to more easily handle an unknown number of data items. You no longer have to deal with fixed array sizes and ensuring you are looking at the next item by index. Generator iterators handle the complexity of figuring out which item is the current item.

There are still plenty of other good uses of generators, too, as I’ve talked about before. Any time you want to iterate through items, pause iteration and process them, or even abandon iteration entirely, generators may be a good choice. 



Still Need The Basics Of Generators?

If you’d like to start from scratch with generators – see how the protocol for iteration works, how you can use them to better handle asynchronous code, etc – check out my Learning ES6 Live! series on WatchMeCode.


This series includes several episodes to cover the ground-up learning experience of generators. You’ll see how they work, some of the more advanced possibilities with asynchronous code, and more!

Categories: Blogs

Docker to the on-premise rescue

Xebia Blog - Wed, 11/18/2015 - 11:18

During the second day at Dockercon EU 2015 in Barcelona, Docker introduced the missing glue, which they call the “Containers as a Service Platform”. With focus on both public cloud and on-premise, this is a great addition to the ecosystem. For this blogpost I would like to focus on the Run part of Docker’s “Build-Ship-Run” vision, with an emphasis on on-premise. To realize this, Docker launched the Docker Universal Control Plane, the project formerly known as Orca.

I got to play with version 0.4.0 of the software during a hands-on lab and I will try to summarize what I’ve learned.

Easy installation

Of course the installation is done by launching Docker containers on one or more hosts, so you will need to provision your hosts with the Docker Engine. After that you can launch an `orca-bootstrap` container to install, uninstall, or add an Orca controller. The orca-bootstrap script will generate a Swarm Root CA and an Orca Root CA and deploy the necessary Orca containers (I will talk more about this in the next section), after which you can log in to the Docker Universal Control Plane. Adding a second Orca controller is as simple as running orca-bootstrap with a join parameter and specifying the existing Orca controller.


Let’s talk a bit about the technical parts, and keep in mind that I’m not the creator of this product. There are 7 containers running after you have successfully run the orca-bootstrap installer. You have the Orca controller itself, listening on port 443, which is your main entry point to Docker UCP. There are 2 cfssl containers, one for the Orca CA and one for the Swarm CA. Then you have the Swarm containers (Manager and Agent) and the key-value store, for which Docker chose etcd. Finally, there is an orca-proxy container, whose port 12376 redirects to the Swarm Manager. I’m not sure why this is yet; maybe we will find out in the beta.

From the frontend (which we will discuss next) you can download a ‘bundle’, which is a zip file containing the TLS parts and a sourceable environment file containing:

export DOCKER_CERT_PATH=$(pwd)
export DOCKER_HOST=tcp://orca_controller_ip:443
# Run this command from within this directory to configure your shell:
# eval $(
# This admin cert will also work directly against Swarm and the individual
# engine proxies for troubleshooting.  After sourcing this env file, use
# "docker info" to discover the location of Swarm managers and engines.
# and use the --host option to override $DOCKER_HOST

As you can see, it also works directly against Swarm manager and Engine to troubleshoot. Running `docker version` with this environment returns:

Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 17:43:42 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      orca/0.4.0
 API version:  1.21
 Go version:   go1.5
 Git commit:   56afff6
 OS/Arch:      linux/amd64


Okay, so when I opened up the frontend it looked pretty familiar, and I was trying to remember where I had seen this before. After a look at the source, I found an ng-app parameter in the html tag named shipyard. The GUI is based on the Shipyard project, which is cool because this was an already well functioning management tool built upon Docker Swarm and the Docker API. People familiar with Shipyard already know the functionality, so let me quickly sum up what it can do and what it looks like in Docker UCP.

Dashboard overview

Application expanded: quickly start/stop/restart/destroy/inspect a running container

Application overview: graphs of resource usage; container IDs can be included or excluded from the graph

Containers overview: multi-select containers and execute actions

Ability to quickly inspect logs

Ability to exec into the container to debug/troubleshoot etc.

Secrets Management & Authentication/Authorization

So, in this hands-on lab there were a few things that were not ready yet. Eventually it will be possible to hook up Docker UCP to an existing LDAP directory, but I was not able to test this yet. Once fully implemented, you can hook it up to your existing RBAC system and give teams the authorization they need.

There was also a demo showing off a secret management tool, which also was not yet available. I guess this is what the key-value store is used for as well. Basically you can store a secret at a path such as secret/prod/redis and then access it by running a container with a label like:

docker run -ti --rm --label com.docker.secret.scope=secret/prod

Now you can access the secret within the container in the file /secret/prod/redis.

Now what?

A lot of new things are being added to the ecosystem, which is certainly going to help the adoption of Docker for some customers and bring it into production. I like that Docker thought of the on-premise customers and delivers them an experience equal to that of the cloud users. As this is an early version they need feedback from users, so if you are able to test it, please do so in order to make it a better product. They said they are already working on multi-tenancy, for instance, but no timelines were given.

If you would like to sign up for the beta of Docker Universal Control Plane, you can sign up at this page:



Categories: Companies

How Mattermark is Creating Market Intelligence for the Startup Community

Pivotal Tracker Blog - Tue, 11/17/2015 - 23:56

We know that funding is critical for the life of a startup, and founders can opt to bootstrap or raise capital from VCs. Many do both as we saw in Episode 8, when I interviewed Melody McCloskey.

Then in Episode 9, I spoke to Shruti Gandhi, the founding and managing partner at Array VC, a fund that invests in early stage startups. Shruti shared with us the different ways investors can help a company grow, plus tips for dealing with different types of investors, and how to dig into an investor’s thesis to see if they’re the right fit for you.

In today’s episode, we’re going to expand beyond the mechanics of funding, and learn about a startup that’s helping investors and founders learn all they can to make decisions when it comes to investing. Danielle Morrill is the CEO and co-founder of Mattermark, a data platform that keeps track of startups and their growth signals.

Danielle began her startup career working with Pelago, then went on to become the first employee at Twilio, and now she’s launched Mattermark.  

Her goal is to make Mattermark the go-to source for information about startups and their investment potential. Think of it as Bloomberg for private companies.  

In this episode, you’ll learn:

  • How Mattermark compiles information about private companies and helps investors make informed investment decisions;
  • What Mattermark’s Startup Index and Growth Score are, and how they benefit startups;
  • How startup founders can benefit from Mattermark using it as a one-stop shop for finding the right investors; and
  • How Mattermark helps startups discover potential customers.

It was also fascinating to hear Danielle’s thoughts on the future of public companies and how she believes it’s more important than ever for investors and start-up founders to be as knowledgeable as possible. I’m interested to see how the Growth Score develops, too. How about you?

Share your thoughts and questions about today’s episode in the comments below.

Please subscribe to the Femgineer TV YouTube channel so you’ll be the first to know when our new episode launches. In December, Sandi MacPherson will share with us how she went from being a climate change scientist to the founder of Quibb. Her story will inspire you.

The post How Mattermark is Creating Market Intelligence for the Startup Community appeared first on Pivotal Tracker.

Categories: Companies

Agile Project Management with Kanban

TV Agile - Tue, 11/17/2015 - 20:30
There’s a way to organize your work, stay focused, avoid mistakes, and be hyper-productive that you can learn in five minutes using sticky notes and markers. It’s been used by Toyota to make cars, by Xbox to build software, and by individuals to maintain sanity. It’s called Kanban, and Eric Brechner, an Xbox development manager, […]
Categories: Blogs

Clarizen Fall Release Enhances JIRA Integration

Scrum Expert - Tue, 11/17/2015 - 18:08
Clarizen has announced its Fall 2015 Product Release, delivering dynamic new options for viewing work, as well as a stronger integration between Atlassian JIRA and Clarizen. The enhanced JIRA integration allows JIRA users’ work to be reflected and managed in Clarizen, providing full content creation and collaboration capabilities. The JIRA v2 integration strengthens everyone’s work connections and aligns teams with an enterprise-grade bidirectional integration. When one ...
Categories: Communities

Free Retrospective Tools for Distributed Scrum Teams

Scrum Expert - Tue, 11/17/2015 - 17:06
Even if Agile approaches favor collocated teams, distributed Scrum teams are more common than we might think. Many Agile software development teams are based on a virtual organization. This article presents some free online tools that can be used to facilitate retrospectives for distributed Scrum teams. You will find in this article only tools that are supposed to be used for free in the long ...
Categories: Communities

Javascript Goes Back to Class

Not long ago at a user group I saw a strange piece of sample code like this on an overhead projector:

class Person {

  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }

  fullName() {
    return this.firstName + ' ' + this.lastName;
  }
}


I chuckled a little bit inside. I’ve heard plenty of arguments over the years that Javascript’s prototypical inheritance was the right way to do things and that trying to force traditional OO on Javascript was doing it all wrong:

If you’re creating constructor functions and inheriting from them, you haven’t learned JavaScript. It doesn’t matter if you’ve been doing it since 1995. You’re failing to take advantage of JavaScript’s most powerful capabilities. — Eric Elliot

It turns out ECMAScript 6 has officially added class-style OO to the language. So the needs of the many occasional Javascript developers who wanted a more familiar-looking construct, one that would be at home in Java, C#, or Ruby, eventually won.
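For comparison, the pre-ES6 equivalent uses a constructor function and its prototype, which is roughly what the new class syntax desugars to:

function Person(firstName, lastName) {
  this.firstName = firstName;
  this.lastName = lastName;
}

Person.prototype.fullName = function() {
  return this.firstName + ' ' + this.lastName;
};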

Categories: Blogs

How team diversity enables continuous improvement

Ben Linders - Mon, 11/16/2015 - 23:57
Continuous improvement requires that people reflect and find ways to do their work in a better way. Having diversity in agile teams makes it possible to discover and explore new ways of working, where uniform teams with identical kinds of people would aim for steadiness and not want things to change. Let's explore how diversity can enable continuous improvement using agile retrospectives. Continue reading →
Categories: Blogs
