
Feed aggregator

Free Retrospective Tools for Distributed Scrum Teams

Scrum Expert - Tue, 11/17/2015 - 17:06
Even if Agile approaches favor collocated teams, distributed Scrum teams are more common than we might think. Many Agile software development teams are based on a virtual organization. This article presents some free online tools that can be used to facilitate retrospectives for distributed Scrum teams. You will find in this article only tools that are supposed to be used for free in the long ...
Categories: Communities

Javascript Goes Back to Class

Not long ago at a user group I saw a strange piece of sample code like this on an overhead projector:

class Person {

  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }

  fullName() {
    return this.firstName + ' ' + this.lastName;
  }
}

I chuckled a little bit inside. I’ve heard plenty of arguments over the years that Javascript’s prototypal inheritance was the right way to do things and that trying to force traditional OO on Javascript was doing it all wrong:

If you’re creating constructor functions and inheriting from them, you haven’t learned JavaScript. It doesn’t matter if you’ve been doing it since 1995. You’re failing to take advantage of JavaScript’s most powerful capabilities. — Eric Elliott

It turns out ECMAScript 6 has officially added class-style OO to the language. So the desire of many occasional Javascript developers for a more familiar construct, one that would be at home in Java, C#, or Ruby, eventually won out.
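For contrast, here is a sketch (my own, not from the talk) of the pre-ES6, prototype-style equivalent of that class:

```javascript
// Constructor function: the pre-ES6 way to model a Person "class"
function Person(firstName, lastName) {
  this.firstName = firstName;
  this.lastName = lastName;
}

// Methods go on the prototype and are shared by all instances
Person.prototype.fullName = function () {
  return this.firstName + ' ' + this.lastName;
};

var ada = new Person('Ada', 'Lovelace');
console.log(ada.fullName()); // Ada Lovelace
```

The ES6 class syntax is sugar over exactly this mechanism; instances still delegate to Person.prototype at runtime.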

Categories: Blogs

How team diversity enables continuous improvement

Ben Linders - Mon, 11/16/2015 - 23:57
Continuous improvement requires that people reflect and find ways to do their work in a better way. Having diversity in agile teams makes it possible to discover and explore new ways of working, where uniform teams of identical kinds of people would aim for steadiness and resist change. Let's explore how diversity can enable continuous improvement using agile retrospectives. Continue reading →
Categories: Blogs

Android Update—Version 0.9

Pivotal Tracker Blog - Mon, 11/16/2015 - 22:50

Since we first released the Android Alpha over the summer, we’ve been hard at work revising and improving it to stay on the path we outlined. And now we’re pleased to release Android v0.9, with a host of updates. Get it now!

About those updates. Here’s what we changed for this new version:

* We removed the Everything panel and replaced it with Backlog, Icebox, and Done panels, which should be vastly more useful, as well as more reflective of the Tracker Web Experience (coming to IMAX theaters in 2017).

* The aforementioned Backlog panel has also been gussied up with iteration markers, for your iterating pleasure.

* At long last, you can add a story! For now, you can only add that story to the Icebox, but more story adding options are on the horizon.

* We conducted a thorough zapping of the bugs.

More updates are coming soon. After you have a chance to play around with it, let us know how this update is working out for you. Please send your feedback to



The post Android Update—Version 0.9 appeared first on Pivotal Tracker.

Categories: Companies

Going Down Under: Workshops in Melbourne, Australia

Ben Linders - Mon, 11/16/2015 - 22:31

In February 2016 I will be giving three workshops in Melbourne, Australia: one on continuous improvement and two on agile retrospectives. The workshops are organized by Elabor8, who invited me to come to Australia:

We are extremely excited to have Ben Linders over from Europe for a series of workshops in February. Ben is the co-author of “Getting Value out of Agile Retrospectives” and is a regular keynote conference speaker (including Agile Greece Summit 2015 and QCon Beijing 2015).

I will … Continue reading →

Categories: Blogs

Peter Drucker understood agile leadership and agility before it even existed!

Agile For All - Bob Hartman - Mon, 11/16/2015 - 21:52

Management and leadership in the 21st century need to be significantly different or businesses will be left behind. People recognize this and management is slowly changing from what has been known as “Taylorism” or “scientific management” to something that has a variety of names, but the easiest one for me to relate to is “agile leadership.” I remember Certified Scrum Trainer and Coach, Pete Behrens, describing the agile leadership section of the Certified Scrum Coach application by saying something like, “We want coaches that understand agile leadership is both a verb and a noun.” What he meant is we often think of agile leadership as a noun and list behaviors and patterns that identify a style we call agile leadership. As a verb he meant we want leaders that actively lead organizations and people in agile ways. We want there to be evidence of doing Agile (verb) so that we can see the identifiable Agile result (noun).

Famed management guru Peter F. Drucker understood this and started fighting it way before the world was actually ready for it. While Drucker passed away in 2005, his legacy lives on in a variety of ways, including the annual Drucker Management Forum, which in 2015 was sponsored in part by the Scrum Alliance. Drucker’s legacy outlived him. His ideas are standing the test of time and have actually become more important in the 21st century, as the speed of change in industries, markets, and business models is increasing at rates beyond anything experienced in prior history.

Lately I’ve been reflecting on various Drucker quotes and how they align so perfectly with what we currently think of as agility and agile leadership. For example, one famous quote is:

There is nothing so useless as doing efficiently that which should not be done at all.

The following quote was written many years later:

Simplicity–the art of maximizing the amount of work not done–is essential.

Many of you recognize the second quote as the 10th principle behind the “Manifesto for Agile Software Development,” more commonly called just the “Agile Manifesto.” Of course the authors of the Agile Manifesto may have read similar quotes from other authors, including Antoine de Saint Exupéry (L’Avion), who said, “It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove.” Getting more modern, Lean Startup has the concept of the “Minimum Viable Product.” All of these are very similar thoughts.

What amazes me is that Peter Drucker wasn’t taking a product point of view; instead he was taking a management point of view! Think about this statement in management terms for a moment. He wasn’t stating the obvious by saying we shouldn’t spend time efficiently creating something people wouldn’t use. He was saying we should examine everything we DO and make sure we aren’t DOING something that shouldn’t be done at all! That’s actually kind of mind-blowing when I think about it.

For example, how many of us are really efficient at reading all kinds of news on the web (I’ll raise my hand on this one!)? And how many of those with your hands raised should be spending more time exercising (I grudgingly keep my hand in the air)? What, that’s not a good example because it isn’t work related? Ok, how about this one: How many of you are managers that efficiently generate metrics about your team? That might be good, but here’s the kicker: how many of you with your hands up actually have metrics that are accurate AND actually can be used to confirm or change behavior in a meaningful way? I’ll do another blog entry on a Drucker quote about metrics in the future, but for now let me just give some data from the Certified ScrumMaster courses I teach: Having had several thousand people in my courses answer those same questions, I have observed a very small percentage of people actually have accurate and meaningful metrics that confirm or change behavior. In other words, most of our metrics stink, are not useful, or worse – they are misused! If that’s the case, why should we be generating them at all? Please don’t take this to say that I think all metrics are useless. I don’t think that at all, but I do think the majority of what we measure is useless because the data is inaccurate, non-objective, biased, gamed, or simply that we don’t use the results effectively or correctly.

When Agile For All works with companies, we walk them through something I’ll call the “Agile For All Way.” I’ll admit, that’s a self-serving name, but the concepts boil down to treating companies as unique in what they do, while recognizing their problems are rarely unique. Part of what we do is to help organizations understand everything they do relative to delivering value. It is a rare company that couldn’t drop significant amounts of work they believe is important, but actually doesn’t contribute to delivering anything of value. There are many feelings that come into play when we do this, and the biggest are fear and loss of control. Reactions at this point in a transformation always fascinate me because while mentally people see that something is not valuable, they are so attached to it they can’t give it up. I understand that, and I’m very empathetic to their feelings, but at the same time I know that breaking that bond for the first time will make everything easier.

So my challenge to you, delivered through Peter Drucker’s quote: “There is nothing so useless as doing efficiently that which should not be done at all” is to take a hard look at what you are doing as part of your job. Be particularly concerned with things you do habitually, or things you do “because it’s always been done that way.” Find the things that are being done really efficiently and don’t need to be done at all! I often say, “Doing Scrum is the easy part, thinking in an agile way is the hard part!” This may be your first step toward changing how you think, and by so doing, you will be living the verb “agile leadership” as a step toward showing others the “agile leadership” noun in you.

I’m hoping to have a few more blog entries about Drucker quotes in the coming months. If there is one you find particularly appealing, please leave a comment.

If you liked this article, please recommend it and share it with others using the buttons below. Also feel free to comment on the article so others can see how you feel about it.

The post Peter Drucker understood agile leadership and agility before it even existed! appeared first on Agile For All.

Categories: Blogs

Agile Frameworks and Template Zombies

Leading Agile - Mike Cottmeyer - Mon, 11/16/2015 - 16:49

A while back, I had a notable conversation with our COO, Dennis Stevens.  When not sharing war stories from the Marine Corps or trying to one-up each other on who has more scars, we have constructive conversations around the correlation between team delivery metrics and team level competencies we assess, change management, and other “work” stuff.  This time, we were talking about Agile Frameworks and Templates.


A few weeks ago was Halloween. It reminded me about a book I wrote a few years ago titled Zombie Project Management.  It was fun to poke at so many things we all take for granted and don’t really think about.  On the one hand, you have the craft of Project Management. You have skilled professionals managing projects from start to finish.  On the other hand, you have zombies.  Zombies crave to eat living flesh though it will never satisfy their hunger.  Zombies do things because they don’t know any better.  They are compelled to do what they do.  Sadly, Dennis and I see people act like zombies every day.  Here are a few examples.

Agile Frameworks

It doesn’t matter if you’re choosing an Agile development framework like SAFe or an Agile Transformation Framework like the LeadingAgile Basecamp model: the models and frameworks are incomplete, by design!  They need to be adapted to meet your organizational goals.  Do you think the Agile Manifesto would have lasted as long as it has if it answered all of your questions in two pages?  To that point, if you think all of your questions are completely answered by a single “big picture” poster, you’re being naive. But that’s exactly what we see happening. I hereby give you permission to mix and mash whatever you need to make your organization operate better. Do you really think the Agile police are going to break down your door because you’re not following a framework as it was originally written? What happens if the author or creator changes it? Does that mean your business is now broken?

Don’t just follow the horde.  Look for a framework that looks like a potential organizational end-state.  Evaluate what your company values from a planning perspective. Next, evaluate what your customers value from a planning perspective.  Pick a framework and then refine it (through structure, governance, and metrics/tools) to align with an ideal end-state.


I’ve seen one too many companies do TDD.  Wait, don’t we want our clients to do TDD (Test Driven Development)?  Yes, but what I see happening is Tool Driven Development.  Tell me if this sounds familiar. Someone buys an expensive tool but they don’t know how to customize it to align with the organization. So, they change the processes to align with the tool.  I’ve lost count of how many people call a user story a “JIRA”.  I’ve also seen countless people do away with valuable activities that provide a shared understanding, merely because they weren’t trained how to leverage a tool or the tool is offline.  Yes, tools can be critical when a company scales. But, if you’re sitting next to someone on your team and your Instant Messaging client goes down, do you not talk with them? Do you cancel the daily standup?

Templates (and Template Zombies)

The term template zombies comes from the book Adrenaline Junkies and Template Zombies: Understanding Patterns of Project Behavior by Tom DeMarco et al. (Dorset House).

The authors’ definition:

Template Zombie: The project team allows its work to be driven by templates instead of by the thought process necessary to deliver products.

Even outside the application development world, we use templates as a pattern for processes such as painting, cutting, or drilling. In the application development world, we use templates as a preset format for some kind of knowledge work so that something does not have to be created from scratch.  I understand that we’re trying to save time and lower our rate of errors. But while we’re trying to save time, we sometimes turn into zombies. We forget that we need to provide context to what we’re working on and sometimes extend the template we started with.  If I’m provided a single template and it doesn’t align with my vision, I’m either going to move beyond the template or I’m not going to use it at all.  I’m not going to discount years of experience, just to follow a template someone else wrote.


Use your brains, trust your gut, and use some common sense. If the framework, tool, or template doesn’t look like it’s going to work for you, don’t sweat it. There are countless companies and people out there willing to solve the problem. It may not be cheap, but at least it will be correct.

The post Agile Frameworks and Template Zombies appeared first on LeadingAgile.

Categories: Blogs

Creating Great Estimates as a Team

Johanna Rothman - Mon, 11/16/2015 - 16:18

I’ve been teaching workshops these last few weeks. A number of the participants think that they need to create great estimates. I keep hearing, “I have to create accurate estimates. My team needs my estimate to be accurate.”

I have found that the smaller the work, the better the estimate. If people work as a team, they can provide more accurate estimates than they can alone. And, if they work as a team, the more likely they are to meet the estimate.

The people in my workshops did not want to hear this. Many of them wanted to know how to create an estimate for “their” work, accounting for multitasking.

I don’t know how to create great estimates when people assume they work alone, or if they multitask.

In all of my experience, software is a team activity (especially if you want to use agile or lean). For me, creating an estimate of “my” work is irrelevant. The feature isn’t done until it’s all done.

When we create solo estimates, we reinforce the idea that we work alone. We can work alone. I have discovered I have different ideas when I pair. That’s one of the reasons I ask for review, if I am not actively pairing. I have also discovered that I find problems earlier when I pair or ask for frequent review. That changes my overall estimate.

Multitasking creates context switching, with built-in delays. (See Cost of Delay Due to Multitasking, Part 2 or Diving for Hidden Treasures.) I don’t know how to account for the context-switch times. For me, the context-switching time varies, and depends on how many switches I need to do.

If you want to create great estimates, estimate as a team. For hints, see Predicting the Unpredictable: Pragmatic Approaches to Estimating Project Cost or Schedule.

I urge you to make the thing you estimate small, to consider how you work with other people to deliver it, and to do one chunk of work at a time. All of those ideas will help you create better estimates. Not for “your” work, but for the work you deliver to your customer.

Categories: Blogs

Kill .apply With The …Spread Operator

Derick Bailey - new ThoughtStream - Mon, 11/16/2015 - 14:30

A question was asked in a conversation about ES6 recently, about the real use of the ES6 spread operator. This person wanted a slightly better explanation than they had seen previously, as they weren’t yet sure of where this new operator was really useful. 

Spread operator

There are several things that this new operator can do, but one of the most useful may be that you no longer need fn.apply in many cases.

An Array Of Arguments

It’s common to be given an array and need to pass that array as the list of arguments to a function. When you pass that array without any special code or syntax – just passing it as a parameter – the whole array gets assigned to the named argument within the function.

But what happens when you want the values in the array to be applied to each individual named argument within the function? In the past, this was handled with the .apply method on a function.
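The post’s original code samples didn’t survive the feed, so here is a minimal sketch of the .apply approach (the `add` function and `numbers` array are my own stand-ins, not from the original post):

```javascript
function add(a, b, c) {
  return a + b + c;
}

var numbers = [1, 2, 3];

// .apply takes a "this" value first, then an array whose elements
// are distributed across the named arguments a, b, and c
var result = add.apply(null, numbers);

console.log(result); // 6
```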

With this syntax, your array would be torn apart and put into each of the arguments. This is a really useful tool to have – and with ES6 and the spread operator, it gets even easier.

Use The Spread Operator

ES6 adds a new form of syntax where you can pass an array to a function and have its contents “spread” across the method’s arguments. This is done using a “…array” syntax – called the spread operator.

With this, the above call to the .apply method can be replaced in many cases.
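As a minimal sketch (my own example, not from the original post), the spread version looks like this:

```javascript
function add(a, b, c) {
  return a + b + c;
}

var numbers = [1, 2, 3];

// ...numbers spreads the array's contents across a, b, and c,
// with no .apply and no explicit "this" to worry about
var result = add(...numbers);

console.log(result); // 6
```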

These two chunks of code (the .apply version above, and this one) are almost equivalent – in many cases they are equivalent in result. But, there is a legitimate difference that needs to be understood: “this”.

Handling “this” With .apply vs …spread

With the call to .apply, the first parameter will always be used to set the value of “this” within the function. This value may not matter at all – when “this” is not used within the function – and passing “undefined” is a valid option.

If your function uses “this”, however, then you would run into a problem with “undefined” as the first parameter – “this” would be undefined.

To correct that, you need to include your expected “this” as the first parameter.

With the spread operator, though, you don’t need to adjust for “this”. Your function can be called with an array as the list of parameters, and the implied value of “this” is managed through the function invocation mechanism.

In this version, there is no need to specify the value of “this” if the implicit “this” is what you want. The spread operator allows the call to keep “obj” as “this” inside of the function.
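A sketch of the difference (the `person` object and the names are mine, not from the original post):

```javascript
var person = {
  greeting: 'Hello',
  greet: function (first, last) {
    return this.greeting + ', ' + first + ' ' + last;
  }
};

var names = ['Ada', 'Lovelace'];

// With .apply, "this" must be supplied explicitly as the first argument
var viaApply = person.greet.apply(person, names);

// With spread, the normal method-call mechanics set "this" to person
var viaSpread = person.greet(...names);

console.log(viaApply);  // Hello, Ada Lovelace
console.log(viaSpread); // Hello, Ada Lovelace
```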

Spread Won’t Completely Kill .apply

The usefulness of the spread operator extends far beyond the application of arrays over function arguments – but this is one of the big benefits that they provide. With this syntax, you can skip the call to .apply in many cases. It reduces the code a little, but more importantly it takes away an extra layer of required knowledge and complexity. 

However, the spread operator will not set “this” in any way other than the mechanics of calling the functions. If you need to explicitly set the value of “this”, the .apply call is still going to be valuable. Even with this limitation, having the spread operator available should reduce the amount of code required to make calls to functions with an array as a list of arguments. 

Categories: Blogs

Agile and Scrum Trello Extensions

Scrum Expert - Mon, 11/16/2015 - 14:00
Trello is a free on-line project management tool that provides a flexible and visual way to organize anything. This approach is naturally close to the visual boards used in the Scrum or Kanban approaches. As the tool has an open architecture, some extensions have been developed for a better implementation of Agile project management in Trello and provide additional features like Scrum burndown charts. Updates November 16 ...
Categories: Communities

Unit Tests, How to Write Testable Code and Why it Matters

Sergey Kolodiy - Mon, 11/16/2015 - 01:19
Sergey Kolodiy wrote a very nice article: Unit Tests, How to Write Testable Code and Why it Matters
Categories: Communities

jq: Filtering missing keys

Mark Needham - Sun, 11/15/2015 - 00:51

I’ve been playing around with the API again over the last few days and having saved a set of events to disk I wanted to extract the venues using jq.

This is what a single event record looks like:

$ jq -r ".[0]" data/events/0.json
{
  "status": "past",
  "rating": {
    "count": 1,
    "average": 1
  },
  "utc_offset": 3600000,
  "event_url": "",
  "group": {
    "who": "Web Peeps",
    "name": "London Web",
    "group_lat": 51.52000045776367,
    "created": 1034097743000,
    "join_mode": "approval",
    "group_lon": -0.12999999523162842,
    "urlname": "londonweb",
    "id": 163876
  },
  "name": "London Web Design October Meetup",
  "created": 1094756756000,
  "venue": {
    "city": "London",
    "name": "Roadhouse Live Music Restaurant , Bar & Club",
    "country": "GB",
    "lon": -0.1,
    "phone": "44-020-7240-6001",
    "address_1": "The Piazza",
    "address_2": "Covent Garden",
    "repinned": false,
    "lat": 51.52,
    "id": 11725
  },
  "updated": 1273536337000,
  "visibility": "public",
  "yes_rsvp_count": 2,
  "time": 1097776800000,
  "waitlist_count": 0,
  "headcount": 0,
  "maybe_rsvp_count": 5,
  "id": "3261890"
}

We want to extract the keys underneath ‘venue’.
I started with the following:

$ jq -r ".[] | .venue" data/events/0.json
{
  "city": "London",
  "name": "Counting House Pub",
  "country": "gb",
  "lon": -0.085022,
  "phone": "020 7283 7123",
  "address_1": "50 Cornhill Rd",
  "address_2": "EC3V 3PD",
  "repinned": false,
  "lat": 51.513407,
  "id": 835790
}
null
{
  "city": "Paris",
  "name": "Mozilla Paris",
  "country": "fr",
  "lon": 2.341002,
  "address_1": "16 Bis Boulevard Montmartre",
  "repinned": false,
  "lat": 48.871834,
  "id": 23591845
}

This is close to what I want but it includes ‘null’ values which means when you extract the keys inside ‘venue’ they are all empty as well:

$ jq -r ".[] | .venue | [.id, .name, .city, .address_1, .address_2, .lat, .lon] | @csv" data/events/0.json
101958,"The Green Man and French Horn,  -","London","54, St. Martins Lane - Covent Garden","WC2N 4EA",51.52,-0.1
107295,"The Yorkshire Grey Pub","London","46 Langham Street","W1W 7AX",51.52,-0.1

In functional programming lingo we want to filter out any JSON documents which don’t have the ‘venue’ key.
‘filter’ has a different meaning in jq so it took me a while to realise that the ‘select’ function was what I needed to get rid of the null values:

$ jq -r ".[] | select(.venue != null) | .venue | [.id, .name, .city, .address_1, .address_2, .lat, .lon] | @csv" data/events/0.json | head
11725,"Roadhouse Live Music Restaurant , Bar & Club","London","The Piazza","Covent Garden",51.52,-0.1
11725,"Roadhouse Live Music Restaurant , Bar & Club","London","The Piazza","Covent Garden",51.52,-0.1
11725,"Roadhouse Live Music Restaurant , Bar & Club","London","The Piazza","Covent Garden",51.52,-0.1
11725,"Roadhouse Live Music Restaurant , Bar & Club","London","The Piazza","Covent Garden",51.52,-0.1
76192,"Pied Bull Court","London","Galen Place, London, WC1A 2JR",,51.516747,-0.12719
76192,"Pied Bull Court","London","Galen Place, London, WC1A 2JR",,51.516747,-0.12719
85217,"Earl's Court Exhibition Centre","London","Warwick Road","SW5 9TA",51.49233,-0.199735
96579,"Olympia 2","London","Near Olympia tube station",,51.52,-0.1
76192,"Pied Bull Court","London","Galen Place, London, WC1A 2JR",,51.516747,-0.12719
101958,"The Green Man and French Horn,  -","London","54, St. Martins Lane - Covent Garden","WC2N 4EA",51.52,-0.1

And we’re done.
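A shorter variant that gets the same result (my own, not from the post): the `//` alternative operator replaces a null `.venue` with `empty`, which simply drops that record from the stream:

```shell
echo '[{"venue":{"id":1,"name":"A"}},{"name":"no venue"}]' |
  jq -r '.[] | (.venue // empty) | [.id, .name] | @csv'
# 1,"A"
```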

Categories: Blogs

Single Prioritised Input Queue

Problem Context
There are multiple sources for requests to do work which leads to unclear priority OR a de facto prioritisation approach based on who shouted loudest most recently.

Most things seem to take too long to complete which leads to more direct requests to individuals to expedite work.

People doing the work are frustrated by the inability to focus on anything as interruptions are constant.
Solution
Move all requests to a single input queue with a common prioritisation method - no requests are processed unless they come from this queue.

The prioritisation method may or may not allow explicit expediting but it will be a common policy, not a backdoor request.
Expected Consequences
  • People making requests who were used to jumping the queue may feel disappointed
  • Average lead time will drop and become more predictable
  • People doing the work will feel more focus and satisfaction
Categories: Blogs

Are your node modules secure?

Xebia Blog - Fri, 11/13/2015 - 14:05

With over 200k packages, npm is the world's largest registry of open source packages. It serves several million downloads each month. The popularity of npm is a direct result of the popularity of JavaScript. Originally npm was the package manager for Node.js, the server-side JavaScript runtime. Since Node.js developers mostly follow the Unix philosophy, the npm registry contains many very small libraries tailored to a specific purpose. Since the introduction of Browserify, many of these libraries suddenly became suitable for use in the web browser. It has made npm not only the package manager for Node.js, but for the entire JavaScript ecosystem. This is why npm is not an abbreviation of Node Package Manager, but a recursive bacronymic abbreviation for "npm is not an acronym". Wow.

If you do any serious JavaScript development, you cannot go without libraries, so npm is an indispensable resource. Any project of meaningful size is quickly going to rely on several dozen libraries. Considering that these libraries often have a handful of dependencies of their own, your application indirectly depends on hundreds of packages. Most of the time this works out quite well, but sometimes things aren't that great. It turns out that keeping all of these dependencies up to date can be quite a challenge. Even if you frequently check your dependencies for updates, there's no guarantee that your dependencies' authors will do the same. With the pace at which new JavaScript packages are being released, it's close to impossible to keep everything up to date at all times.

Most of the time it's not a problem to rely on an older version of a package. If your package works fine with an outdated dependency, there's no compelling reason to upgrade. Why fix something that isn't broken? Unfortunately, it's not so easy to tell if it is. Your package may have been broken without your knowledge. The problem is in the definition of "broken". You could consider it to mean your application doesn't work in some way, but what about the non-functionals? Did you consider the fact that you may be relying on packages that introduce security vulnerabilities into your system?

Like any software, Node.js and JavaScript aren't immune to security issues. You could even consider JavaScript inherently less secure because of its dynamic nature. The Node Security Project exists to address this issue. It keeps a database of known security vulnerabilities in the Node ecosystem and allows anyone to report them. Although NSP provides a command line tool to check your dependencies for vulnerabilities, a new company called Snyk has recently released a tool to do the same and more. Snyk, short for "so now you know", finds security vulnerabilities in your entire dependency tree based on the NSP database and other sources. Its CLI tool is incredibly simple to install and use. Just `npm install snyk` and off you go. You can run it against your own project, or against any npm package:

> snyk test azure

✗ Vulnerability found on validator@3.1.0
From: azure@0.10.6 > azure-arm-website@0.10.0 > azure-common@0.9.12 > validator@~3.1.0
No direct dependency upgrade can address this issue.
Run `snyk protect -i` to patch this vulnerability
Alternatively, manually upgrade deep dependency validator@~3.1.0 to validator@3.2.0


Tested azure for known vulnerabilities, found 32 vulnerabilities.

It turns out the Node.js library for Azure isn't quite secure. Snyk can automatically patch the vulnerability for you, but the real solution is to update the azure-common package to use the newer version of validator. As you see, most of the security issues reported by Snyk have already been fixed by the authors of the affected library. That's the real reason to keep your dependencies up to date.

I think of Snyk as just another type of code quality check. Just like your unit tests, your build should fail if you've accidentally added an insecure dependency. A really simple way to enforce it is to use a pre-commit hook in your package.json:

"scripts": {
  "lint": "eslint src test",
  "snyk": "snyk test",
  "test": "mocha test/spec"
},
"pre-commit": ["lint", "test", "snyk"]

The pre-commit hook will automatically be executed when you try to commit to your Git repository. It will run the specified npm scripts and if any of them fail, abort the commit. It must be noted that, by default, Snyk will only test your production dependencies. If you want it to also test your devDependencies you can run it with the `--dev` flag.

Categories: Companies

5 Ways to Find Slack Time for Critical IT Improvements

As an IT department, you receive so many requests that you often have to put your own...

The post 5 Ways to Find Slack Time for Critical IT Improvements appeared first on Blog | LeanKit.

Categories: Companies

SonarQube Enters the Security Realm and Makes a Good First Showing

Sonar - Thu, 11/12/2015 - 16:45

For the last year, we’ve been quietly working to add security-related rules in SonarQube’s language plugins. At September’s SonarQube Geneva User Conference we stopped being quiet about it.

About a year ago, we realized that our tools were beginning to reach the maturity levels required to offer not just maintainability rules, but bug and security-related rules too, so we set our sights on providing an all-in-one tool and started an effort to specify and implement security-related rules in all languages. Java has gotten the furthest; it currently has nearly 50 security-related rules. Together, the other languages offer another 50 or so.

That may not sound like a lot, but I’m pleased with our progress, particularly when tested against the OWASP Benchmark project. If you’ve heard of OWASP before, it was probably in the context of the OWASP Top 10, but OWASP is an umbrella organization with multiple projects under it (kinda like the Apache Foundation). The Top 10 is OWASP’s flagship project, and the benchmark is an up-and-comer.

The benchmark offers ~2700 Java servlets that do and do not demonstrate vulnerabilities corresponding to 11 different CWE items. The CWE (Common Weakness Enumeration) contains about 1,000 items, and broadly describes patterns of insecure and weak code.

The guys behind the benchmark are testing all the tools they can get their hands on and publishing the results. For commercial tools, they're only publishing an average score (because the tool licenses don't allow them to publish individual, named scores). For open source tools, they're naming names. :-)

When I prepared my slides for my "Security Rules in SonarQube" talk, the SonarQube Java Plugin arguably had the best score, finding 50% of the things we were supposed to find and only flagging 17% of the things we should have ignored, for an overall score of 33% (50-17 = 33). Compare that to the commercial average, which has a 53% True Positive Rate and 28% False Positive Rate for a final score of 26%. Since then, a new version of Find Security Bugs has been released, and its spot on the graph has jumped some, but I'm still quite happy with our score, both in relative and absolute terms. Here's the summary graph presented on the site:
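The benchmark's scoring can be sketched in a couple of lines: the overall score is simply the true-positive rate minus the false-positive rate, with both expressed as percentages. Using the SonarQube figures quoted above:

```javascript
// OWASP Benchmark overall score: reward true positives, penalize false
// positives. A perfect tool scores 100; a tool that flags everything
// (or nothing) scores close to 0.
function benchmarkScore(truePositiveRate, falsePositiveRate) {
  return truePositiveRate - falsePositiveRate;
}

console.log(benchmarkScore(50, 17)); // SonarQube Java plugin: 33
```

This is also why a tool can sit higher on the true-positive axis yet end up with a lower overall score: its false positives pull the total back down.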

Notice that the dots are positioned based on the True Positive Rate (y-axis) and False Positive Rate (x-axis). Find Security Bugs is higher on the True Positive axis than SonarQube, which threw me for a minute, but it's also further out on the False Positive axis too. That's why I graphed the tools' overall scores:

Looked at this way, it’s probably quite clear why I’m still happy with the SonarQube Java scores. But I’ll give you some detail to show that it isn’t (merely) about one-upsmanship:

This graph shows the Java plugin’s performance on each of the 11 CWE code sets individually. I’ll start with the five 0/0 scores in the bottom-left. For B, E, G, and K we don’t yet have any rules implemented (they’re “coming soon”). So… yeah, we’re quite happy to score a 0 there. :-) For F, SQL Injection, we have a rule, but every example of the vulnerability in this benchmark slips through a hole in it. (That should be fixed soon.) On a previous version of the benchmark, we got a better score for SQL Injection, but with the newest iteration, the code has been pared from 21k files to 2.7k, and apparently all the ones we were finding got eliminated. That’s life.

For A and D, it’s interesting to note that while the dots are placed toward the upper-right of the graph, they have scores of -2% and 0% respectively. That’s because the false positives cancelled out the true positives in the scoring. Clearly, we’d rather see a lower false positive rate, but we knew we’d hit some FP’s when we decided to write security rules. And with a mindset that security-related issues require human verification, this isn’t so bad. After all, what’s worse: manually eliminating false positives, or missing a vulnerability because of a false negative?

For ‘I’, we’ve got about the best score we can get. The cases we’re missing are designed to be picked up only by dynamic analysis. Find Security Bugs gets the same score on this one: 68%.

For the rest, C, H, and J, we’ve got perfect scores: a 100% True Positive Rate and a 0% False Positive Rate. Woo hoo!

Of course, saying we've got 100% on item C or 33% overall is only a reflection of how we're doing on those particular examples. We do better on some vulnerabilities and worse on others. Over time, I'm sure the benchmark will grow to cover more CWE items and cover in more depth the items it already touches on. As it does, we'll continue to test ourselves against it to see what we've missed and where our holes are. I'm sure our competitors will too, and we'll all get gradually better. That's good for everybody. But you won't be surprised if I say we'll stay on top of making sure SonarQube is always the best.

Categories: Open Source

Stop Doing Retrospectives

TV Agile - Wed, 11/11/2015 - 23:02
You have been doing Agile for a few years now. With a regular cadence you have retrospectives and a lot of problems and great improvement opportunities are raised but nothing seems to really improve. Stop doing retrospectives! It is time to take your improvement work to a whole new level! It’s time to shift your […]
Categories: Blogs

Principles Are More Important Than Practices

Scrum Expert - Wed, 11/11/2015 - 22:58
One might wonder why it’s not so easy to adopt agile engineering practices and to achieve technical excellence. When we think of practices, we tend to think of simple things: sticking a shopping list on the fridge with a magnet, having a clear and prioritized list of things to do, and doing them in that order. Why is it then that with Agile practices, things ...
Categories: Communities

Guidelines for Agile Development using Tracker

Pivotal Tracker Blog - Wed, 11/11/2015 - 22:02

When you’re bogged down working on your day-to-day chores and features, it can be tough to stay focused on the broader goal of smooth Agile development. And while we don’t pretend that we have all the solutions, Tracker is, after all, designed to facilitate precisely that goal. Luckily, the time we’ve spent getting deep down and dirty with Tracker has allowed us to suss out the better Agile practices from the . . . less better ones.

Here, then, are what we consider some leading practices for getting the most juice from the Tracker fruit, and for Agile success more broadly.


Tracker is not a replacement for communication.

Language is fluid and subject to interpretation, so try not to use a Tracker story as a stand-in for a conversation. Consider doing a design walkthrough of a story or project with support and testers, so they can help with different perspectives on usage and customer requests. Use techniques such as story mapping and example mapping to get a shared understanding of features and stories. You can add artifacts from these discussions, such as a photo of the whiteboard, to your Tracker story as a reminder.

Are your story comments stacking up? Get people together briefly to go over the issues, then update the story with the key points. Discussing issues in person can minimize misunderstandings that can too easily become a distraction or time suck.

The team that writes stories together succeeds together.

Whenever possible, customers and the delivery team should write stories together, because a story is both a customer business value and a deliverable. In this way, everyone’s interests and viewpoints can be shared and aligned.

Plan for success.

Conduct a regular iteration planning meeting so the team can review and estimate upcoming stories, as well as understand the value provided by each story. Develop estimates as a group, so everyone can be heard. To make the process lighter, you could play an estimation game. We do not suggest Settlers of Catan. Instead, try something more like Rock, Paper, Scissors. To estimate a given story, have each team member toss out fingers—in line with the estimation scale they’ve chosen—to indicate their suggestion for story complexity. Did everyone estimate the same? Great! If not, begin a discussion and estimate the story together.

Go small.

Create stories that are incremental and focused on the perspective of the user. So if you need to repair a brick wall, try to focus on the user’s interaction with a specific aspect of the wall, not the entire wall itself. The story, “Wall should be in good shape,” would be more useful as, “Passer-by should not see visible cracks in wall.” In general, a good guide is to keep stories small enough so that they can be completed in two or three days, including all testing.

Put another way: try to avoid large estimates.

At the same time, some stories will be grander in scope or more complicated despite your best intentions. Keep these rare: reserve large estimates for stories of genuinely unclear or enormous scope, and then break them down. An estimate of 8 (on the Fibonacci scale) is a cry for help. As a developer, you should ask for clarification and look for seams where the story can be split into multiple stories. Tip: use the clone story feature to reproduce it and break it into smaller, bite-sized stories.

Name a Tracker Czar.

Steering the ship while simultaneously fixing a leak is a challenge, to say the least. To that end, you should have a Tracker Czar, who shouldn’t also be coding in the project they own. Owning a project is a lot of responsibility, but it makes a huge difference.

The customer should prioritize stories.

While it’s true that anyone can create stories and put them in the Icebox, only the customer (or a PM acting on behalf of the customer) should prioritize them. As the business owner, part of a customer’s decision-making process is to decide which features have priority over others. In other words, the customer should be making the hard choices.


Turn chores into feature stories.

Turning chores into features reframes them as items of direct and verifiable value to both the end user and project goals. This could simply be a matter of rephrasing the story, or arguing more strenuously for its business value.

Accept and then move on.

Never restart an accepted story; instead, make a new story or bug. It’s cleaner, you can keep new information more focused, and it doesn’t detract from the work that’s already been done. You can always paste in the URL to the original story for context.

Reject with class.

Rejecting a story with both tact and clarity can be challenging, but there are some strategies to make it go more smoothly. If you’re not onboard with a given feature or story, prefix your comment with “reject:”—it’s easier to scan and figure out which comment is related to the rejection.

Don’t reject a story if it’s missing criteria, or if you’ve changed your mind.

After all, there could be more here than meets the eye. Again, have a conversation. Reassess what’s missing and make a new story; don’t just reject it without knowing all the details.

We encourage our product managers to provide specific acceptance criteria in the story descriptions, and testers will often add tasks to stories during an IPM or Design Meeting to identify potential issues to dive deeper on during testing. Good developers will take those testing notes into account while developing to reduce the number of rejection cycles for stories.

Move rejected stories to the top.

Location, location, location—it’s of paramount importance. Move a rejected story below currently started stories, adjacent to unstarted stories at the top of the Backlog. When developers look to see the next story to work on, they’ll see the rejected story as the next one to pick up.

Even if there is no single universal way to use Pivotal Tracker for Agile development, and though it can accommodate a variety of approaches, time and experience have shown us that some practices lead to better results. We hope these tips will help you use the app as efficiently and smartly as possible.

And as always, we'd love your comments. Send us your feedback, or respond in the comments below.


The post Guidelines for Agile Development using Tracker appeared first on Pivotal Tracker.

Categories: Companies
