
Feed aggregator

Video: SAFe in 8 Minutes – Part 1: Team

Agile Product Owner - Tue, 10/27/2015 - 01:26

A favorite Steve Jobs quote speaks to the idea that simplicity is harder than complexity. We get that. With a framework as robust as SAFe, distilling the essence of the underlying concepts into the shortest possible format is a challenge.

So when we stumble onto something that does a wonderful job of doing just that—boiling SAFe down to its essence—we want to tell the world about it. Our most recent discovery comes from Ole Jepsen, an SPC from GoAgile in Denmark. He has created “SAFe in 8 Minutes,” the first in a series of videos designed to explain the three levels of SAFe. It uses simple storytelling tools—cut-outs, fictional characters (someone named Peggy Oda wants to build a new house!), and drawings—to describe SAFe.

Many thanks to Ole for putting this together. I just think it’s really cool.

We’ll stay tuned for future installments!

Stay SAFe,

P.S. Look below the video for the Steve Jobs quote.

“When you start looking at a problem and it seems really simple, you don’t really understand the complexity of the problem. Then you get into the problem, and you see that it’s really complicated, and you come up with all these convoluted solutions. That’s sort of the middle, and that’s where most people stop. But the really great person will keep on going and find the key, the underlying principle of the problem—and come up with an elegant, really beautiful solution that works.”—Steve Jobs



Categories: Blogs

HBR:: Why Organizations Don't Learn

Agile Complexification Inverter - Mon, 10/26/2015 - 21:46
A nice article on HBR - "Why Organizations Don't Learn", by Francesca Gino and Bradley Staats; take a look. They list these reasons:

    • Fear of failure
    • Fixed mindset
    • Over-reliance on past performance
    • Attribution bias

    The authors then give some strategies for overcoming these barriers to learning. Many of these will be familiar to the agile community.

    See Also:
    Pitfalls of Agile Transformations by Mary Poppendieck
    Categories: Blogs

    Pair Programming for Remote Teams

    Scrum Expert - Mon, 10/26/2015 - 18:37
    Pair programming is one of the original practices of eXtreme Programming, but it is also one of the least used by Agile software development teams. In his blog post, Alisdair McDiarmid explains how pair programming is used with remote teams. His company has been growing rapidly and its teams work in different locations. Pairing has been used to allow people in different teams to work together virtually. ...
    Categories: Communities

    Learning about test automation with Lego

    Xebia Blog - Mon, 10/26/2015 - 16:44

    “Hold on, did you say that I can learn about test automation by playing with Lego? Shut up and take my money!” Yes, I am indeed saying that you can. It will cost you a couple of hundred euros, because Lego isn’t cheap, especially the Mindstorm EV3 Lego. It turns out that Lego robots eat a lot of AA batteries, so buy a couple of packs of those as well. On the software side you need a computer with a Java development environment and an IDE of your choice (the free edition of IntelliJ IDEA will do).

    “Okay, hold on a second. Why do you need Java? I thought Lego had its own programming language?” Yes, that’s true. Originally, Lego provides you with its own visual programming language. I mean, the audience for the EV3 is actually kids, but it will be our little secret. Because Lego is awesome, even for adults. Some hero made a Java library, LeJos, that can communicate with the EV3 hardware, so you can do more awesome stuff with it. Another hero dedicated a whole website to his Mindstorm projects, including instructions on how to build them.

    Starting the project

    So, on a sunny innovation day in August at Xebia, Erik Zeedijk and I started our own Lego project. The goal was to make something cool and relevant for Testworks Conf. We decided to go for The Ultimate Machine, also known as The Most Useless Machine.  It took us about three hours to assemble the Lego. If you’re not familiar with the Useless Machine, check this video below. 

    Somehow, we had to combine Lego with test automation. We decided to use the Cucumber framework and write acceptance tests in it. That way, we could also use that to figure out what options we wanted to give the machine (sort of a requirements phase...what did I just say!?). The Ultimate Machine can do more than just turn off the switch, as you could tell if you watched the above video. It can detect when a hand is hovering above the switch and that can trigger all kinds of actions: driving away to trick the human, hiding the switch to trick the human, etc. With Acceptance Test Driven Development, we could write out all these actions in tests and use those tests to drive our coding. In that sense, we were also using Test Driven Development. In the picture below is an example of a Cucumber feature file that we used.


    The idea sounded really simple, but executing it was a bit harder. We made a conceptual mistake at first. To run our tests, we first coded them in a way that still required a human (someone who turned the switch on). Also, the tests were testing the Lego hardware too (the sensors) and not our own code. We noticed that the Lego hardware has quite a few bugs in it. Some of the sensors aren’t really accurate in the values they return. After some frustration and thinking, we found a way to solve our problem. In the end, the solution is pretty elegant, and in retrospect I face-palm because of my own inability to see it earlier.

    We had to mock the Lego hardware (the infrared sensor and the motors), because it was unreliable and we wanted to test our own code. We also had to mock the human out of the tests. This meant that we didn’t even need the Lego robot anymore to run our tests. We decided to use Mockito for our mock setup. In the end, the setup looked like this. 

    robot setup

    The LeJos Java library uses a couple of concepts that are important to grasp. An arbitrator decides which behavior should run. All the behaviors are put in a behaviorList. Inside each behavior is a boolean wantControl that becomes 'true' when certain conditions arise. See the picture below for an example 'wantControl' in the DriveBehavior class.


    Then the behavior starts to run, and when it is finished it returns 'idle = true'. The arbitrator then picks a new behavior to run. Because some behaviors had the same conditions for 'wantControl', we had to think of a way to prevent the same behavior from triggering all the time. In each behavior we put a boolean chanceOfBehaviorHappening and assigned a chance to it. After a bit of tweaking we had the robot running the way we liked it.

    The tests were reliable after this refactoring and super fast. The test code was neatly separated from the code that implemented the robot’s behaviour. In addition, you could start the real Lego robot and play with it. This is a picture of our finished robot. 

    Lego Ultimate Machine

    We purposely didn’t implement all the behaviors we identified, because our goal was to get attendees of TestWorks Conf to code for our robot. This little project has taught both Erik and me more about writing good Cucumber feature files, TDD and programming. Neither of us is really a Java expert, so this project was a great way of learning for both of us. I certainly improved my understanding of Object Oriented programming. But even if you are a seasoned programmer, this project could be nice to increase your understanding of Cucumber, TDD or ATDD. So, convince your boss to shell out a couple of hundred euros to buy this robot for you, start learning, and have fun.

    FYI: I will take the robot with me to the Agile Testing Days, so if you are reading this and going there too, look me up and have a go at coding.

    Categories: Companies

    Why Java keeps plugging along

    Indefinite Articles - John Brothers - Mon, 10/26/2015 - 16:39

    When I was a young programmer, COBOL was the primary “enterprise” business language. It had a few advantages: everyone knew it, hardware supported it, libraries extended it, no one got fired for using it.

    There were several other languages out there, that were used for various projects – FORTRAN, C, Pascal, Ada , to name a few big ones.

    But it didn’t matter for “enterprise” software.

    In the mid 90s, two things “exploded” – the Internet (because of the World Wide Web) and Java. Over the next decade or so, Java essentially took over as the “enterprise” software leader. Other languages had tried and failed. Java succeeded almost despite itself – the EJB constructs were incredibly clumsy and overly-complicated, it wasn’t super fast, and the language was (and is) often clunky.

    But Java came with the Internet in its DNA – and as the Internet exploded, COBOL was simply not equipped to keep up.

    Java succeeded because it rode the coattails of the “next big thing” all the way to glory. COBOL simply wasn’t the safe choice anymore, because it was clearly obsolete.

    Over time, Java became the safe choice, the one with the libraries that everyone knew.

    When you talk about a language displacing Java as the dominant “enterprise” language, you have to have the second part – you need a major upheaval that makes the world realize that Java isn’t the safe choice anymore.

    So the key isn’t the features of your language. It’s finding the things that the language can do that Java can’t, and in such a way that it’s obvious Java won’t be able to do it for a long time.

    I know Java, I make a living knowing Java, but I also know that it won’t last forever.   But you’re probably not going to replace it anytime soon.

    Categories: Blogs

    Lean-UX & Enterprise BDD in Methods & Tools Fall 2015 issue - Mon, 10/26/2015 - 16:32
    Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Fall 2015 issue that discusses Ethnographic Approach to Software, Emotional Testing, Lean-UX and Enterprise-Scale BDD.
    Categories: Communities

    Red, Yellow, Green or RYG/RAG Reports: How They Hide the Truth

    Notes from a Tool User - Mark Levison - Mon, 10/26/2015 - 15:28
    What is Red Yellow Green?

    Red-Yellow-Green (or Red/Amber/Green) is a status reporting mechanism used to help executives understand the current state of a project. Green means everything is good; we’re on track both with time and budget. In some cases, green means within 5% of budget. Yellow means there is some risk that there are scope or time problems, but with sufficient re-planning we can come back to target. Yellow is usually measured by some number crossing a predetermined threshold. Red indicates the project is in serious trouble.


    RYG is just a model of reality (in truth, all reporting simply models reality). All models hide details with the goal of making the situation easier to read and understand. Some are hidden so much that the truth gets lost.

    A team’s Task Board, Scrum Wall, or the overall Portfolio Kanban Wall are all first-order models of reality. While not perfect mirrors of the current state of the work, they’re fairly close. They help demonstrate qualitative information, i.e. which stories are complete, not just how much work remains.

    A Release Burndown Chart, Burnup Chart or Cumulative Flow Diagram are all second-order models of reality. In other words, they summarize information contained on the Walls/Task Boards. They’re useful because they help spot trends. The Cumulative Flow Diagram provides the most information on trends, but it still has less information on it than the Portfolio Kanban.

    Red Yellow Green Reports are models of the charts, in that they summarize the charts, hiding even more informational details. They assume that a plan’s scope, budget, and time are fixed and unchanging. Models might be adequate in a world where these are true, but in a world that accepts change as the norm, colour-coded reports are dangerous.

    At best, these reports don’t measure what is truly done so much as whether a team is on track to make a target. In addition, since RYG reports usually have green as a default state, too few questions are asked until it’s too late.


    Even burndowns, burnups, and cumulative flow diagrams are imperfect because they focus on output (# of Stories or story points achieved) and not outcomes (the right features delivered to the customer.)

    Charts of all forms appear to promise certainty when there isn’t any, which creates a false sense of security. If we’re going to chart at all, we should use forecasts and not precise lines, and make it clear this is a forecast. If your chart just shows the average velocity, then you should provide a forecast that says we have a 50% chance of achieving this rate for the foreseeable future.

    Red Amber Green - number of stories remaining chart

    If you want to get more sophisticated, you can track error bars as well. Take the Velocity of your best three and worst three Sprints in the last six months, and use these as your 20% and 80% confidence bars. Then you can say that you’re 80% confident you will do at least as well as your worst three sprints, and 20% confident that you will do as well as your best three sprints.
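As a rough sketch of that calculation (the velocity numbers here are made up purely for illustration):

```javascript
// Hypothetical velocities for the sprints of the last six months
const velocities = [21, 34, 18, 27, 30, 22, 25, 19, 33, 28, 24, 31];

// Sort ascending, then take the worst and best three sprints
const sorted = [...velocities].sort((a, b) => a - b);
const worstThree = sorted.slice(0, 3);
const bestThree = sorted.slice(-3);

const avg = xs => xs.reduce((sum, x) => sum + x, 0) / xs.length;

console.log(avg(worstThree)); // 80% confidence bar: "at least this well"
console.log(avg(bestThree));  // 20% confidence bar: "as well as our best"
```

The same numbers could come straight from your tracking tool; the point is only that the two bars bracket a range of outcomes rather than promising a single line.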

    Red Amber Green - confidence line graph

    Even this model has a weakness in that it assumes your velocity follows a normal distribution, and in reality it likely doesn’t. However, forecasting with error bars (or lines) usually gets the idea across that we’re forecasting a range of possible outcomes.

    Genchi Genbutsu

    Finally, all reports discourage people from leaving their offices. They give us a false feeling of safety: the report seems real, so we assume it is a good model of what is actually happening. Toyota has the practice of Genchi Genbutsu – literally, go and see. Review the charts, but then leave your office and go to the place of the work. Watch, listen, ask, and review the Portfolio Kanban wall with the team(s). This will bring you back in contact with the reality of which features have been truly done. This will help you see if all the Story Points in the charts delivered the value that they claimed.


    So stop using RYG reports. They hide too much information. Do use Burndowns/Burnups/Cumulative Flow diagrams as a tool to help you spot trends, but don’t rely on them alone as the source of truth. And most importantly, review the Portfolio Kanban Wall with the Team(s) on a regular basis, as this is our best measure of reality.

    Others have taken this up as well:

    Categories: Blogs

    Don’t Return A JSON Document From The toJSON Method

    Derick Bailey - new ThoughtStream - Mon, 10/26/2015 - 13:30

    As the birthplace of the JSON standard, JavaScript has an advantage in creating JSON documents. A call to the built-in JSON.stringify method will return a proper document from nearly any object. Customizing the output is also fairly straightforward, through the use of a `.toJSON` method on the target object.
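For example, a minimal sketch of serializing a plain object (the object here is just a placeholder):

```javascript
// No special methods are needed; stringify handles plain objects as-is
const point = { x: 1, y: 2 };
const doc = JSON.stringify(point);

console.log(doc); // → '{"x":1,"y":2}'
```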

    But there’s a catch to the `.toJSON` method – in spite of its name, you should not return a JSON string from it.

    Exit not an exit

    It’s a bit confusing… but then, much of JavaScript is.

    Defining The .toJSON Method

    The .toJSON method is not part of the JSON standard. Rather, it is part of the ECMAScript (JavaScript) standard, in support of JSON.

    The definition of toJSON resides inside of the specification for JSON.stringify in ECMAScript 5.1 and ECMAScript 6. But, looking at the technical definitions does not make it easy to understand the intent or purpose.

    Instead, it helps to look at the work of Douglas Crockford – the creator of the JSON standard. In his json2 polyfill for older browsers (prior to the JSON object / behaviors being a standard part of JavaScript), Crockford describes the toJSON method:

    When an object value is found, if the object contains a toJSON method, its toJSON method will be called and the result will be stringified. A toJSON method does not serialize: it returns the value represented by the name/value pair that should be serialized, or undefined if nothing should be serialized. The toJSON method will be passed the key associated with the value, and this will be bound to the value.

    This description is easier to understand than a pure technical specification, providing a very important point about the toJSON method: “A toJSON method does not serialize: it returns the value […] that should be serialized.”

    Returning An Object To Serialize

    The JSON.stringify method will work on nearly any object, as-is – no special methods or features needed for that object. If you want to customize the output of the stringify method, though, you can provide a toJSON method on the target object.

    The return value for the toJSON method – as mentioned above – should be the value to serialize into the final JSON document.

    This means you don’t need to manually convert an object into a proper JSON document string. Instead, you can return an object or any other valid JSON data type, and the stringify process will serialize the result for you.

    For example, say you have a “user” object with three attributes:

    • firstName
    • lastName
    • fullName

    Maybe you’re using a getter for the fullName attribute, to concatenate firstName and lastName at runtime:

    With this getter, calling JSON.stringify on the user object will return all three attributes as a JSON string:
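The original post showed these snippets as screenshots, which are missing here; a sketch of what the getter and its output might look like (the names are placeholders):

```javascript
// A hypothetical user object; fullName is a getter that concatenates
// firstName and lastName at runtime
const user = {
  firstName: "Ada",
  lastName: "Lovelace",
  get fullName() {
    return this.firstName + " " + this.lastName;
  }
};

// The getter is an enumerable property, so stringify picks it up too
const json = JSON.stringify(user);
console.log(json);
// → '{"firstName":"Ada","lastName":"Lovelace","fullName":"Ada Lovelace"}'
```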

    If you need to exclude the fullName, you can do this in a number of different ways, including a custom toJSON method:
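Again, the original snippet was an image; a sketch of such a toJSON method (the names are placeholders):

```javascript
const user = {
  firstName: "Ada",
  lastName: "Lovelace",
  get fullName() {
    return this.firstName + " " + this.lastName;
  },
  toJSON() {
    // return the *value* to serialize - not a JSON string
    return {
      firstName: this.firstName,
      lastName: this.lastName
    };
  }
};

console.log(JSON.stringify(user));
// → '{"firstName":"Ada","lastName":"Lovelace"}'
```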

    (Note: There are many ways to modify the output of JSON.stringify with an object, including a “replacer” method / parameter of JSON.stringify, that I’ll cover in another post.)

    In this toJSON method implementation, the return value is not a string containing a JSON document. Instead, it is an object literal – a set of key/values that will be serialized into the JSON document for you.

    In the same way that you can add fields to be serialized, you can also exclude fields from serialization. Don’t want that firstName attribute in the JSON document? Remove it from the object that toJSON returns. The toJSON method isn’t limited to returning object literals, either. Any valid JSON data type / value can be returned, meaning the possibilities for customizing the JSON document for an object are nearly endless.

    But what would happen if you returned a JSON document from the toJSON method?

    toJSON Returning JSON?

    It’s a common mistake and one that has caused countless hours of headaches and heartache for developers around the world. If you return a valid JSON string from the toJSON method, JSON.stringify will attempt to serialize it again, leaving you with a document like this:

    instead of a proper JSON document that looks like this:
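The two example documents were shown as images in the original post; a sketch that reproduces the effect (the names are placeholders):

```javascript
const user = {
  firstName: "Ada",
  lastName: "Lovelace",
  toJSON() {
    // WRONG: returning an already-stringified JSON document
    return JSON.stringify({
      firstName: this.firstName,
      lastName: this.lastName
    });
  }
};

const doubled = JSON.stringify(user);
console.log(doubled);
// → '"{\"firstName\":\"Ada\",\"lastName\":\"Lovelace\"}"'
// whereas returning the object itself would have produced:
// '{"firstName":"Ada","lastName":"Lovelace"}'
```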

    Do you see the difference, here?

    The addition of \ near all of the quotes is an indication of JSON.stringify serializing a string, instead of an object. This is done to escape the quotes that were present in the string already.

    Technically, this is valid JSON. However, running JSON.parse to turn the JSON document back into a JavaScript value will not return the expected result.

    Deserializing Dual-Serialized JSON

    To deserialize a JSON document into a JavaScript object (or other value), call JSON.parse and pass in the JSON document as the first parameter.

    The result, when working with a proper JSON document, will be a JavaScript object.
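A minimal sketch (the document contents are placeholders):

```javascript
const doc = '{"firstName":"Ada","lastName":"Lovelace"}';
const user = JSON.parse(doc);

console.log(typeof user);    // → "object"
console.log(user.firstName); // → "Ada"
```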

    If you return a JSON string from toJSON, however, you will end up with a second layer of serialization – the \ escape characters on the quotes, inside of the string value, as shown before:

    When you attempt to deserialize this document, JSON.parse will turn the dual-stringified and escaped JSON into a string literal – a proper JSON document that is not escaped, as shown previously:

    To get this back to a usable JavaScript object, you would have to call JSON.parse a second time:
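A sketch of the double round-trip (the data is a placeholder):

```javascript
// Simulate the dual serialization caused by a bad toJSON
const doubled = JSON.stringify(JSON.stringify({ firstName: "Ada" }));

const once = JSON.parse(doubled); // still a string - the inner JSON document
const twice = JSON.parse(once);   // now a usable object

console.log(typeof once);     // → "string"
console.log(twice.firstName); // → "Ada"
```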

    All of this headache and dual-parsing was caused by returning JSON from the toJSON method, instead of returning an object or value to be serialized.

    Don’t Return JSON From toJSON

    The toJSON method is a part of the JavaScript (ECMAScript 5.1+) standard that is often used incorrectly. The name unfortunately implies something that the method should not do – a conversion to a JSON document. The specification, in spite of the name, tells you to return an object or value that will be serialized into a JSON document.

    Making the mistake of returning a JSON document from the toJSON method seems to be a rite of passage for JavaScript developers. But it doesn’t have to be this way. Take some time to educate yourself and your teammates on the toJSON standard, how it should be used, and how the JSON stringify / parse methods interact with it and its results.

    Special thanks to Kyle Simpson for suggesting this blog post and for reviewing it!

    Categories: Blogs

    Model updates in Presentation Controls

    Xebia Blog - Mon, 10/26/2015 - 12:15

    In this post I'll explain how to deal with updates when you're using Presentation Controls in iOS. It's a continuation of my previous post in which I described how you can use Presentation Controls instead of MVVM or in combination with MVVM.

    The previous post didn't deal with any updates. But most often the things displayed on screen can change. This can happen because new data is fetched from a server, through user interaction or maybe automatically over time. To make that work, we need to inform our Presentation Controls of any updates of our model objects.

    Let's use the Trip from the previous post again:

    struct Trip {
        let departure: NSDate
        let arrival: NSDate
        let duration: NSTimeInterval

        var actualDeparture: NSDate
        var delay: NSTimeInterval {
            return self.actualDeparture.timeIntervalSinceDate(self.departure)
        }
        var delayed: Bool {
            return delay > 0
        }

        init(departure: NSDate, arrival: NSDate, actualDeparture: NSDate? = nil) {
            self.departure = departure
            self.arrival = arrival
            self.actualDeparture = actualDeparture ?? departure

            // calculations
            duration = self.arrival.timeIntervalSinceDate(self.departure)
        }
    }

    Instead of calculating and setting the delay and delayed properties in the init we changed them into computed properties. That's because we'll change the value of the actualDeparture property in the next examples and want to display the new value of the delay property as well.

    So how do we get notified of changes within Trip? A nice approach to do that is through binding. You could use ReactiveCocoa to do that but to keep things simple in this post I'll use a class Dynamic that was introduced in a post about Bindings, Generics, Swift and MVVM by Srdan Rasic (many things in my post are inspired by the things he writes so make sure to read his great post). The Dynamic looks as follows:

    class Dynamic<T> {
      typealias Listener = T -> Void
      var listener: Listener?

      func bind(listener: Listener?) {
        self.listener = listener
      }

      func bindAndFire(listener: Listener?) {
        self.listener = listener
        listener?(value)
      }

      var value: T {
        didSet {
          listener?(value)
        }
      }

      init(_ v: T) {
        value = v
      }
    }

    This allows us to register a listener which is informed of any change of the value. A quick example of its usage:

    let delay = Dynamic("+5 minutes")
    delay.bindAndFire {
        print("Delay: \($0)")
    }

    delay.value = "+6 minutes" // will print 'Delay: +6 minutes'

    Our Presentation Control was using a TripViewViewModel class to get all the values that it had to display in our view. These properties were all simple constants with types such as String and Bool that would never change. We can replace the properties that can change with a Dynamic property.

    In reality we would probably make all properties dynamic and fetch a new Trip from our server and use that to set all the values of all Dynamic properties, but in our example we'll only change the actualDeparture of the Trip and create dynamic properties for the delay and delayed properties. This will allow you to see exactly what is happening later on.

    Our new TripViewViewModel now looks like this:

    class TripViewViewModel {
        let date: String
        let departure: String
        let arrival: String
        let duration: String

        private static let durationShortFormatter: NSDateComponentsFormatter = {
            let durationFormatter = NSDateComponentsFormatter()
            durationFormatter.allowedUnits = [.Hour, .Minute]
            durationFormatter.unitsStyle = .Short
            return durationFormatter
        }()

        private static let durationFullFormatter: NSDateComponentsFormatter = {
            let durationFormatter = NSDateComponentsFormatter()
            durationFormatter.allowedUnits = [.Hour, .Minute]
            durationFormatter.unitsStyle = .Full
            return durationFormatter
        }()

        let delay: Dynamic<String?>
        let delayed: Dynamic<Bool>

        var trip: Trip

        init(_ trip: Trip) {
            self.trip = trip
            date = NSDateFormatter.localizedStringFromDate(trip.departure, dateStyle: .ShortStyle, timeStyle: .NoStyle)
            departure = NSDateFormatter.localizedStringFromDate(trip.departure, dateStyle: .NoStyle, timeStyle: .ShortStyle)
            arrival = NSDateFormatter.localizedStringFromDate(trip.arrival, dateStyle: .NoStyle, timeStyle: .ShortStyle)
            duration = TripViewViewModel.durationShortFormatter.stringFromTimeInterval(trip.duration)!
            delay = Dynamic(trip.delayString)
            delayed = Dynamic(trip.delayed)
        }

        func changeActualDeparture(delta: NSTimeInterval) {
            trip.actualDeparture = NSDate(timeInterval: delta, sinceDate: trip.actualDeparture)
            self.delay.value = trip.delayString
            self.delayed.value = trip.delayed
        }
    }

    extension Trip {
        private var delayString: String? {
            return delayed ? String.localizedStringWithFormat(NSLocalizedString("%@ delay", comment: "Show the delay"), TripViewViewModel.durationFullFormatter.stringFromTimeInterval(delay)!) : nil
        }
    }

    Using the changeActualDeparture method we can increase or decrease the time of trip.actualDeparture. Since the delay and delayed properties on trip are now computed properties, their returned values will be updated as well. We use them to set new values on the Dynamic delay and delayed properties of our TripViewViewModel. Also, the logic to format the delay String has been moved into an extension on Trip to avoid code duplication.

    All we have to do now to get this working again is to create bindings in the TripPresentationControl:

    class TripPresentationControl: NSObject {
        @IBOutlet weak var dateLabel: UILabel!
        @IBOutlet weak var departureTimeLabel: UILabel!
        @IBOutlet weak var arrivalTimeLabel: UILabel!
        @IBOutlet weak var durationLabel: UILabel!
        @IBOutlet weak var delayLabel: UILabel!

        var tripModel: TripViewViewModel! {
            didSet {
                dateLabel.text = tripModel.date
                departureTimeLabel.text = tripModel.departure
                arrivalTimeLabel.text = tripModel.arrival
                durationLabel.text = tripModel.duration
                tripModel.delay.bindAndFire { [unowned self] in
                    self.delayLabel.text = $0
                }
                tripModel.delayed.bindAndFire { [unowned self] delayed in
                    self.delayLabel.hidden = !delayed
                    self.departureTimeLabel.textColor = delayed ? .redColor() : UIColor(red: 0, green: 0, blue: 0.4, alpha: 1.0)
                }
            }
        }
    }

    Even though everything compiles again, we're not done yet. We still need a way to change the delay. We'll do that through some simple user interaction and add two buttons to our view. One to increase the delay with one minute and one to decrease it. Handling of the button taps goes into the normal view controller since we don't want to make our Presentation Control responsible for user interaction. Our final view controller now looks like as follows:

    class ViewController: UIViewController {
        @IBOutlet var tripPresentationControl: TripPresentationControl!

        let tripModel = TripViewViewModel(Trip(departure: NSDate(timeIntervalSince1970: 1444396193), arrival: NSDate(timeIntervalSince1970: 1444397193), actualDeparture: NSDate(timeIntervalSince1970: 1444396493)))

        override func viewDidLoad() {
            super.viewDidLoad()
            tripPresentationControl.tripModel = tripModel
        }

        @IBAction func increaseDelay(sender: AnyObject) {
            tripModel.changeActualDeparture(60)
        }

        @IBAction func decreaseDelay(sender: AnyObject) {
            tripModel.changeActualDeparture(-60)
        }
    }

    We now have an elegant way of updating the view when we tap the button. Our view controller communicates a logical change of the model to the TripViewViewModel, which in turn notifies the TripPresentationControl about a change of data, which in turn updates the UI. This way the Presentation Control doesn't need to know anything about user interaction, and our view controller doesn't need to know which UI components it needs to change after user interaction.

    And the result:

    Hopefully this post has given you a better understanding of how to use Presentation Controls and MVVM. As I mentioned in my previous post, I recommend reading Introduction to MVVM by Ash Furrow and From MVC to MVVM in Swift by Srdan Rasic, as well as his follow-up post mentioned at the beginning of this post.

    And of course make sure to join the do {iOS} Conference in Amsterdam the 9th of November, 2015. Here Natasha "the Robot" Murashev will be giving a talk about Protocol-oriented MVVM.

    Categories: Companies

    Two Types of Learning Require Two Types of Coaching

    NetObjectives - Mon, 10/26/2015 - 07:25
    "There are three kinds of men. The ones that learn by readin'. The few who learn by observation. The rest of them have to pee on the electric fence for themselves." – Will Rogers

    There are two types of learning - learning how to do something and learning to avoid something. In 16 years in the Agile community I have seen a persistence of bad habits and newbie errors. Although many in the...

    [[ This is a content summary only. Visit my website for full links, other content, and more! ]]
    Categories: Companies

    Agile is just throwing stuff together as quickly as possible?

    Is Agile just throwing stuff together as quickly as possible?

    The older version of this was some variant of "Extreme Programming is just hacking" or "Extreme Programming is just cowboy coding".

    In essence, the suggestion is that Agile is equivalent to "Code and Fix" or "Cowboy Coding".

    Kent Beck describes the heartbeat of an Extreme Programming episode in response to the "Why is XP not just hacking?" question.  Paraphrasing for length...
    1. Pair writes next automated test case to force design decisions for new logic independent of implementation.
    2. Run test case to verify failure or explore unexpected success.
    3. Refactor existing code to enable a clean and simple implementation. Also known as "situated design".
    4. Make the test case work.
    5. Refactor new code in response to new opportunities for simplification.
    Does this reasonably sound equivalent to "throw stuff together as quickly as possible"?

    Granted, not every Agile team has this kind of technical discipline.  Hence, so-called Flaccid Scrum and the advocacy of Two-Star Agile fluency.

    Also, granted, sometimes one should throw stuff together quickly when the purpose of the exercise is to test an experimental concept. For example, "spiking a solution" or an initial MVP.
    Categories: Blogs

    Keeping an eye on your Amazon EC2 firewall rules

    Xebia Blog - Sun, 10/25/2015 - 15:56

Amazon AWS makes it really easy for anybody to create and update firewall rules that provide access to the virtual machines inside AWS. Within seconds you can add your own IP address so you can work from home or the office. However, it is also very easy to forget to remove these rules once you are finished. The utility aws-sg-revoker will help you maintain your firewall rules.

aws-sg-revoker inspects all your inbound access permissions and compares them with the public IP addresses of the machines in your AWS account. For grants to IP addresses not found in your account, it will generate an aws CLI revoke command. But do not be afraid: it only generates the command, it does not execute it directly. You may want to investigate before removal. Follow these four steps to safeguard your account!

    step 1. Investigate

    First run the following command to generate a list of all the IP address ranges that are referenced but not in your account.

    aws-sg-revoker -l x.y.z. a.b.c.

    You may find that you have to install jq and the aws CLI :-)

    step 2. Exclude known addresses

Exclude the IP addresses that are OK. These addresses are added as regular expressions.

aws-sg-revoker -l -w 1\.2\.3\.4 -w 8\.9\.10\.11/16
step 3. Generate revoke commands

    Once you are happy, you can generate the revoke commands:

aws-sg-revoker -w 1\.2\.3\.4 -w 4\.5\.6\.7 -w 8\.9\.10\.11/16
    aws ec2 revoke-security-group-ingress --group-id sg-aaaaaaaa --port 22-22 --protocol tcp --cidr # revoke from sg blablbsdf
    aws ec2 revoke-security-group-ingress --group-id sg-aaaaaaaa --port 9200-9200 --protocol tcp --cidr # revoke from sg blablbsdf
    aws ec2 revoke-security-group-ingress --group-id sg-aaaaaaaa --port 9080-9080 --protocol tcp --cidr # revoke from sg blablbsdf
    aws ec2 revoke-security-group-ingress --group-id sg-bbbbbbbb --protocol -1 --cidr # revoke from sg sg-1
aws ec2 revoke-security-group-ingress --group-id sg-bbbbbbbb --protocol -1 --cidr # revoke from sg sg-3
    step 4. Execute!

    If the revokes look ok, you can execute them by piping them to a shell:

    aws-sg-revoker -w 1\.2\.\3\.4 -w 8\.9\.10\.11/16 | tee revoked.log | bash

This utility makes it easy for you to regularly inspect and maintain your firewall rules and keep your AWS resources safe!
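The comparison the tool performs can be approximated in pure Python. This is an illustrative sketch with made-up sample data, not the tool's actual implementation (which shells out to the aws CLI and jq):

```python
import ipaddress

# Hypothetical sample data: CIDRs granted in security groups, and the
# public IPs of the instances actually present in the account.
granted_cidrs = ["203.0.113.7/32", "198.51.100.0/24", "192.0.2.10/32"]
account_ips = ["203.0.113.7", "192.0.2.99"]

def stale_grants(grants, known_ips):
    """Return the grants that cover none of the account's own IPs."""
    known = [ipaddress.ip_address(ip) for ip in known_ips]
    stale = []
    for cidr in grants:
        net = ipaddress.ip_network(cidr)
        if not any(ip in net for ip in known):
            stale.append(cidr)
    return stale

for cidr in stale_grants(granted_cidrs, account_ips):
    # The real tool would emit an 'aws ec2 revoke-security-group-ingress'
    # command here for you to review before running.
    print(f"candidate for revocation: {cidr}")
```

Here `203.0.113.7/32` would be kept because it matches an instance in the account, while the other two grants would be flagged for review.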

    Categories: Companies

    Question: Product Owner and Technical Debt

    Learn more about transforming people, process and culture with the Real Agility Program

    Question from Meredith:

    As a product owner, what are the best ways to record technical debt and what are some approaches to prioritizing that work amid the continuous delivery of working software?


    Hi Meredith! This is an interesting question. I’ll start by answering the second part of your question first.  The two most common ways of handling technical debt, quality debt and legacy debt are:

    1. Fix as you go. The Scrum Team works on new PBIs every Sprint, but every time a PBI touches a technical, quality or legacy debt area, the team fixes “just enough” to make the PBI implementation have no debt.  This means that refactoring and the creation of automated tests (usually through TDD) are done on the parts of the product/system that have the problems.
    2. Allocate a percentage. In this scenario, the Scrum Team reduces its velocity (sometimes significantly) to allow for time to deal with the technical, quality and legacy issues. This reduction could be adjusted every Sprint, but is usually consistent for several Sprints in a row.
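For the "allocate a percentage" approach, the capacity arithmetic is simple; here is a minimal sketch (the function name and all figures are invented for illustration):

```python
def sprint_allocation(velocity, debt_percentage):
    """Split a sprint's capacity between debt work and new features."""
    debt_points = round(velocity * debt_percentage / 100)
    return {"debt": debt_points, "features": velocity - debt_points}

# A team with a velocity of 40 points reserving 20% for debt work:
print(sprint_allocation(40, 20))  # {'debt': 8, 'features': 32}
```

The percentage can be revisited each Sprint, but as noted above it usually stays fixed for several Sprints in a row.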

    In both approaches, the business is paying for the debt accumulated, and the cost includes an “interest” fee.  In other words, the sooner you fix technical, quality and legacy debt, the less it costs.  This approach to thinking about your product/system is essential for long-term sustainability.  One organization I worked with took three years working on their system to clean it up without being able to add any new features!  Don’t let your system get to that point.

    Now to the first part of your question…

    As a Product Owner, you shouldn’t really be making decisions about this cleanup work. Your authority is limited to the Product Backlog which should not include technical items. The only grey area here is with defects which may be hard to classify as either fully business or fully technical. But technical design, duplication of code, technical defects, and legacy code all are under the full authority of the Scrum Development Team. Practically, this means that every Sprint the team has the authority to choose however few PBIs they feel they can take while considering the technical state of the product/system.  We trust and respect the team to make wise decisions.

    Therefore, your main job as a Product Owner is to provide the team with as much information as possible about the business consequences of the work they are doing.  With strong communication and collaboration about this aspect of their work, the technical members of your team can make good trade-off decisions, and balance the need for new features with the need to clean up previous compromises in quality.

    A final note: in order for this to work well, it is critical that the team not be pushed to further sacrifice quality and that they are given the support to learn the techniques and skills to create debt-free code.  (You might consider sending someone to our CSD training to learn these techniques and skills.)

Using these techniques, I have been able to help teams get very close to defect-free software deliveries (defect rates of 1 or 2 in production per year!).

    Let me know in the comments if you would like any further clarification.

Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

    The post Question: Product Owner and Technical Debt appeared first on Agile Advice.

    Categories: Blogs

    Startupbucks (aka Starbucks) the OaaS Innovation Incubator

    As I was momentarily pausing from working on a presentation, I leaned back and noticed that there was quite a bit of “business” occurring in my local Starbucks.  There were a lot of enthusiastic conversations with a collection of open laptops, notebooks, tablets, and more being used to share, illustrate, and note ideas.  This wasn't the first time I’d seen this same level of engagement in a Starbucks. 
Then something occurred to me.  I was witnessing an incubator of business, non-profit, and personal ideas being generated and matured.  Of course some people are there for just a coffee and a chair to relax.  Others are there for a quick pick-me-up for the day or the remainder of the day.  But it was also clear that Starbucks was a place where business was happening, where new ideas were being generated, where collaboration was happening, and where progress was being made.  It was an incubator of innovation!

That’s when it occurred to me: Starbucks is part coffee shop and part office-as-a-service (OaaS) incubator.  I have therefore redubbed Starbucks as Startupbucks!  The first part of the name (Startup) is for a place where new ideas come to life and blossom surrounded by the aroma of coffee beans.  The second part of the name (bucks) is, coincidentally, where the hope of financial reward is part of the innovation dream.

I wonder how much business has been conducted in Startupbucks and how much money this translates into?  Next time you are in Starbucks, take a look around.  Are you seeing the same thing that I am?  Finally, thank you Starbucks and the many coffee shops like you for providing us this OaaS that makes a great incubator of ideas and haven for progress!
    Categories: Blogs

    That’s what learning feels like

    Thought Nursery - Jeffrey Fredrick - Fri, 10/23/2015 - 07:55

    None of us like to be wrong. I’ve tested this with many audiences, asking them “how does it feel when you’re wrong?” “Embarrassing”, “humiliating” or simply “bad” are among the most common answers. Stop now and try and think of your own list of words to describe the feeling of being wrong.

    These common and universally negative answers are great from a teaching perspective, because they are answers to the wrong question. “Bad” isn’t how you feel when you’re wrong; it’s how it feels when you discover you were wrong! Being wrong feels exactly like being right. This question and this insight come from Kathryn Schulz’s TED Talk, On being wrong. Schulz talks about the “internal sense of rightness” we feel, and the problems that result. I think there’s a puzzle here: we’ve all had the experience of being certain while also being wrong. If the results are “embarrassing”, why do we continue to trust our internal feeling of certainty?

    My answer comes from Thinking Fast & Slow. That sense of certainty comes from our System 1, the fast, intuitive, pattern recognition part of our brain. We operate most of our lives listening to System 1. It is what allows us to brush our teeth, cross a street, navigate our way through a dinner party. It is the first filter for everything we see and hear. It is how we make sense of the world. We trust our sense of certainty because System 1 is the origin of most of our impulses and actions. If we couldn’t trust System 1, if we had to double check everything with the slow expensive analytical System 2, we would be paralyzed. So we need our System 1 and we need the sense of certainty it provides. We also need to be aware it can lead us astray.

When our sense of being right guides us we are acting from a Model I / Unilateral Control mindset. The result of a Unilateral Control mindset is less information, reduced trust and fewer opportunities to learn. And we all like to learn, right? I now ask my audiences this question and I get universal nods. We all like to learn. “No you don’t”, I reply. “You just told me that the feeling of becoming aware you were wrong feels bad! Well guess what? That’s what learning feels like.” That’s my recent ah-ha moment: that we claim we like to learn, but when it actually comes to learning, to correcting a wrong belief with a right one, we don’t like it.

I find this discrepancy very interesting, very revealing. My theory is that when we imagine learning we are thinking of writing on a blank slate. It is about learning facts where before there were none. That is a good feeling; we get a little chemical kick from our brain when that happens. We don’t imagine correcting our mistaken beliefs when we think of learning, and that’s a real shame; that should change. By all rights we should value that kind of learning even more than learning new facts: “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.” (Will Rogers)

    I think the problem is that we are primates. To primates, from an evolutionary psychology standpoint, status is everything. Status is the primary determinant of reproductive success. Losing status can be the same as a reproductive, evolutionary death sentence. In our modern knowledge economy, chest thumping is the assertion that we are right, and winning the fight is proving the other person wrong. That’s how we put them in their place (in the status hierarchy). This means our instinctive reaction to becoming less wrong tends to be negative. The loss of status feels too high a price to pay for learning. Even trying to help someone else become less wrong is understood as a risky prospect. We don’t want them to lose face, we don’t want them to get angry with us for correcting them. Thus the habits of Unilateral Control, protecting both ourselves and others, are reinforced.

    All of this explains why developing habits for learning, developing Model II / Mutual Learning habits, requires a lot of practice. We are fighting decades of acculturation on top of millions of years of evolution. To win this fight we need to be committed to what we are fighting for. We need to care more about learning than being right. We’ve got to care about making the most informed choice possible. When I can remember to hold these values in mind it becomes easier to act differently. I can go seek out those people who are most likely to disagree with me, who are most likely to teach me something. I can deliberately share my chain of reasoning and invite others to poke holes in it. With practice, lots of practice, I can come to see the person who corrects me as more friend than rival, and to feel the correction as the victory of joint learning rather than an individual moment of shame.

    Categories: Blogs

    Agile, but still really not Agile? What Pipeline Automation can do for you. Part 2.

    Xebia Blog - Thu, 10/22/2015 - 14:51

Organizations are adopting Agile, and teams are delivering on a feature-by-feature basis, producing business value at the end of every sprint. Quite possibly this is also the case in your organization. But do these features actually reach your customer at the same pace and generate business value straight away? And while we are at it: are you able to actually use feedback from your customer and apply it in the very next sprint?

    Possibly your answer is “No”, which I see very often. Many companies have adopted the Agile way of working in their lines of business, but for some reason ‘old problems’ just do not seem to go away...

    Hence the question:

    “Do you fully capitalize on the benefits provided by working in an Agile manner?”

Straightforward Software Delivery Pipeline automation might help you with that.

    In this post I hope to inspire you to think about how Software Development Pipeline automation can help your company to move forward and take the next steps towards becoming a truly Agile company. Not just a company adopting Agile principles, but a company that is really positioned to respond to the ever changing environment that is our marketplace today. To explain this, I take the Agile Manifesto as a starting point and work from there.

    In my previous post, I addressed Agile Principles 1 to 4, please read below where I'll explain about how automation can help you for Agile Principles 5 to 8.


    Agile Principle 5: Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

This is an important aspect of Agile. People are motivated by acknowledged empowerment, responsibility, ownership and trusted support when performing a job. This is one of the reasons Agile teams often feel so vibrant and dynamic. Still, in many organizations development teams work Agile but “subsequent teams” do not, resulting in mini-waterfalls slowing down your delivery cycle as a whole.

“Environment and the support needed” means that the Agile team should work in a creative and innovative environment where team members can quickly test new features. Where the team can experiment, systems “just work” and “waiting” is not required. The team should be enabled, so to speak, in terms of automation and in terms of innovation. This means that a build should not take hours, a deployment should not take days and the delivery of new infrastructure should not take weeks.

    Applying rigorous automation will help you to achieve the fifth objective of the Agile manifesto. There is a bit of a chicken and egg situation here, but I feel it is safe to say that a sloppy, broken, quirky development environment will not help in raising the bar in terms of motivating individuals. Hence "give them the environment and support they need, and trust them to get the job done".


    Agile Principle 6: The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

When working Agile, individuals and interactions are valued over the use of processes and tools. When starting a new project, teams should not be hindered by ticketing systems, extensive documentation to explain themselves and long service times. These types of “services” often exist at the boundaries between business units that bring different ‘disciplines’ to the solution.

    Although working Agile, many companies still have these boundaries in place. An important aspect of Continuous Delivery is executing work in Product teams dedicated to delivery and/or maintenance of an end-product. These product teams have all required disciplines working together in one and the same team. Operating in this manner alleviates the need for slow tooling & ticketing systems and inspires people to work together and get the job done.

    Organizing people as a team working on a product instead of individuals performing a task, which in itself has no meaning, will help you to achieve the sixth objective of the Agile Manifesto. There is not a lot automation can do for you here.


    Agile Principle 7: Working software is the primary measure of progress.

Agile aims towards delivering working software at the end of each sprint. For the customer that is basically what counts: working software, which can actually be used. Working software means software without defects. There is no point in delivering broken software at the end of every sprint.

When sending a continuous flow of new functions to the customer, each function should adhere to the required quality level straight away. In terms of quality, new functions might need to be ‘reliable’, ‘secure’, ‘maintainable’, ‘fast’, etc., which are all fully testable properties. Testing these types of properties should be an integral part of team activities. One of the principles related to Continuous Delivery addresses this topic through test automation. Without it, it is not possible to deliver working production-ready software by the end of each sprint.

Proper implementation of test disciplines, fostering a culture of delivering high-quality software, testing every single feature, and applying matching, automated test tooling addresses the topics related to the seventh objective of the Agile Manifesto. Supply a test for every function you add to the product, and automate this test.


    Agile Principle 8: Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

As software complexity grows exponentially, it becomes more difficult over time to maintain a constant pace of delivering new features when assembling, deploying, testing or provisioning manually. Humans are simply not made to perform multiple tasks fast, repetitively and consistently over a long period of time; that is what machines are for!

The eighth Agile principle typically comes down to a concept called ‘flow’. You might have an Agile team in place for creating new software, but what about the flow in the rest of your organization? Should the team wait for requirements to drip through, should it wait for the testers to manually test software, or is it the Operations team that needs to free resources in order to deploy software? To address this, handover moments from idea to product should be minimized as much as possible, and where possible principles of automation should be applied. This brings us back to build automation, test automation, deployment automation and the automation of infrastructure.


    Stay tuned for the next post, where I'll address the final four Agile principles of the bunch.


    Michiel Sens.

    Categories: Companies

    Measuring Agile Success?!?#?

    Agile Management Blog - VersionOne - Thu, 10/22/2015 - 14:30

About six months ago, I wrote a blog post called Top 10 Tips for Measuring Agile Success, and the reality is that it wasn’t necessarily a set of tips so much as a blog about the top ten ways people responded to the VersionOne State of Agile survey and some related metrics that support them. Way before that blog was ever published, the question of how to measure agile success was a common one that I and many other agile coaches would receive when working with organizations and executives. Since the blog was published, I’ve had more questions and, in some cases, some rather odd reactions to the concept of measuring agile success. Some questions are very direct — “Which metrics really work?” Or, “Which metrics should be used at the various levels of the organization?” Then there are the reactions or questions like, “Aren’t you aware of the impact of metrics?” Or, the statement, “Suggesting the one way is ridiculous.” Or, the best reaction, “Dude, I hate metrics.”

Okay, I can accept all this and I get the confusion and general concern, and trust me — I share some of these sentiments. Instead of looking at the question from the standpoint of which metrics are the best, let’s explore how we measure agile success and why it is important.

Let’s start with the “why”, and I think the primary “why” is obvious — the cost of change can be significant. There’s not only a tangible investment in training, coaching, reorganization, staff changes, and even re-engineering the physical environment, but there’s also the significant intangible cost associated with productivity loss due to teams reforming, working through the chaos, and emerging through the change usually with something that looks much different than what you started with. I don’t think I’ve been around a team or organization going through the change associated with adopting agile that hasn’t had staff turnover, fits-and-starts, and a brief time of general struggle both for the people and the software output as everyone comes up to speed. So, trying to understand the return or the off-setting value gained is an important reason to measure agile success. To that end, it’s not really measuring agile success; it is better stated as measuring the success of the process-change investment that the organization is embarking upon or has recently spent six months enduring.

Another “why” for measuring agile success is to enable the PDCA loop. The PDCA loop (a.k.a. the Deming Circle or Plan-Do-Check-Act [Adjust]) is a core business and leadership practice and it is called out in all lean and agile approaches. The concept is simple — establish a goal, decide what you are going to do, get it done, inspect the results, make adjustments based on observations, and then do it all over again as you march to the goal — the essence of iterative development and continuous improvement. Measuring the organization’s progress and performance allows for the inspection to occur; thus, you adapt and get better the next time around.

    So, we need to ensure that the organizational change we’ve embarked on is making the positive impact we expect and a key part of ensuring this is measuring to enable continuous improvement.

    How we measure our agile success is a bit more complex — mostly because there are two things to measure. First, we need to measure the adoption of agile principles, processes, and practices. Second, we need to measure how our organization is performing to assess the impact of changing to agile.

The approach to measuring agile process success generally revolves around leveraging agile assessments, which aim to identify where your organization is on an “agile maturity” spectrum. There are several long-established approaches that internal and external coaches use. The concept of measuring maturity is simple: conduct a self-assessment based on both quantitative and qualitative measures in several areas including team dynamics, team practices, requirements management, planning activities, and technical practices (just to name a few). For these measures to mean anything, you need to start with a baseline (how mature are you today?) and then select a reasonable cadence to re-assess on your road to … more maturity? There are some very useful existing maturity assessments out there, including Agility Health, the classic Nokia Test, and the 20+ others listed on Ben Linders’ blog.

Agile assessments do have some aspects of measuring impact; however, the focus is generally isolated to certain areas and/or used to reflect the success back to the process. Measuring agile success from the standpoint of impact on the organization should be more focused on the Moneyball metrics of the business. Measuring impact is sometimes much more difficult because it can be hard to draw a direct correlation between the agile delivery metrics and the traditional business metrics. It is also difficult because of the lack of understanding of the agile delivery metrics. Making matters worse is how people sometimes focus on the wrong ones, which takes me back to the Moneyball reference. It’s important for organizations to select the right metrics to focus on and the right ones to tie together. As mentioned by Michael Mauboussin in his HBR article The True Measures of Success, leadership needs to understand the cause and effect of metrics. What this means is that metrics, if not selected correctly, can provide misdirection and can result in misbehaviors — basically, people will make bad decisions and game the metric.

To give you an example of a [not so solid] agile success impact metric, let’s look at a common metric that people often argue about – sales revenue tied to the delivery organization’s velocity based on story points (e.g. revenue / velocity). The first challenge is the use of the terms story points [and velocity]: you tend to lose or confuse people not familiar with the concepts and, if they are familiar, an argument about estimation generally ensues and people often change their point measuring stick. To avoid this challenge, go with safer, lean metrics or, simply put, the count of stories or things (great advice from Jeff Morgan – @chzy). The next challenge with this metric is that it may be too generalized and not really lead to better results. There may be better goal-focused measures, such as publication mentions after a release that lead to an increase in the number of product trials. Or possibly a goal of reduced support tickets, which leads to improvements in customer retention or renewals. All of these are good, but alone they don’t necessarily provide an ability to measure agile success. To help assess your agile success, correlate the impact metrics with the lean, agile metric — the number of stories delivered during the same period. For example, use the number of stories delivered to normalize product revenue, the number of web visitors, the number of trials, and the number of support calls. Watch and assess these trends over six months and see the impacts.
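Normalizing business metrics by the count of stories delivered in the same period might look like the following sketch (all figures and field names are invented for illustration):

```python
# Hypothetical quarterly figures, purely for illustration.
periods = [
    {"quarter": "Q1", "stories": 48, "revenue": 120_000, "support_tickets": 90},
    {"quarter": "Q2", "stories": 60, "revenue": 168_000, "support_tickets": 84},
]

def normalize(period):
    """Express each business metric per story delivered."""
    n = period["stories"]
    return {
        "quarter": period["quarter"],
        "revenue_per_story": period["revenue"] / n,
        "tickets_per_story": period["support_tickets"] / n,
    }

for p in periods:
    print(normalize(p))
```

Trending these per-story figures over several periods, rather than the raw totals, is what lets you see whether the delivery change is actually moving the business numbers.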

I recently read a book called RESOLVED: 13 Resolutions for LIFE by Orrin Woodward. Although the book is aimed at leadership development, one of the resolutions talks about establishing and maintaining a scoreboard. The idea is that we should have a set of metrics that we constantly revisit and that help to power our PDCA loop. This is a long-running practice in business, and if you don’t already, I suggest you establish a scoreboard that helps you measure your agile success. It should include metrics from your process adoption assessment as well as your organization’s agile-adapted Moneyball metrics. In agile we often talk about big visible charts; your agile success scorecard should be one. Share the results of your agile journey and the impact it is having on your organization, and help people understand what the metrics mean and what decisions or actions should be made based on the indications of the metrics. There will be times things don’t look good, but done right, your agile success scorecard should help spur and inspire an environment of continuous improvement that embraces the agile principles and practices you’ve embarked on implementing.

Although I don’t call out any specific examples of agile success scorecards, it would be great if you would share your own examples, metrics you like, or resources that can help others.

    There are many worthy reads on this topic, but a couple more that I like are Agile Fluency, established by Diana Larsen and James Shore, as well as this article by Sean McHugh, How To Not Destroy your Agile Teams with Metrics.

    The post Measuring Agile Success?!?#? appeared first on The Agile Management Blog.

    Categories: Companies

    SonarLint: Fixing Issues Before They Exist

    Sonar - Thu, 10/22/2015 - 08:44

    I’m very happy to announce the launch of a new product series at SonarSource: SonarLint, which will help you fix code quality issues before they even exist.

    SonarLint represents a new approach to code quality: instant issue checking. It sits in the IDE and is totally developer-oriented. We’ve started with three variations: SonarLint for VisualStudio, SonarLint for Eclipse, and SonarLint for IntelliJ.

Version 1.x will be available for C# via SonarLint for VisualStudio, and for Java and PHP with both SonarLint for Eclipse and SonarLint for IntelliJ. So now you can start catching and fixing issues from your projects’ first keystrokes.

    Here’s a preview in VisualStudio:

    And here’s a preview for Eclipse:

    Later, we’ll add the ability to link SonarLint with a SonarQube instance.

    This complete break from the approach of previous implementations is what prompted us to start over with a new brand. With SonarLint, it’s a new day in code quality.

    Categories: Open Source

    AutoMapper 4.1.0 Released

    Jimmy Bogard - Thu, 10/22/2015 - 05:10

    Release notes here:

    Supports the following frameworks:

    • .NET 4.0
    • .NET 4.5
    • dotnet (all dnxcore/UWP targets)
    • MonoTouch
    • MonoTouch10
    • PCL profile 259
    • MonoDroid
    • WinRT/Windows Phone 8.1
    • Windows Phone 8.0
    • Silverlight 5

    This was a bit of an internal refactoring release. With the previous 4.0 release, I did some work to simplify type map resolution. Unfortunately, this left out a few corner cases. With the new 4.1 drop, if you’re using any sort of inheritance, you need to use Mapper.Initialize.

    I want to simplify this further in the 5.0 timeframe, but this is a start. The LINQ projections had some work as well, to support constructors and some groundwork to support OData.

    Finally, we added Dictionary/dynamic/ExpandoObject support for some simple, straightforward cases. This will be expanded in the future to support things similar to what the MVC model binder supports.



    Categories: Blogs
