
Feed aggregator

NeuroAgile Quick Links #6

Notes from a Tool User - Mark Levison - Fri, 11/07/2014 - 17:52
Interesting reports from the world of science that can be applied (or not) to Agile Teams.

Categories: Blogs

Retrospective Exercise: Vital Few Actions

Ben Linders - Fri, 11/07/2014 - 15:03
The aim of an agile retrospective is to define actions for the next iteration that will improve the way of working and help teams deliver more value to their customers. This retrospective exercise can be used within agile frameworks like Scrum, SAFe, XP or Kanban to have teams agree upon the vital few improvement actions that they will take. Continue reading →
Categories: Blogs

How a Product Team is improving value delivery rate with Kanban

Improving projects with xProcess - Fri, 11/07/2014 - 13:26


Here are my slides from #lkuk14. Full video available soon from Lean Kanban UK.
Categories: Companies

The Agile Reader – Weekend Edition: 11/07/2014

Scrumology.com - Kane Mar - Fri, 11/07/2014 - 05:59

You can get the Weekend Edition delivered directly to you via email by signing up at http://eepurl.com/0ifWn.

The Weekend Edition is a list of some interesting links found on the web to catch up with over the weekend. It is generated automatically, so I can’t vouch for any particular link but I’ve found the results are generally interesting and useful.

  • Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • Leading Scrum Experts, Braintrust Consulting Group, Return to Memphis to Host … – IT Business Net #Agile #Scrum
  • RT @yochum: The Guide on the Side #agile #scrum
  • RT @MasterScrum: Agile Data Scientist, A Disciplined Hiker or Reckless Hunter? #agile #scrum http://t.co/TuCFkpqwAA
  • 8 Ways To Overcome Resistance To An #Agile Process Rollout via @jonathanlevene #scrum
  • Are you #Agile? Seriously who puts a scrum shirt on a baby #geekout
  • Planning, Tracking & Managing Agile Web Development Sprints w/ Scrum & Intervals by @intervals http://t.co/nhBGUjXO3R
  • RT @iranfleitas: Infographic about “The Scrum framework in 30 seconds” #agile #scrum @ScrumAlliance http://t.co/1Ypa…
  • Less than 20 tickets left for the 6th annual GIVE THANKS FOR SCRUM event 11/25 in #Boston: #scrum #agile #innovation
  • #Scrum was born in #Boston. And that is why we GIVE THANKS FOR SCRUM every year here: #agile #lean #collaboration
  • RT @DanielMezick: #Scrum was born in #Boston. And that is why we GIVE THANKS FOR SCRUM every year here: #agile #lean…
  • Lookin for a #ProductOwner in #WilmingtonDE that has #agile and #scrum experience check out for info #career #job
  • The dangers inherent when Key Performance Indicators (KPIs) are used as a target to drive behavior #Scrum #Agile
  • Xebia Blog: Mutation Testing: How Good are your Unit Tests? #agile #scrum
  • SolutionsIQ: Automating Application Development, SAFe, and Other Takeaways from AgilePalooza Seattle #agile #scrum
  • RT @yochum: SolutionsIQ: Automating Application Development, SAFe, and Other Takeaways from AgilePalooza Seattle #ag…
  • RT @yochum: Xebia Blog: Mutation Testing: How Good are your Unit Tests? #agile #scrum
  • Confirmado @dbassi no #AgileTourBH 2014 #Agile #PMOT #AgileBrazil #AgileBR #ExtremAgile #SCRUM http://t.co/QHqbq0MCY1
  • Scrum Expert: User Stories for Agile Requirements #agile #scrum
  • RT @AgileTourBH: Confirmado @dbassi no #AgileTourBH 2014 #Agile #PMOT #AgileBrazil #AgileBR #ExtremAgile #SCRUM http…
  • Studie: Agile Methoden im Höhenflug – #Scrum #Kanban #DesignThinking via @heiseonline
  • FIRST LEGO League Team Sponsored By Scrum Alliance In Virginia – PR Newswire (press release) #Agile #Scrum
  • RT @scrum_coach: #Agility In All Things #scrumterms #agile #mentalagility #physicalagility #strategicagility
  • RT @iqberatung: Studie: Agile Methoden im Höhenflug – #Scrum #Kanban #DesignThinking via @heiseonline
  • RT @yochum: On Software Development, Agile, Startups, and Social Networking – Isaac Sacolick: Agile Data Sci… #agi…
  • Uzility now prompts you on your team’s activity, so you can track progress even easier. Check it out #agile #scrum
  • Agile by McKnight, Scrum by Day is out! Stories via @BLupano @AgileUniversity
  • Continuing the mission… and continually improving #Scrum #Agile
  • RT @MasterScrum: What Does QA do on the First Day of a Sprint? #agile #scrum http://t.co/180ciIpkkt
Categories: Blogs

    People over Process

    Agilitrix - Michael Sahota - Fri, 11/07/2014 - 04:44

    Here are the slides from my Keynote at Lean Into Agile Conference:


    Overview

    Agile and Lean are a means to an end. Once we are clear what our goals are and our approach is consistent with what we truly value, then we may hope for success.

    When we simplify the Agile Manifesto’s “Individuals and Interactions over Processes and Tools” we get “People over Process”. Agile is about people. It’s about a people-first culture. Lean is similar.

    Sadly, many organizations are mired in organizational debt: mistrust, politics and fear. Changing the process won’t fix this. We need to go to the root of it – to find a way to talk about and shift to a healthier culture: one that values people.

    The VAST (Vulnerability, Authentic Connection, Safety and Trust) model makes the dynamics of human systems visible and clarifies where we may apply leverage to foster lasting change.

    We outline a fundamentally different approach for organizational change: one where valuing people is integral to building lasting success.

    Why we need to pay attention: Agile Mindset vs. Practices

    The post People over Process appeared first on Catalyst - Agile & Culture.

    Related posts:

    1. Letting Go of Agile (Culture) “If you want something very, very badly, let it go...
    2. The Business Case for an Authentic Workplace People are messy: they have personalities and emotions. In this...
    3. WholeHearted Manifesto: We Value People The WholeHearted Manifesto consists of one value statement: We Value...


    Categories: Blogs

    R: Joining multiple data frames

    Mark Needham - Fri, 11/07/2014 - 03:29

    I’ve been looking through the code from Martin Eastwood’s excellent talk ‘Predicting Football Using R’ and was intrigued by the code which reshaped the data into the format expected by glm.

    The original looks like this:

    df <- read.csv('http://www.football-data.co.uk/mmz4281/1314/E0.csv')
     
    # munge data into format compatible with glm function
    df <- apply(df, 1, function(row){
      data.frame(team=c(row['HomeTeam'], row['AwayTeam']),
                 opponent=c(row['AwayTeam'], row['HomeTeam']),
                 goals=c(row['FTHG'], row['FTAG']),
                 home=c(1, 0))
    })
    df <- do.call(rbind, df)

    The initial data frame looks like this:

    > library(dplyr)
    > df %>% select(HomeTeam, AwayTeam, FTHG, FTAG) %>% head(1)
      HomeTeam    AwayTeam FTHG FTAG
    1  Arsenal Aston Villa    1    3

    And we want to get it to look like this:

    > head(df, 2)
                    team    opponent goals home
    HomeTeam     Arsenal Aston Villa     1    1
    AwayTeam Aston Villa     Arsenal     3    0

    So for each row in the initial data frame we want to have two rows: one representing each team, how many goals they scored in the match and whether they were playing at home or away.

    I really like dplyr’s pipelining function so I thought I’d try and translate Martin’s code to use that and other dplyr functions.

    I ended up with the following two sets of function calls:

    df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1)
    df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0)

    I’m doing pretty much the same thing as Martin except I’ve used dplyr’s select and mutate functions to transform the data frame.

    The next step was to join those two data frames together and with Nicole’s help I realised that there are many ways we can do this.

    The functions that will do the job are: rbind, dplyr’s union, plyr’s join and base R’s merge.

    We decided to benchmark them to see which was able to transform the data frame the fastest:

    # load data into data.frame
    dfOrig <- read.csv('http://www.football-data.co.uk/mmz4281/1314/E0.csv')
     
    original = function(df) {
      df <- apply(df, 1, function(row){
        data.frame(team=c(row['HomeTeam'], row['AwayTeam']),
                   opponent=c(row['AwayTeam'], row['HomeTeam']),
                   goals=c(row['FTHG'], row['FTAG']),
                   home=c(1, 0))
      })
      do.call(rbind, df)
    }
     
    newRBind = function(df) {
      rbind(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
            df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0))  
    }
     
    newUnion = function(df) {
      union(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
            df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0))  
    }
     
    newJoin = function(df) {
      join(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
           df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0),
          type = "full")  
    }
     
    newMerge = function(df) {
      merge(df %>% select(team = HomeTeam, opponent = AwayTeam, goals = FTHG) %>% mutate(home = 1),
           df %>% select(team = AwayTeam, opponent = HomeTeam, goals = FTAG) %>% mutate(home = 0),
           all = TRUE)  
    }
    > library(microbenchmark)
     
    > microbenchmark(original(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median    uq max neval
     original(dfOrig) 189.4 196.8 202.5    201 205.5 284   100
     
    > microbenchmark(newRBind(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median    uq   max neval
     newRBind(dfOrig) 2.197 2.274 2.396  2.309 2.377 4.526   100
     
    > microbenchmark(newUnion(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median   uq   max neval
     newUnion(dfOrig) 2.156 2.223 2.377  2.264 2.34 4.597   100
     
    > microbenchmark(newJoin(dfOrig))
     
    Unit: milliseconds
                expr   min    lq  mean median   uq   max neval
     newJoin(dfOrig) 5.961 6.132 6.817  6.253 6.65 11.95   100
     
    > microbenchmark(newMerge(dfOrig))
    Unit: milliseconds
                 expr   min    lq  mean median    uq   max neval
     newMerge(dfOrig) 7.121 7.413 8.037  7.541 7.934 13.32   100

    We actually get about a 100-fold speed-up over the original function if we use rbind or union, whereas with merge or join it’s around 30 times quicker.

    In this case using merge or join is a bit misleading because we’re not actually connecting the data frames together based on any particular field – we are just appending one to the other.

    The code’s available as a gist if you want to have a play.

    Categories: Blogs

    SolutionsIQ: Automating Application Development, SAFe, and Other Takeaways from AgilePalooza Seattle

    Agile Management Blog - VersionOne - Thu, 11/06/2014 - 22:39

    “How do we get the servers to be as responsive as our software development?”

    SolutionsIQ developer/coach Ben Tomasini asked this question last week at AgilePalooza in Seattle. It was just one of the many topics that he and a number of other speakers covered at the event. Attendees learned about system automation, scaling frameworks such as SAFe™ and some of the interesting ways that agile and scrum are being used at development organizations around the Seattle community.

    Check out Ben’s ESPN highlights reel from the event:

    For more information about SolutionsIQ visit the AgileIQ Blog.

    Or go to agilepalooza.com to find out when the next AgilePalooza is coming to your area.

    Categories: Companies

    Mutation Testing: How Good are your Unit Tests?

    Xebia Blog - Thu, 11/06/2014 - 21:56

    You write unit tests for every piece of code you deliver. Your test coverage is close to 100%. So when the time comes to make some small changes to the existing code, you feel safe and confident that your test suite will protect you against possible mistakes.
    You make your changes, and all your tests still pass. You should be fairly confident now that you can commit your new code without breaking anything, right?

    Well, maybe not. Maybe your unit tests were fooling you. Sure they covered every line of your code, but they could have performed the wrong assertions.
    In this post I will introduce mutation testing. Mutation testing can help you find omissions in your unit tests.

    Let's begin with a small example:

    package com.xebia;
    
    public class NameParser {
      public Person findPersonWithLastName(String[] names, String lastNameToFind) {
        Person result = null;
        for(int i=0; i <= names.length; i++) { // bug 1
          String[] parts = names[i].split(" ");
          String firstName = parts[0];
          String lastName = parts[1];
          if(lastName.equals(lastNameToFind)) {
            result = new Person(firstName, lastName);
            break;
          }
        }
        return result;
      }
    }
    

    NameParser takes a list of strings, each consisting of a first name and a last name, searches for the entry with a given last name, instantiates a Person object from it and returns it.
    Here is the Person class:

    package com.xebia;
    
    public class Person {
      private final String firstName;
      private final String lastName;
    
      public Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
      }
    
      public String getFirstName() {
        return firstName;
      }
    
      public String getLastName() {
        return firstName; // bug 2
      }
    }
    

    You can see that there are two bugs in the code. The first is in the loop in NameParser, which runs past the end of the names array. The second is in Person, which mistakenly returns firstName in its getLastName method.

    NameParser has a unit test:

    package com.xebia;
    
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    
    public class NameParserTest {
      private NameParser nameParser;
      private String[] names;
    
      @Before
      public void setUp() {
        nameParser = new NameParser();
        names = new String[]{"Mike Jones", "John Doe"};
      }
    
      @Test
      public void shouldFindPersonByLastName() {
        Person person = nameParser.findPersonWithLastName(names, "Doe");
        String firstName = person.getFirstName();
        String lastName = person.getLastName();
        assertEquals("John", firstName);
      }
    }
    

    The unit test covers the Person and NameParser code 100% and succeeds!
    It doesn't find the bug in Person.getLastName because it simply forgets to make an assertion on it. And it doesn't find the bug in the loop in NameParser because it doesn't test the case where the names list does not contain the given last name; so the loop always terminates before it has a chance to throw an IndexOutOfBoundsException.
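As a sketch of the two checks the test omits, here is a hypothetical, self-contained plain-Java version (no JUnit; the class MissingAssertionsDemo and its main method are made up for illustration) that runs trimmed-down copies of the buggy code above:

```java
// Hypothetical, self-contained sketch of the two checks the test omits,
// run against trimmed-down copies of the post's (buggy) classes.
public class MissingAssertionsDemo {

    static class Person {
        private final String firstName;
        private final String lastName;
        Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
        String getFirstName() { return firstName; }
        String getLastName() { return firstName; } // bug 2, as in the post
    }

    static Person findPersonWithLastName(String[] names, String lastNameToFind) {
        Person result = null;
        for (int i = 0; i <= names.length; i++) { // bug 1, as in the post
            String[] parts = names[i].split(" ");
            if (parts[1].equals(lastNameToFind)) {
                result = new Person(parts[0], parts[1]);
                break;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String[] names = {"Mike Jones", "John Doe"};

        // The assertion the original test forgets: it fails against bug 2.
        Person person = findPersonWithLastName(names, "Doe");
        System.out.println("Doe".equals(person.getLastName())); // false: bug 2 exposed

        // The case the original test omits: with bug 1 present this
        // throws IndexOutOfBoundsException instead of returning null.
        try {
            findPersonWithLastName(names, "Smith");
            System.out.println("returned normally");
        } catch (IndexOutOfBoundsException e) {
            System.out.println("bug 1 exposed: " + e.getClass().getSimpleName());
        }
    }
}
```

Adding exactly these two checks to NameParserTest would make both bugs fail the build.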

    The last case especially is easy to overlook, so it would be nice if a tool existed that could detect these kinds of problems.
    And there is one; actually, there are a couple. For this post I have chosen PIT; at the end are links to some alternatives.

    But first: what is mutation testing?

    A mutation test will make a small change to your code and then run the unit test(s). Such a change is called a 'mutant'. If a change can be made and the unit tests still succeed, it will generate a warning saying that the mutant 'survived'.
    The test framework will try to apply a number of predefined mutants at every point in your code where they are applicable. The higher the percentage of the mutants that get killed by your unit tests, the better the quality of your test suite.
    Examples of mutants are: negating a condition in an If statement, changing a conditional boundary in a For loop, or throwing an exception at the end of a method.
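To make the conditional-boundary case concrete, here is a hypothetical, self-contained sketch (the BoundaryMutantDemo class and its passes method are made up for illustration, not taken from PIT's output):

```java
// Hypothetical example of a "conditional boundary" mutant,
// where '>=' is changed into '>'.
public class BoundaryMutantDemo {

    // Original production code: 60 is a passing score.
    static boolean passes(int score) {
        return score >= 60;
    }

    // The kind of mutant a framework would generate for the line above.
    static boolean passesMutant(int score) {
        return score > 60;
    }

    public static void main(String[] args) {
        // A weak test only probes values far from the boundary,
        // so original and mutant behave identically: the mutant survives.
        System.out.println(passes(90) == passesMutant(90)); // true
        System.out.println(passes(10) == passesMutant(10)); // true

        // A test at the boundary itself kills the mutant.
        System.out.println(passes(60) == passesMutant(60)); // false
    }
}
```

Only the check at the boundary value distinguishes the two methods, which is exactly why a surviving boundary mutant is a sign of a weak test suite.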

    Putting NameParser's testcase to the test with PIT

    PIT stands for Parallel Isolated Test, which is what the project was originally meant for. But mutation testing turned out to be a much more interesting goal, and it required much of the same plumbing.

    PIT integrates with JUnit or TestNG and can be configured with Maven, Gradle and others. Or it can be used directly as a plugin in Eclipse or IntelliJ.
    I'm choosing the last option: the IntelliJ plugin. The setup is easy: just install PITest from the plugin manager and you are ready to go. Once installed, you'll find a new launch configuration option called PIT in the 'Edit Configurations' menu.

    PIT launch configuration

    You have to specify the classes where PIT will make its mutations under 'Target classes'.
    When we run the mutation test, PIT creates an HTML report with the results for every class.
    Here are the results for the NameParser class:

    NameParser mutation testing results

    As you can read under 'Mutations', PIT has been able to apply five code mutations to the NameParser class. Four of them resulted in a failing NameParserTest, which is exactly what we'd like to see.
    But one of them did not: when the condition boundary in line 6, the loop constraint, was changed, NameParserTest still succeeded!
    PIT changes loop constraints with a predefined algorithm; in this case, when the loop constraint was i <= names.length, it changed the '<=' into a '<'. Actually this accidentally corrected the bug in NameParser, and of course that didn't break the unit test.
    So PIT found an omission in our unit test here, and it turned out that this omission even left a bug undetected!

    Note that this last point doesn't always have to be the case. It could be that, for the correct behavior of your class, there is room for some conditions to change.
    In the case of NameParser, for instance, it could have been a requirement that the names list always contains an entry with the last name to be found. In that case the behavior for a missing last name would be unspecified, and an IndexOutOfBoundsException would have been as good a result as anything else.
    So PIT can only find strong indications of omissions in your unit tests; they don't necessarily have to be actual omissions.

    And here are the results for the Person class:

    Person mutation test results

    PIT was able to do two mutations in the Person class; one in every getter method. Both times it replaced the return value with null. And as expected, the mutation in the getLastName method went undetected by our unit test.
    So PIT found the second omission as well.

    Conclusion

    In this case, mutation testing would have helped us a lot. But there can still be cases where possible bugs go unnoticed. In our code, for example, there is no test in NameParserTest that verifies the behavior when an entry in the names list does not contain both a first name and a last name. PIT didn't find this omission.
    Still, it might make good sense to integrate mutation testing into your build process. PIT can be configured to break your Maven build if too many warnings are found.
    There's a lot more that can be configured as well, but for that I recommend a visit to the PIT website at www.pitest.org.
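As a sketch of that configuration (plugin coordinates and the mutationThreshold parameter as documented on pitest.org; the version number is illustrative, so check for the current release), breaking the build on surviving mutants might look like:

```xml
<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version>1.1.4</version>
  <configuration>
    <!-- Fail the build if fewer than 80% of generated mutants are killed. -->
    <mutationThreshold>80</mutationThreshold>
    <targetClasses>
      <param>com.xebia.*</param>
    </targetClasses>
  </configuration>
</plugin>
```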

    Alternatives

    PIT is not the only mutation testing framework for Java, but it is the most popular and the one most actively maintained. Others are µJava and Jumble.
    Although most mutation testing frameworks are written for Java, probably because it's so easy to dynamically change Java bytecode, mutation testing is possible in some other languages as well: notable are grunt-mutation-testing for JavaScript and Mutator, a commercial framework which is available for a couple of languages.

    Categories: Companies

    Continuing our journey together to focus on YOUR One Shiny Object…

    Mike Vizdos - Implementing Scrum - Thu, 11/06/2014 - 21:25
    www.implementingscrum.com -- Cartoon -- May 6, 2008

    Hi.

    It’s been a while [again] since posting anything new at www.implementingscrum.com.

    I see people are sharing the information (all around the world) in cubicles and Scrum team rooms.  People are linking comic strips from their blogs, using them in presentations, and including them in books.

    The Chicken and Pig cartoons here are now firmly established as folklore in the Scrum and Agile communities today.

    People either love the story or hate it; however, an amazing thing is happening — people still talk about them AND they are still used to help start tough conversations in software development.

    Cool.

    And yet I ask myself, why am I not posting *regularly* here anymore?

    The answer is pretty simple.  I’ve moved on and evolved my interests over the years.  The comic strips are obviously not my area of *passion* anymore; they no longer get me out of bed excited to take on the world each day.

    And.

    That. Is.  OK.

    There is still a boat load of information out here to share, consume, and learn with one another.

    So.

    Heading into 2015 (and beyond), I’ll still be maintaining this site and will continue to push out regular reminder postings with the old comic strips (and possibly some new information as a topic is relevant).  If you want to help me on this, please let me know (the ball is in your court on this one, and YES I am serious with this invitation).

    Where else am I headed into 2015?

    A concept I am calling, “One Shiny Object.”

    I’ve been working with people (all around the world) for many years and have found that my passion today is helping people figure out their own “One Shiny Object.”

    It’s about keeping things simple.  Removing complexity.

    Very similar to the concepts we live by in the Scrum and Agile world.  It goes beyond that now (and probably always has… I’ll be taking the tools in that box out beyond just the software development world).

    Out on Twitter (@mvizdos) you can catch me doing this (almost daily) now.  At conferences, this is something I passionately talk to audiences about.

    With clients, it’s all about this:

    Focus.  #deliver

    Interested in following me (Mike Vizdos) on this new journey?

    Please jump over to www.OneShinyObject.com and let me know your name and e-mail address; from there, we’ll branch off from the topic of this blog about Implementing Scrum and then you’ll see what we can do together to focus on your One Shiny Object.

    Amazing things are already happening.  Join me (still FREE!).

    You’ll still be getting information here about Implementing Scrum, and I reiterate my humble request that you get involved here to keep things relevant in your world of Scrum today.

    It’s all a journey!

     

    Categories: Blogs

    Lean Metrics: Measure and Improve the Flow of Work

    Watch this webinar recording to learn how managing your work via a Kanban system gives you the tools to measure and improve your team’s effectiveness. About this Webinar Chris Hefley, CEO of LeanKit, provides an introduction to Lean Metrics that includes: – What to track, including total WIP, blockers, throughput, and lead time – How […]

    The post Lean Metrics: Measure and Improve the Flow of Work appeared first on Blog | LeanKit.

    Categories: Companies

    The Guide on the Side

    Agile Management Blog - VersionOne - Thu, 11/06/2014 - 19:59

    I’ve delivered quite a few training classes over the last several years, most of them on how to effectively utilize Scrum and Agile tooling in the delivery of software. And yes, I’ve relied heavily on PowerPoint decks to get my message across, doing my best not to read from the screen, but to deliver the information in an engaging way. If you’re a trainer, you know this can be a tough thing to do. You gotta be on, and know your subject matter inside and out. Without that, not much else matters. Once you’ve got that down, it’s a matter of conveying the message and helping the learners understand. How we do this varies. Much of it is our own personal style, and what we learned from someone else along the way. But here’s a dirty little secret… many of us trainers aren’t really professional trainers at all. We just know something so well that someone made the decision to throw us in front of people to share our knowledge.

    I’ve seen some really good showmen/women as I’ve attended training classes over my career in IT. It’s a truly impressive (and sometimes entertaining) thing to watch; a performance really. They know their stuff, no referring to their notes, lots of eye contact, voice inflection and fluctuation, a few jokes thrown in for good measure, smooth flow and spot on timing. And we all clap at the end. Bravo! I aspired to be that polished.

    But a colleague recently told me about this idea called ‘Training from the BACK of the Room’ based on a book of the same name by Sharon L. Bowman.

    http://trainingfromthebackoftheroom.de/

    In true Agile fashion, I tried it out in my Agile tool training classes. At first, it seemed uncomfortable to me. I had my old slide deck down pat, so who better to share all that great information than yours truly? In my traditional training model, I was the ‘sage on the stage’.

    But I vowed to really give this new thing a shot. I still used a PowerPoint deck, but the number of slides went from about 100 to 15. Training became more of a conversation, a series of exploratory exercises, and discussions afterward.

    Rather than providing step by step instructions on how to perform a certain exercise, I’d give them a challenge, like…

    • Identify at least three help options in the tool.
    • Create two user stories. Name one ‘Add book to wish list’ and the other ‘Remove book from wish list’.
    • Create an Epic called ‘Manage Customer Account’.
    • Create three child backlog items under that Epic called…

    You get the idea here. I’d have similar challenges around Release Planning, Sprint Planning, blocking stories, closing stories, setting up notifications, creating and sharing conversations, reports, etc.

    I get folks heavily involved in showing the class what they did and how they did it. Dig into their thought process. Recognize and appreciate that others may have done the same thing differently, but achieved the same result. Literally have them come up to the front of the class and drive on my laptop, showing us all how they did it (see pic above). I thought I’d have to call on folks to get participation, but I didn’t really need to. They mostly volunteer. They’re eager to share what they learn. I encourage folks to work in pairs (paired learning), but not everyone does, which is OK.

    The feedback was surprising (to me anyway). Initially, I felt like I wasn’t really doing my job as a professional trainer. But folks loved this new format! The feedback forms (which were better than my previous classes) only told part of the story. In addition, people would come up after class and tell me they had never had a training course like this before. The majority of students really liked being engaged in this new way. To be fair, there was a minority that didn’t really care for it. I understand.

    Oh, and yes, I literally did train from the back of the room (sometimes the side or front too). Most of my time was spent walking around, helping folks who got stuck or had questions about the exercises/challenges I gave them. The struggle is part of the learning.

    At the end of the day, what I learned is that being the ‘sage on the stage’ is not as good as being the ‘guide on the side’. I know… it rhymes, but it’s true. We shouldn’t expect a performance from our trainers, or sit in awe as they impress us with all their knowledge and showmanship. That’s not the point. As students in a training class, our goal should be to learn something that hopefully helps us do our jobs better. As a trainer, it should be to help them learn. When they go back to their real jobs, they should be able to recall what they learned and apply it to their own unique situation. As trainers, we can make it stick by engaging students, challenging them, asking questions and guiding.

    If you’ve attended training like this, what did you like (or dislike) most?

    If you’re a trainer, have you seen this method applied to training other than ‘technical ‘or ‘tool’ training?

    Categories: Companies

    User Stories for Agile Requirements

    Scrum Expert - Thu, 11/06/2014 - 18:34
    The technique of expressing requirements as user stories is one of the most broadly applicable techniques introduced by the agile processes. User stories are an effective approach on all time-constrained projects and are a great way to begin introducing a bit of agility to your projects. This session explains how to identify and write good user stories. It describes the six attributes that good stories should exhibit and presents thirteen guidelines for writing better stories. Learn how user role modeling can help when gathering a project’s initial stories. Because requirements ...
    Categories: Communities

    Advanced Topics in Agile Planning

    TV Agile - Thu, 11/06/2014 - 18:27
    Velocity is perhaps the most useful metric available to agile teams. In this session we will look at advanced uses of velocity for planning under special but common circumstances. We will see how to forecast velocity in the complete absence of any historical data. We will look at how a new team can forecast velocity […]
    Categories: Blogs

    Agile Tour Paris, Paris, France, November 26 2014

    Scrum Expert - Thu, 11/06/2014 - 18:11
    Agile Tour Paris is a one-day conference focused on agile software development and Scrum that takes place in the capital of France. All the presentations and workshops are in French. In the agenda Agile Tour Paris, you can find presentations and workshops like “Scrum Coach Clinic”, “Lego 4 Scrum”,  “Human Centered Lean Management”, “Ways to Solution: an Unexpected Journey”, “Communication on Agile Projects  – the Palo Alto Theory on Communication”, “The Seven Agile Sins”. Web site: http://at2014.agiletour.org/fr/paris.html Location for the 2014 conference: Microsoft France, 41 Quai Président Roosevelt, 92130 Issy-les-Moulineaux, France
    Categories: Communities

    How To Improve the IT-Business Relationship

    J.D. Meier's Blog - Thu, 11/06/2014 - 17:38

    It’s possible to change IT from a poorly respected cost center to a high-functioning business partner.

    Driving business transformation is a people, process, and technology thing.

    Some people think they can change their business without IT.   The challenge is that technology is the enabler of significant business change in today’s digital landscape.  Cloud, Mobile, Social, and Big Data all bring significant capabilities to the table, and IT can hold the keys.

    But the business doesn’t want to hear that.

    Business Leaders Want to Hear About the WHY and WHAT of the Business

    Business leaders don’t want to hear about the HOW of technology.

    Business leaders want to hear about the impact on their business. They want to hear how predictive analytics can help them build a better pipeline, or target more relevant offers. They want to hear how they can create cross-sell/upsell opportunities in real time. And they want to hear about the business benefits and the KPIs that will be impacted by choosing a particular strategy.

    The reality is that the new Digital Masters of the emerging Digital Economy bring their IT with them, and in many cases, their IT even helps lead the business into the new Digital Frontier.

    In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned from companies that are digital masters that created their digital visions and are driving business change.

    How IT Can Change Its Game

    While it takes work on both sides, IT can change its game by creating transparency around performance, roles, and value.  This includes helping employees think and talk differently about what they do.   IT can show very clearly how it delivers value for the money.  And IT can change the way IT and business leaders make investment decisions and assess the returns.

    IT Needs to Speak Business

    The CIO, and everybody in IT, needs to speak the language of business.

    Via Leading Digital:

    “Poor relations between IT and business leaders can have many causes.  Sometimes it's the personality of the IT leader.  A common complaint among senior executives is that their CIO seems to speak a different language from the business.  Another is that the CIO doesn't seem to understand what's really important.  For example, a chemical company CIO we interviewed described how he communicates regularly with business executives about the innovative possibilities of digital technologies.  Yet none of his business executive peers (whom we interviewed separately) seemed to find the discussions credible.”

    IT Needs to Deliver Better, Faster, and More Reliably than Outsourcing

    It’s a competitive world, and IT needs to continuously find ways to deliver solutions that make business sense.

    Via Leading Digital:

    “Sometimes the issue arises from IT's delivery capability.  According to Bud Mathaisel, who has served as CIO in several large companies, 'It starts with competence in delivering services reliably, economically, and at very high quality.  It is the absolute essential to be even invited into meaningful dialog about how you then build on that competence to do something very interesting with it.'  Unfortunately, some IT units today do not have this competence.  One business executive we interviewed said, 'IT is a mess.  Its costs are not acceptable.  It proposes things in nine or ten months, where external firms could do them in three to nine weeks.  We started offshoring our IT, and now our IT guys are trying to change.' A legacy of poor communication, byzantine decision processes, and broken commitments is no foundation on which to build a strong IT-business relationship.”

    IT Needs a Good Digital Platform to Be High-Performing IT

    For the business to bet on IT, IT needs to be high-performing.  And for IT to be high-performing, it needs a good digital platform.

    Via Leading Digital:

    “However, the fault doesn't always rest only with IT leaders.  In many cases, business executives share some of the blame ... high-performing IT requires a good digital platform, and good platforms require discipline.  If your approach to working with IT can be characterized by impatience, unreasonable expectations, or insisting on doing things your way, then you'll need to think about how to change your side of the relationship.”

    CIOs Can Lead Digital Business Transformation

    Key business transformation takes technology.  CIOs can help lead the business transformation, whether it's through shared goals with the business, creating a new governance mechanism, or creating a new shared digital unit.

    Via Leading Digital:

    “Regardless of the case, if your IT-business relationships are poor, it's essential to fix the problem.  A bank executive stated, 'IT has been brought closer to business during the last five years.  It is very important to success because many of the important transformations in our business are enabled by technology.'  With strong relationships, IT executives can help business executives meet their goals, and business executives listen when IT people suggest innovations.  Executives on both sides are willing to be flexible in creating new governance mechanisms or shared digital units.  At Codelco, Asian Paints, and P&G, the CIO even leads digital transformation for the company.”

    CIOs Can Help Drive the Bus with the Executive Team

    CIOs can help drive the bus, but it takes more than senior sponsorship.

    Via Leading Digital:

    “So, how can you start to improve your IT-business relationship?  Angela Ahrendts, CEO of Burberry, told her CIO he needed to help drive the bus with the executive team.  However, leadership changes or top-down mandates are only the start of the change.  Few CIOs can change the business by themselves, and not all business executives will climb on the bus with the CIO, even if the CEO demands it.”

    Fix How You Communicate to Fix the IT-Business Relationship

    Start by fixing how you communicate between the business and IT.

    Via Leading Digital:

    “Fixing the IT-business relationship can take time, as people learn how to trust each other and redefine the way they work together.  As with any struggling relationship, the best starting point is to fix the way you communicate.  Does IT really cost too much, or are costs reasonable, given what IT has to do? Is the IT unit really too bureaucratic, or do all of those procedures actually serve a useful purpose?  Are you a good partner to IT or a difficult one?  How can IT make it easier for you to get what you need, while still making sure things are done correctly?  What investments can help IT improve its technology, internal processes, cost-effectiveness, quality, or speed?”

    Change IT from a Poorly Respected Cost Center to a High-Functioning Business Partner

    It’s possible to change IT from a low-performing cost center to a high-performing business partner.  Companies do it all the time, and MIT has the research.

    Via Leading Digital:

    “MIT research into IT turnarounds has identified a series of steps that can change IT from a poorly respected cost center to a high-functioning business partner.  The key change mechanism is transparency--around performance, roles, and value.  The first step is to help IT employees think, and talk, differently about what they do.  The second step proceeds to showing very clearly how well (or how poorly) IT delivers value for money--the right services at the right quality and right price, and where problems still exist.  And then the third step moves to changing the way IT and business leaders make investment decisions and assess the returns that projects deliver.  Through transparency around roles, performance, and investments, both sides can make smoother decisions and work together to identify and deliver innovation.”

    If you’re part of a business that wants to change the world, start by reimagining how IT can help you achieve the art of the possible.

    You Might Also Like

    Building Better Business Cases for Digital Initiatives

    Drive Business Transformation by Reenvisioning Your Customer Experience

    Drive Business Transformation by Reenvisioning Your Operations

    How Digital is Changing Physical Experiences

    The Future of IT Leaders

    Categories: Blogs

    Replacing Backlog Grooming

    Leading Agile - Mike Cottmeyer - Thu, 11/06/2014 - 17:37
    The Problem

    Over the last few years, I’ve worked with numerous teams. One thing they all struggle with is backlog grooming.  They all know they need to do it.  Unfortunately, they all seem to struggle with when to do it or who should do it.

    The most interesting struggle with backlog grooming happened two years ago.  The “story time” meetings took place at the beginning of a month-long sprint.  The manager stated that the work to be completed and delivered during a sprint had to be refined within that same sprint.  This helped explain why the team thought they needed month-long sprints.  When I asked why they would try to refine work in the first two weeks of the sprint and then complete that work in the second two weeks, you know what their answer was?  “It said to do it like that in the Scrum Guide!”

    After I clarified their misunderstanding, we established a cadence to continuously mature the backlog.  A few select people would participate in the scheduled meetings.  We would reserve capacity from each sprint to get work ready for future sprints.  The team was able to shorten their sprints to two weeks, and they more than doubled their delivery rate without increasing defect rates.  With that as an example, over the last few years I have evolved my practice of backlog grooming.  Let’s look at some key dates in the evolution of backlog grooming.

    Evolution of Backlog Grooming

    2005: “grooming the product backlog” is mentioned by Mike Cohn on what is now the Scrum Users Yahoo Group;

    I always have teams allot some amount of time to “grooming the product backlog” to make sure it’s ready for the next sprint.

    2008: A formal description of “backlog grooming” is given by Kane Mar under the name Story Time, recommending it as a regular meeting

    I call these meetings “Story Time” meetings….Although they are not a formal part of Scrum, I’ve found that Story Time greatly improves project planning and reduces confrontational planning meetings, which are all too common for many teams.  A Story Time meeting should be held at the same time and location every single week and involve the entire team, including the Product Owner and ScrumMaster. The sole intention of these weekly meetings is to work through the backlog in preparation for future work.

    2011: The practice of “backlog grooming” is called “backlog refinement” and promoted to an “official” element of Scrum with its inclusion in the Scrum Guide

    Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog. This is an ongoing process in which the Product Owner and the Development Team collaborate on the details of Product Backlog items. During Product Backlog refinement, items are reviewed and revised. The Scrum Team decides how and when refinement is done. Refinement usually consumes no more than 10% of the capacity of the Development Team. However, Product Backlog items can be updated at any time by the Product Owner or at the Product Owner’s discretion.

    2014: Derek Huether from LeadingAgile evolved the practice of backlog grooming with one of his clients, to allow the practice to work better at scale, calling it a “Progression” workshop.

    When operating at scale, my client deals with different problems than a standard Scrum team.  They’re dealing with separate lines of business. They’re dealing with multiple delivery teams for each line of business, to include external vendors. They’re dealing with a portfolio roadmap that has annual plan items and budgets. Our strategy encapsulated the entire product delivery value stream, while ensuring we had enough architectural runway. We progressed work to be consumed by delivery teams, via a series of workshops.

    Progression Workshops

    Our progression workshops differ slightly from the Story Time meeting detailed by Kane Mar and the refinement meeting mentioned in the Scrum Guide.  Counter to Story Time, we don’t invite the entire team. Instead, we have elevated the workshop to a group some have come to know as a Product Owner (PO) team.  The people within the PO team will vary, depending on the line of business.  Yes, there will be a Product Owner (Product Lead) and a facilitator, but from there we’ll include the development lead, the testing lead, and an architect.  We’ve found two key challenges when operating at scale. First, is there a well-defined backlog that is ready enough to be consumed by different delivery teams?  Second, is the work being queued up for the delivery teams free and clear of other teams?  That is, have we decomposed it in such a way that we’ve minimized dependencies on other teams? Beyond that, in order to maintain some degree of architectural runway, we continually refactor existing platforms. Architectural changes are not only made incrementally; we also require an architect to be present at every progression workshop.

    Counter to the Scrum Guide, I’m not going to be prescriptive about how much capacity PO teams should commit to progression workshops.  The goal is to have enough work ready for delivery teams to consume for a few sprints.

    When progressing work, we do expect some artifacts to be generated that contribute to the team’s understanding of what will be developed, tested, and delivered. Below is a partial list of potential artifacts. To be clear, we do not expect all of these to be generated.

    Potential Deliverables
    • System Context Diagram
    • Dependency and Risk Work Items
    • System Architecture Guidance Acknowledgement
    • Use Case Diagram and document
    • Business Process Flow
    • Known Business Rules
    • High Level Technology Alignment
    • Architecture Backlog for Planned Work
    • As-Is Data Contracts
    • Feature Work Items Assigned to Delivery Team
    • Feature Business Value and Acceptance Criteria
    • Feature Stack Rank
    • Test Strategy

    So, that’s the high-level view of the Progression Workshop.  Most of the time, a feature will require two or more progression workshops before work is ready to be consumed by a delivery team.  Once features progress to a defined level of shared understanding, the delivery teams assist in the decomposition of features to user stories.  In this way, work is decomposed to the right level of detail for each delivery team.

    I’m curious, how have you scaled feature development and backlog grooming in your organizations? What mechanics outside of the standard Scrum process have you found useful to refine work to be completed by delivery teams?  Have you evolved Story Time or backlog refinement?

    The post Replacing Backlog Grooming appeared first on LeadingAgile.

    Categories: Blogs

    Swift Function Currying

    Xebia Blog - Thu, 11/06/2014 - 13:55

    One of the lesser known features of Swift is Function Currying. When you read the Swift Language Guide you won't find anything about curried functions. Apple only describes it in their Swift Language Reference. And that's a pity, since it's a very powerful and useful feature that deserves more attention. This post will cover the basics and some scenarios in which it might be useful to use curried functions.

    I assume you're already somewhat familiar with function currying since it exists in many other languages. If not, there are many articles on the Internet that explain what it is and how it works. In short: you have a function that receives one or more parameters. You then apply one or more known parameters to that function without already executing it. After that you get a function reference to a new function that will call the original function with the applied parameters.

    One situation in which I find it useful to use curried functions is with completion handlers. Imagine you have a function that makes a http request and looks something like this:

    func doGET(url: String, completionHandler: ([String]?, NSError?) -> ()) {
        // do a GET HTTP request and call the completion handler when receiving the response
    }
    

    This is a pretty common pattern that you see with most networking libraries as well. We can call it with some url and do a bunch of things in the completion handler:

    doGET("http://someurl.com/items?all=true", completionHandler: { results, error in
        self.results = results
        self.resultLabel.text = "Got all items"
        self.tableView.reloadData()
    })
    

    The completion handler can become a lot more complex and you might want to reuse it in different places. Therefore you can extract that logic into a separate function. Luckily with Swift, functions are just closures so we can immediately pass a completion handler function to the doGET function:

    func completionHandler(results: [String]?, error: NSError?) {
        self.results = results
        self.resultLabel.text = "Got all items"
        self.tableView.reloadData()
    }
    
    func getAll() {
        doGET("http://someurl.com/items?all=true", completionHandler)
    }
    
    func search(search: String) {
        doGET("http://someurl.com/items?q=" + search, completionHandler)
    }
    

    This works well, as long as the completion handler should always do exactly the same thing. But in reality, that's usually not the case. In the example above, the resultLabel will always display "Got all items". Let's change that into "Got searched items" for the search request:

    func search(search: String) {
        doGET("http://someurl.com/items?q=" + search, {results, error in
            self.completionHandler(results, error: error)
            self.resultLabel.text = "Got searched items"
        })
    }
    

    This will work, but it doesn't look very nice. What we actually want is to have this dynamic behaviour in the completionHandler function. We can change the completionHandler in such a way that it accepts the text for the resultLabel as a parameter and then returns the actual completion handler as a closure.

    func completionHandler(text: String) -> ([String]?, NSError?) -> () {
        return {results, error in
            self.results = results
            self.resultLabel.text = text
            self.tableView.reloadData()
        }
    }
    
    func getAll() {
        doGET("http://someurl.com/items?all=true", completionHandler("Got all items"))
    }
    
    func search(search: String) {
        doGET("http://someurl.com/items?q=" + search, completionHandler("Got searched items"))
    }
    

    And as it turns out, this is exactly what we can also do using currying. We just need to add the parameters of the actual completion handler as a second parameters group to our function:

    func completionHandler(text: String)(results: [String]?, error: NSError?) {
        self.results = results
        self.resultLabel.text = text
        self.tableView.reloadData()
    }
    

    Calling this with the first text parameter will not yet execute the function. Instead it returns a new function that takes [String]? and NSError? as parameters. Once that function is called, the completionHandler body is finally executed.
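The same two-step behaviour can be sketched with the closure-returning form (the names and the String return value here are purely illustrative, chosen so the result is easy to inspect):

```swift
// Illustrative two-step application: the first call binds `text`,
// and only the second call runs the handler body.
func makeHandler(_ text: String) -> ([String]?, Error?) -> String {
    return { results, _ in
        return "\(text): got \(results?.count ?? 0) items"
    }
}

let handler = makeHandler("Search")     // step 1: returns a new function, nothing runs yet
let message = handler(["a", "b"], nil)  // step 2: the body finally executes
```

`message` ends up as `"Search: got 2 items"`; swapping the first argument changes the bound text without touching the handler logic.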

    You can create as many levels of this currying as you want. And you can also leave the last parameter group empty just to get a reference to the fully applied function. Let's look at another example. We have a simple function that sets the text of the resultLabel:

    func setResultLabelText(text: String) {
        resultLabel.text = text
    }
    

    And for some reason, we need to call this method asynchronously. We can do that using the Grand Central Dispatch functions:

    dispatch_async(dispatch_get_main_queue(), {
        self.setResultLabelText("Some text")
    })
    

    Since the dispatch_async function only accepts a closure without any parameters, we need to create an inner closure here. If setResultLabelText were a curried function, we could fully apply it with the parameter and get a reference to a function without parameters:

    func setResultLabelText(text: String)() { // now curried
        resultLabel.text = text
    }
    
    dispatch_async(dispatch_get_main_queue(), setResultLabelText("Some text"))
    

    But you might not always have control over such functions, for example when you're using third-party libraries. In that case you cannot change the original function into a curried function. Or you might not want to change it, since you're already using it in many other places and you don't want to break anything. In that case we can achieve something similar by creating a function that creates the curried function for us:

    // defined in global scope
    func curry<T>(f: (T) -> (), arg: T)() {
        f(arg)
    }
    

    We can now use it as following:

    func setResultLabelText(text: String) {
        resultLabel.text = text
    }
    
    dispatch_async(dispatch_get_main_queue(), curry(setResultLabelText, "Some text"))
    

    In this example it might be just as easy to go with the inner closure, but being able to pass around partially applied functions is very powerful and is already used in many programming languages.

    Unfortunately the last example also shows a significant drawback of the way currying is implemented in Swift: you cannot simply curry normal functions. It would be great to be able to curry any function that takes multiple parameters, instead of having to explicitly create curried functions. Another drawback is that you can only curry in the declared order of parameters. That doesn't allow you to do reverse currying (e.g. applying only the last parameter), let alone applying an arbitrary parameter regardless of its position. Hopefully the Swift language will evolve here and gain more powerful currying features.
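Until the language grows such features, generic helpers can work around both limitations. A sketch using Swift generics — `curry2`, `flip`, and `concat` are our own illustrative names, not standard library functions:

```swift
// curry2 turns an ordinary (A, B) -> C function into A -> B -> C,
// so any two-argument function can be partially applied.
func curry2<A, B, C>(_ f: @escaping (A, B) -> C) -> (A) -> (B) -> C {
    return { a in { b in f(a, b) } }
}

// flip swaps the argument order; currying flip(f) gives a form of
// "reverse currying" by binding f's last parameter first.
func flip<A, B, C>(_ f: @escaping (A, B) -> C) -> (B, A) -> C {
    return { b, a in f(a, b) }
}

func concat(_ lhs: String, _ rhs: String) -> String {
    return lhs + rhs
}

let prefixed = curry2(concat)("pre-")        // binds the first argument
let suffixed = curry2(flip(concat))("-end")  // binds the last argument
```

With these helpers, `prefixed("fix")` yields `"pre-fix"` and `suffixed("front")` yields `"front-end"`, without declaring `concat` itself in curried form.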

    Categories: Companies

    [POSTPONED] Portland, Oregon: Agile Smack-Down (Debate)

    James Shore - Thu, 11/06/2014 - 10:02
    06 Nov 2014 James Shore/Calendar

    This event is postponed due to inclement weather. I will update the listing when a new date is chosen.

    The Technology Association of Oregon (TAO) is hosting a debate about Agile between me and Frank D'Andrea next week. Adam Light is hosting. It's going to be a fun, lively event with plenty of thought-provoking moments. Here's the blurb:

    Join us for the TAO Agile Smack-Down, a thought-provoking discussion and debate of the good, bad, and ugly ways to implement Agile methodologies. Adam Light will moderate a rousing dialog between Frank D'Andrea, VP Software Development at Tater Tot Designs, and James Shore, author of The Art of Agile Development.

    Our panel will discuss real-world lessons from applying and misapplying Agile to application development, with time set aside for Q&A from the audience.

    The event is 5:30-8:30 on Thursday, November 13th at Urban Airship in downtown Portland. It's $25 for TAO members and $45 for non-members. Registration and details here.

    Categories: Blogs

    Reddit "Ask Me Anything"

    James Shore - Thu, 11/06/2014 - 10:01
    06 Nov 2014 James Shore/In-the-News

    The Agile subreddit on Reddit invited me to do an "Ask Me Anything" (AMA) earlier this year. We had a great turnout and discussed many interesting topics.

    Read it here.

    Categories: Blogs
