
Feed aggregator

Testomato monitors your website and alerts you if anything important breaks

TestDriven.com - Fri, 04/17/2015 - 23:00
Monitor both your staging and production environments to catch problems immediately. Instead of worrying, know when something is broken so you can fix it fast. http://www.testomato.com/
Categories: Communities

Topics for Lunch-N-Learn

Agile Complexification Inverter - Fri, 04/17/2015 - 19:57


Brainstorming a list of topics for a Scrum/Agile lunch-N-learn session.


  • Slicing Stories – resources to slice vertical stories of value
  • Story Writing techniques: with Q&A based upon participants' real stories
  • Estimation techniques: Affinity Estimation; T-shirt sizing -> converting to numbers; Planning Poker (the rule book)
  • Team building tools: Infinite Loops; Helium Stick; Warp Speed; Pair Drawing, etc.
  • Definition of Done/Ready exercise
  • Release Planning: how to derive duration with a complicated backlog
  • Agile Library Initiation: bring books, make the rules, get funding, 1, 2, 3, GO!
  • Management 3.0 Book Club - join a group reading the best Agile book written.
  • Making Visual Information Radiators - define Radiator/Cooler; elements of a Scrum board
  • Aspects of an effective Product Backlog
  • Agile Portfolio Planning - tools and techniques; estimation, cost of delay, prioritization, deciding what NOT to do
  • The principle of TDD via LEGO building; anyone can learn the power of test-first development
  • Does your development rest on a SOLID foundation? - an overview of the SOLID principles
  • Collaboration Games to understand the customer; 12 Innovation Games; other resources
  • User Story Maps technique to achieve a higher-level understanding of the backlog
  • Launching a Team; what's required, best practices, examples and techniques
  • Team Practices: a collection of quick tools to increase teamwork and collaboration
  • Backlog Prioritization techniques: Cost of Delay, Perceived ROI, Gut Feeling, Loudest Yeller

see also:
http://www.improvingenterprises.com/services/applied-training/lunch-n-learns/
Categories: Blogs

Saga Implementation Patterns: Singleton

Jimmy Bogard - Fri, 04/17/2015 - 17:38

NServiceBus sagas are great tools for managing asynchronous business processes. We use them all the time for dealing with long-running transactions, integration, and even places we just want to have a little more control over a process.

Occasionally we have a process where we really only need one instance of that process running at a time. In our case, it was a process to manage periodic updates from an external system. In the past, I’ve used Quartz with NServiceBus to perform job scheduling, but for processes where I want to include a little more information about what’s been processed, I can’t extend the Quartz jobs as easily as NServiceBus saga data. NServiceBus also provides a scheduler for simple jobs but they don’t have persistent data, which for a periodic process you might want to keep.

Regardless of why you’d want only one saga entity around, with a singleton saga you run into the issue of a Start message arriving more than once. You have two options here:

  1. Create a correlation ID that is well known
  2. Force a creation of only one saga at a time

I didn’t really like the first option, since it requires whoever starts the saga to provide some bogus correlation ID, and never ever change that ID. I don’t like things that I could potentially screw up, so I prefer the second option. First, we create our saga and saga entity:

public class SingletonSaga : Saga<SingletonData>,
    IAmStartedByMessages<StartSingletonSaga>,
    IHandleTimeouts<SagaTimeout>
{
    protected override void ConfigureHowToFindSaga(
    	SagaPropertyMapper<SingletonData> mapper)
    {
    	// no-op
    }

    public void Handle(StartSingletonSaga message)
    {
        if (Data.HasStarted)
        {
            return;
        }

        Data.HasStarted = true;
        
        // Do work like request a timeout
        RequestTimeout(TimeSpan.FromSeconds(30), new SagaTimeout());
    }
    
    public void Timeout(SagaTimeout state)
    {
    	// Send message or whatever work
    }
}

Our saga entity has a property “HasStarted” that’s just used to track that we’ve already started. Our process in this case is a periodic timeout and we don’t want two sets of timeouts going. We leave the message/saga correlation piece empty, as we’re going to force NServiceBus to only ever create one saga:

public class SingletonSagaFinder
    : IFindSagas<SingletonData>.Using<StartSingletonSaga>
{
    public NHibernateStorageContext StorageContext { get; set; }

    public SingletonData FindBy(StartSingletonSaga message)
    {
        return StorageContext.Session
            .QueryOver<SingletonData>()
            .SingleOrDefault();
    }
}

With our custom saga finder we only ever return the one saga entity from persistent storage, or nothing. This, combined with the guard against re-running first-time logic in our StartSingletonSaga handler, ensures we only ever do the first-time logic once.

That’s it! NServiceBus sagas are handy because of their simplicity and flexibility, and implementing something like a singleton saga is just about as simple as it gets.


Categories: Blogs

Is LeSS more than SAFe?

Xebia Blog - Fri, 04/17/2015 - 15:48

(Large) Dutch companies looking for a way to scale up the benefits their Agile teams deliver mostly use the Scaled Agile Framework (SAFe) as their reference model. The model is set up to be very accessible, for managers too, and training and certified consultants are readily available. As early as 2009, Craig Larman and Bas Vodde described their experiences applying Scrum in large organizations (among them Nokia) in their books 'Scaling Lean & Agile Development' and 'Practices for Scaling Lean & Agile Development'. They called the method Large Scale Scrum, LeSS for short.
LeSS has led a quiet existence in recent years. Recently it was decided to put this valuable body of thought back in the spotlight. A third book is due this summer, the site less.works has been launched, a training tour has started, and Craig and Bas are appearing at the leading conferences. Bas, for instance, will give a keynote at Xebicon 2015 in Amsterdam on June 4. Is LeSS more or less than SAFe? Or more or less SAFe?

What is LeSS?
LeSS, then, is a method for organizing a large(r) organization around Agile teams. As the name gives away, Scrum is the starting point. It comes in two flavours: 'plain' LeSS, for up to 8 teams, and LeSS Huge, for 8 teams and more. LeSS is built on mandatory rules, for example: "An Overall Retrospective is held after the Team Retrospectives to discuss cross-team and system-wide issues, and create improvement experiments. This is attended by Product Owner, ScrumMasters, Team Representatives, and managers (if there are any).” In addition, LeSS has principles (design criteria). The principles form the frame of reference for making the right design decisions. Finally, there are the Guidelines and Experiments: things that have proven successful, or not, in practice at real organizations. Beyond the basic framework, LeSS goes deeper into:

  • Structure (the organizational structure)
  • Management (the changing role of management)
  • Technical Excellence (strongly based on XP and Continuous Delivery)
  • Adoption (the transformation to the LeSS organization).

LeSS in a nutshell
The foundation of LeSS is that Large Scale Scrum = Scrum! Like SAFe, LeSS looks at how Scrum can be applied to a group of, say, 100 people. LeSS stays closest to Scrum: there is one sprint, with one Product Owner, one product backlog, one planning and one sprint review, producing one product. This differs from SAFe, which defines an inflated sprint (the Program Increment). To make this single-sprint implementation work you need, besides a very strong whole-product focus, a technical platform that supports it. Where SAFe pragmatically allows a gradual introduction of Agile at Scale, LeSS is stricter in its ready-to-start requirements. A structure must be put in place that breaks the culture of the 'contract game': the culture of overcommitment, pressure, ambiguity, surprises, and blame-driven accountability.

LeSS is more and less SAFe
The recent effort to make LeSS more accessible will undoubtedly lead to sharply increasing attention for this appealing approach to organizing Agile at Scale. LeSS differs from SAFe, although the two models also have much in common, especially in their sources of inspiration.
The two models take a different approach, for example regarding:

  • how to apply Scrum to a cluster of teams
  • the approach to the transformation to Agile at Scale
  • how solutions are presented: SAFe prescribes the solution, LeSS gives the pros and cons of the choices

It is also striking that SAFe (with its portfolio level) explains how to make the connection between strategy and backlogs, whereas LeSS devotes more attention to the transformation (Adoption) and to Agile at very large scale (LeSS Huge).

Whether an organization chooses LeSS or SAFe will depend on what fits the organization best: what matches its ambition for change and its 'agility' at the moment of starting. Strongly 'blue' organizations will choose SAFe; organizations that dare to take a convincing step towards an Agile organization will sooner choose LeSS. In either case, it pays to take note of the solutions the other method offers.

Categories: Companies

Clickbait is evil!

Scrum Breakfast - Fri, 04/17/2015 - 09:47
Anyone who has taken one of my Scrum classes knows that I believe that multitasking is evil! I have come to realize that clickbait is evil too.

Why? For the same reason. Clickbait, like multitasking, destroys productivity. At least for my own purposes, I have decided to do something about it, and I am wondering if other people feel the same way.

What is clickbait? Let's say you open an article on a reputable site, like CNN.com. See all those links on the right side, like Opinion, More Top Stories, Promoted Stories, More from CNN? That's clickbait. My guess is that a third of any given web page consists of catchy headlines whose sole purpose is to get you to spend more time on the site (or perhaps, to cash in on cost-per-click syndication schemes, to get you to go to some other site). By the time you get two-thirds of the way down the page, 100% of the content is usually clickbait.
What is evil?
What do I mean by evil? Evil things are bad for you. Like weeds in the garden or corrupt politicians, you'll never get rid of evil entirely, but if you don't keep the weeds under control, you won't have a garden any more. So we need to keep evil things in check, lest we suffer the consequences. In this case the consequence is a massive amount of wasted time (at least for me)!
Why is Multitasking Evil?
I have long known that if you have two goals, working on them in parallel slows you down. If goal A takes a month, and goal B takes a month, then working on A and B in parallel will take at least two months before either goal is finished, and probably longer. So focusing on one thing at a time usually gives better results. This is why focus is a core value of Scrum.
It turns out the situation with multitasking is much worse than I thought.
I recently attended a talk by Prof Lutz Jäncke, the neuropsychologist at the ETH, on why teenagers are the way they are. (The short answer: they are not evil, they are drawn that way. They will be people again when their brains have finished developing -- sometime around 20 years old. But I digress.)
Listening to a neuropsychologist for an hour was very challenging! My brain was very tired after his talk, but one point really stuck out: multitasking makes you worse at multitasking! To process information effectively, we need to filter out irrelevant information. By responding to every stimulus that comes in, we lose the ability to filter junk.
He also asked, have you ever gone to do something on the Internet, lost track of what you are doing and then wasted a tremendous amount of time? You bet! Every day! Why is that? Clickbait. Catchy headlines and dramatic pictures pique my curiosity to send me to the next page.
I realized this was true, and I am now trying to turn down the interruptions on my computer and other devices.
Using Adblock Plus to fight clickbait
I have used ABP for a long time to block most ads. But the standard filters only target ads, not clickbait. I discovered that you can block not only links but also specific HTML elements. After a bit of experimenting with the block element feature, I was able to filter the clickbait sections of the news and entertainment sites I visit most.
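For reference, Adblock Plus element-hiding filters take the form `domain##CSS-selector`. A couple of hypothetical rules (the selectors here are made up; real ones depend on each site's markup) might look like:

```
example.com##.promoted-stories
example.com##div.more-from-site
```

The block element dialog in ABP generates selectors like these for you, so you rarely have to write them by hand.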
I was amazed at the difference in how much less clutter and fewer distractions I encountered!
Do you have this problem? Would you like to use my filter list? I don't know if it is worth packaging these filters for distribution, or whether there is already a filter set somewhere that addresses this problem. So I have simply published the list and installation instructions as a Google Doc: http://tinyurl.com/clickbait-filter. It's still pretty short, but if you send me additions, I will integrate them.
Clickbait is evil. I believe reducing clickbait will be good for my performance, and it probably will be for yours as well. If you install it, please put out a tweet like this:
"Just installed @peterstev's clickbait filter! #clickbaitisevil! http://tinyurl.com/clickbait-filter"

Categories: Blogs

R: Think Bayes – More posterior probability calculations

Mark Needham - Thu, 04/16/2015 - 22:57

As I mentioned in a post last week, I’ve been reading through Think Bayes and translating some of the examples from Python to R.

After my first post Antonios suggested a more idiomatic way of writing the function in R so I thought I’d give it a try to calculate the probability that combinations of cookies had come from each bowl.

In the simplest case we have this function which takes in the names of the bowls and the likelihood scores:

f = function(names,likelihoods) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names,priors,likelihoods)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)  
}

We assume a prior probability of 0.5 for each bowl.

Given the following probabilities of different cookies being in each bowl (shown here in the book’s Python notation)…

mixes = {
  'Bowl 1':dict(vanilla=0.75, chocolate=0.25),
  'Bowl 2':dict(vanilla=0.5, chocolate=0.5),
}

…we can simulate taking one vanilla cookie with the following parameters:

Likelihoods = c(0.75,0.5)
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$names == "Bowl 1"]
[1] 0.6
> res$posteriors[res$names == "Bowl 2"]
[1] 0.4
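The 0.6/0.4 split is easy to verify by hand. As an illustration (not part of the original R code), the same single-observation update in a few lines of Python:

```python
# Bayes update for drawing one vanilla cookie from one of two bowls.
priors = {"Bowl 1": 0.5, "Bowl 2": 0.5}        # equal prior for each bowl
likelihoods = {"Bowl 1": 0.75, "Bowl 2": 0.5}  # P(vanilla | bowl)

# Multiply prior by likelihood, then normalize so the posteriors sum to 1.
unnormalized = {b: priors[b] * likelihoods[b] for b in priors}
total = sum(unnormalized.values())
posteriors = {b: u / total for b, u in unnormalized.items()}

print(posteriors)  # {'Bowl 1': 0.6, 'Bowl 2': 0.4}
```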

If we want to simulate taking 3 vanilla cookies and 1 chocolate one we’d have the following:

Likelihoods = c((0.75 ** 3) * (0.25 ** 1), (0.5 ** 3) * (0.5 ** 1))
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$names == "Bowl 1"]
[1] 0.627907
> res$posteriors[res$names == "Bowl 2"]
[1] 0.372093

That’s a bit clunky and the intent of ‘3 vanilla cookies and 1 chocolate’ has been lost. I decided to refactor the code to take in a vector of cookies and calculate the likelihoods internally.

First we need to create a data structure to store the mixes of cookies in each bowl that we defined above. It turns out we can do this using a nested list:

bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
 
> Mixes
$`Bowl 1`
  vanilla chocolate 
     0.75      0.25 
 
$`Bowl 2`
  vanilla chocolate 
      0.5       0.5

Now let’s tweak our function to take in observations rather than likelihoods and then calculate those likelihoods internally:

likelihoods = function(names, mixes, observations) {
  # Start every bowl with a score of 1 and multiply in each observation
  scores = rep(1, length(names))
  names(scores) = names
 
  for(name in names) {
    for(observation in observations) {
      scores[name] = scores[name] * mixes[[name]][observation]
    }
  }
  return(scores)
}
 
f = function(names, mixes, observations) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names, priors)
 
  dt$likelihoods = likelihoods(names, mixes, observations)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)
}

And if we call that function:

Names = c("Bowl 1", "Bowl 2")
 
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
Mixes
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
 
res=f(Names,Mixes,Observations)
 
> res$posteriors[res$names == "Bowl 1"]
[1] 0.627907
 
> res$posteriors[res$names == "Bowl 2"]
[1] 0.372093

Exactly the same result as before! #win
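For comparison, here is a minimal Python sketch of the same multi-observation update (an illustration of the technique, not the book's code): multiply the per-cookie likelihoods for each bowl, then normalize.

```python
# Posterior after observing 3 vanilla cookies and 1 chocolate.
mixes = {
    "Bowl 1": {"vanilla": 0.75, "chocolate": 0.25},
    "Bowl 2": {"vanilla": 0.5, "chocolate": 0.5},
}
observations = ["vanilla", "vanilla", "vanilla", "chocolate"]

posteriors = {}
for bowl, mix in mixes.items():
    score = 0.5  # equal prior for each bowl
    for cookie in observations:
        score *= mix[cookie]  # multiply in each observation's likelihood
    posteriors[bowl] = score

total = sum(posteriors.values())
posteriors = {b: s / total for b, s in posteriors.items()}

print(round(posteriors["Bowl 1"], 6))  # 0.627907
print(round(posteriors["Bowl 2"], 6))  # 0.372093
```

The result matches the R version above, which is a useful cross-check when refactoring.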

Categories: Blogs

Join Me in Australia in June!

Agile Product Owner - Thu, 04/16/2015 - 17:56

Hi,

This June, I’ll be busy in Australia, presenting a SAFe SPC certification class and speaking at Agile Australia as an invited speaker, where I’ll lead workshops and present “Nine Immutable Principles of Lean-Agile Development.”

Agile Australia 2015

Invited Speaker Track: Nine Immutable Principles of Lean-Agile Development

Half-day Workshops: Foundations of the Scaled Agile Framework

Sydney 16 June (two workshops); Melbourne 19 June 2015 (AM only).

In this half-day workshop, you’ll learn the foundations of the Scaled Agile Framework (SAFe), its values and underlying principles of Agile, Lean and Product Development Flow. This course will enable you to leave with an understanding of how the principles and practices of SAFe support large scale Agile Software Programs,  Lean Systems Engineering and Agile Portfolio Management. You’ll also learn about effective strategies for implementing SAFe, including the critical role that Leadership plays. These rare half-day workshops provide a personal setting that helps assure effective knowledge exchange.

See detailed descriptions and register at www.agileaustralia.com.au/2015/workshops. Earlybird registration closes on Friday 24 April 2015, and you can get a further discount with the promo code AA15-SFND.

SAFe Program Consultant Certification June 22-25, 2015

This is the official four-day SAFe Certification Program, which results in the SAFe Program Consultant (SPC) certification. I don’t often get the chance to deliver this workshop myself anymore, especially in Asia-Pacific, so I encourage you to attend this unique event. We’ll be working with our local Gold Partner, Context Matters, and you can contact Mark or Em for more information. Early bird pricing is available until 22 May 2015.

I hope to see you in Australia!

—Dean

Categories: Blogs

How We Can Inspire a Million Children to Become Engineers

Agile Management Blog - VersionOne - Thu, 04/16/2015 - 14:37

CoderDojo

We can all agree that inspiring children to become engineers and scientists is of the utmost importance. Making a difference at the local level, however, can seem intimidating. It doesn’t have to be so difficult.

Learn how you can help us inspire a million children to become engineers by providing just a few hours a month and a safe, collaborative meeting space.

The Challenge

A few years ago Robert Holler, the president and CEO of VersionOne, challenged VersionOne employees to come up with an idea that would help children in our local community learn about programming and technology. This seemed like an exciting, though daunting, community service project.

At VersionOne we feel it is an important responsibility to help the community. That doesn’t mean just the agile community, but also the local community. In fact, Gartner recently recognized our strong community presence in the Magic Quadrant for Application Development Lifecycle Management report.

Typically when we do local community projects they are hosted by charities that manage projects. This project, on the other hand, would be completely managed by VersionOne employees. At first, this seemed like it might take a lot more time and effort than any of us really had. Nonetheless, we were very excited to try to make it work.

There were a lot of ideas that would need varying degrees of resources, but after a little research we discovered the global CoderDojo movement. It was a movement started in Ireland in 2011 by an eighteen-year-old student and a serial entrepreneur. They developed a vision for creating a safe and collaborative environment in which experienced adult mentors help students who want to learn about technology and programming. Their model was fairly lean, making it easy to launch. Parents bring their kids and their own laptops, so we just needed space and mentors to get started.

Since VersionOne is an agile lifecycle management company, we were attracted to the lean nature of this program. Soon after, CoderDojo Ponce Springs was born!

How It Works

The way it works is that parents bring their kids, ages 7 through 17, with laptops in hand to a meeting place. (In our case, we also have a limited number of laptops that have been donated by VersionOne for kids who don’t have a laptop). Volunteers help the students learn a programming language or other creative tools.

There are tons of great free resources like TeachingKidsProgramming.com, Khan Academy, Codecademy, CODE.org, Scratch, Blockly Games, and more. This makes it less burdensome for new volunteers to help because they don’t need to spend hours and hours creating their own resources.

However, a number of our volunteers have devoted additional time to creating step-by-step tutorials and interactive tools tailored to students who have been through the beginner materials online and want to do more challenging things, like building plugins for Minecraft or learning to build HTML5 JavaScript games.

Student-Driven Learning

We should stress, however, that the bulk of the work is on the students themselves! Mentors are there to assist and inspire, but not to provide long, drawn-out lectures. Students rapidly get hands on with the technologies and help each other learn. It’s a theme that’s woven throughout the CoderDojo movement. One of its own mentors is Sugata Mitra, who has conducted some amazing experiments in child-driven learning. Check out his TED talks to see what he discovered about the innate curiosity and capacity for learning and teaching that children possess.

Want to Start Your Own CoderDojo?

We share code and resources in GitHub in this open source and forkable CoderDojoPonceSprings repository. Feel free to create a copy of it and start one in your own community! Our Dojos take place in downtown Atlanta and in Alpharetta, Georgia, but one of our volunteers cloned our content and started a brand new CoderDojo in Henry County, about 30 minutes south of Atlanta.

Impact

It has been exciting to see the program still going strong after more than two years. The majority of the students are returning students, a good indication of the value they are getting from the program. In fact, many students have been participating since the program began, and are becoming quite advanced. These are the students who have encouraging parents and peers outside of the Dojo as well, because it takes more than just attending a Dojo to become really advanced.

What a CoderDojo does best is provide a safe, collaborative environment where students who are ready and willing to learn can meet other enthusiastic peers, collaborate, and increase their knowledge. Research has shown that when someone is learning something new, they often learn best from peers who are just slightly ahead. A CoderDojo also gives students who want to help others an opportunity to start giving back immediately. In one case, we had a thirteen-year-old student serve as a mentor to much younger students at a special event with students from an Atlanta elementary school.

A Million Children

Making a difference in the world can seem like a daunting feat, but the greatest lesson that I think has come out of our CoderDojo project is that by simply providing some space and time, we can inspire the next generation to get excited about programming and technology.

We probably have 300 different children come to our program each year. Over the next five years we hope to inspire 1,500 children in our program. If each of the three chapters that launched after ours has the same results, together we will inspire 4,500 children. And if 223 companies are willing to join us, we all can inspire 1,000,000 children over the next five years.
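The arithmetic behind the goal, following the post's own figures (a back-of-the-envelope sketch, not an official projection):

```python
# Back-of-the-envelope check of the "million children" math.
students_per_year = 300                 # children reached per Dojo per year
years = 5
per_dojo = students_per_year * years    # 1,500 children per Dojo in 5 years
chapters = 3                            # chapters counted in the post
across_chapters = per_dojo * chapters   # 4,500 children across the chapters
companies = 223
print(across_chapters * companies)      # 1,003,500: roughly a million
```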

Volunteers in our Dojo are currently collaborating on tools and content to make starting a new CoderDojo even easier. If you’re interested in learning more or starting your own CoderDojo, email us at coderdojo@versionone.com.

So what do you say: will you help us inspire the next generation of software programmers?

Categories: Companies

Experimenting with Swift and UIStoryboardSegues

Xebia Blog - Wed, 04/15/2015 - 22:58

Lately I've been experimenting a lot with doing things differently in Swift. I'm still trying to find best practices and discover completely new ways of doing things. One example of this is passing objects from one view controller to another through a segue in a single line of code, which I will cover in this post.

Imagine two view controllers, a BookViewController and an AuthorViewController. Both are in the same storyboard and the BookViewController has a button that pushes the AuthorViewController on the navigation controller through a segue. To know which author we need to show on the AuthorViewController we need to pass an author object from the BookViewController to the AuthorViewController. The traditional way of doing this is giving the segue an identifier and then setting the object:

class BookViewController: UIViewController {

    var book: Book!

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowAuthor" {
            let authorViewController = segue.destinationViewController as! AuthorViewController
            authorViewController.author = book.author
        }
    }
}

class AuthorViewController: UIViewController {

    var author: Author!
}

And in case we would use a modal segue that shows an AuthorViewController embedded in a navigation controller, the code would be slightly more complex:

if segue.identifier == "ShowAuthor" {
  let authorViewController = (segue.destinationViewController as! UINavigationController).viewControllers[0] as! AuthorViewController
  authorViewController.author = book.author
}

Now let's see how we can add an extension to UIStoryboardSegue that makes this a bit easier and works the same for both scenarios. Instead of checking the segue identifier, we will just check the type of the destination view controller. We assume that the same object is passed on based on that type, even when multiple segues go to it.

extension UIStoryboardSegue {

    func destinationViewControllerAs<T>(cl: T.Type) -> T? {
        return destinationViewController as? T ?? (destinationViewController as? UINavigationController)?.viewControllers[0] as? T
    }
}

What we've done here is add the method destinationViewControllerAs to UIStoryboardSegue; it checks whether the destinationViewController is of the generic type T. If it's not, it checks whether the destinationViewController is a navigation controller whose first view controller is of type T. If it finds either one, it returns that instance of T. Since the result can also be nil, the return type is an optional T.

It's now incredibly simple to pass on our author object to the AuthorViewController:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  segue.destinationViewControllerAs(AuthorViewController.self)?.author = book.author
}

No need to check any identifiers anymore, and less code. Now I'm not saying that this is the best way to do it, or that it's even better than the traditional way. But it does show that Swift offers us new ways of doing things, and it's worth experimenting to find best practices.

The source code of the samples and extension is available on https://github.com/lammertw/StoryboardSegueExtension.

Categories: Companies

11 years at ThoughtWorks

thekua.com@work - Wed, 04/15/2015 - 22:45

I had planned to write a 10 years at ThoughtWorks post but was busy on a sabbatical, learning a real language (German!). This year, I decided to get around to writing an anniversary post. One of the current impressive benefits for long-time employees is a 12-week paid break (mandated by law in Australia, though not around the world).

When I think about my time here at ThoughtWorks, I can’t believe that I have been here so long. I still remember, after graduating from University, thinking how unlikely I would stay with a company for more than two years because I wanted to learn, change and grow. I thought that would be difficult in a permanent position in any other company. I wanted to stay in the zone but also find an opportunity to do interesting work. Consulting proved to be a good middle ground for what I was looking for.

What a ride it has been. Oh, and it’s still going, too :)

Facts

Like most companies, ThoughtWorks has changed and evolved over time.

  • When I started, we had (I’m guessing) about 10 offices in four countries. As of this post, we have 30 offices in 12 countries (Australia, Brazil, Canada, China, Ecuador, Germany, India, Singapore, South Africa, Uganda, the United Kingdom, and the United States) in some places I would never have guessed we would have had offices.
  • When I started, we had maybe 500 employees worldwide. We now have between 2,500 and 3,000 people.
  • When I started, we were pretty much a consulting firm doing software development projects. Since then we have added a product division and a highly integrated UX capability, and we are influencing companies at the CxO level, which means a different type of consulting (whilst still keeping to our core offering of delivering effective software solutions).

We don’t always get things right, but I do see that ThoughtWorks takes risks, which means trying things and being prepared to fail or succeed. Although we have grown, I have found it interesting to see how our culture remains in some ways very consistent, yet adapts to the local market cultures and constraints of wherever we operate. When I have visited our Brazilian offices, it felt like ThoughtWorks with a Brazilian flavour; likewise when I visit our German offices.

Observations

I find it constantly interesting to talk to alumni, or to people who have never worked for ThoughtWorks, and hear their perceptions. Some alumni have a very fixed perception of what the company is like (based on their time with it), and it’s interesting to contrast that view with my own, given that the company constantly changes.

We are still (at least here in the UK) mostly a consulting firm, so some of the normal challenges of running a consulting business still apply, both from an operational perspective and for consultants out in the field. Working on client sites often means travel, and we are still affected by the ebbs and flows of customer demand around client budgeting cycles.

Based on my own personal observations (YMMV), we as a company have become much better at leadership development and support (although there is always room to improve). I also find that we tend, on average, to land more aligned clients and to have opportunities for greater impact. We have become better at expressing our values (The Three Pillars) and at finding clients whom we can help and who are ready for that help.

It is always hard to see colleagues move on, as it means building new relationships, but that is a reality in all firms, and even more so in consulting firms. After coming back from sabbatical I had to deal with quite a bit of change: our office had moved, a significant part of our management team had changed, and of course there were lots of new colleagues I hadn’t met. At the same time, I was surprised to see how many long-time employees (and not just operational people) were still around, and it was very comforting to reconnect with them and renew those relationships.

Highlights

I’ve been particularly proud of some of the impact and some of the opportunities I have had. Some of my personal highlights include:

  • Being the keynote speaker for the 2000-attendee Agile Brazil conference.
  • Publishing my first book, The Retrospective Handbook, a book that makes the agile retrospective practice even more effective.
  • Publishing my second book, Talking with Tech Leads, a book that collects the experiences of Tech Leads around the world aimed at helping new or existing Tech Leads improve their skills.
  • Developing a skills training course for Tech Leads that we run internally. It’s a unique experiential course aimed at raising awareness of and developing the skills developers need when they play the Architect or Tech Lead roles. I may even have an opportunity to run it externally this year.
  • Being considered a role model for how ThoughtWorks employees can have an impact on clients, the industry, and our own company.
Categories: Blogs

Laloux Cultural Model and Agile Adoption

Agile For All - Bob Hartman - Wed, 04/15/2015 - 19:10
Laloux and Agile Adoption

My Story

I had invested years of my life in a ground-up, large-scale agile adoption. The early years of the adoption seemed to go at breakneck speed. Teams were adopting Scrum with great success. People were feeling more engaged, products were getting better, and the company was benefiting. And then it felt like we hit a wall. Despite what felt to me like a groundswell of support from teams, managers, and directors, we were struggling to make the leap to real organizational agility.

The Breakthrough

While reviewing a draft of a good friend’s upcoming book, a single reference leaped off the page:

“There is … evidence that the developmental stage of the CEO determines the success of large-scale transformation programs.” (Tolbert, cited by Laloux, 2014)

I immediately bought and read Frederic Laloux’s book Reinventing Organizations, which provides a comprehensive overview of how humans have organized in groups over the centuries. The prevailing perspective today (what Laloux labels “orange”) seemed to describe my organization in an almost clairvoyant way. It helped me make sense of what my organization valued the most, how I could continue to be effective in my role as agile transformation leader, and what was likely possible given our cultural values. Keep reading to learn more…

Laloux’s Culture Model

I created the following video overview of Laloux’s cultural model and how it applies to Agile adoption in various types of organizations. It’s kind of a whirlwind tour, but I wanted to cover the basics in as succinct a way as possible. Feel free to pause and ponder as you digest the information.


The Rest of the Story…

Did one of the descriptions/colors stand out to you as the prevailing perspective at your organization? How about for you personally? My story takes an interesting twist after reading Laloux’s book. The prevailing perspective at the executive level seemed firmly rooted in Orange. Personally, I felt I was somewhere between Green and Teal. The difference between what I valued most and what the organization valued most helped me understand why I had been so frustrated with the wall we seemed to have hit at the organizational level.

For me, it seemed I had three options:

  1. Acknowledge the value in an Orange perspective (there is value in every perspective), and work hard to help my organization be a shining example of Orange at its most vibrant.
  2. Seek out leaders with a Green perspective and work with them to try to expand the influence of Green values in the organization.
  3. Leave the organization and seek out Green or Teal organizations where I could grow personally at a faster pace.

Option one had been my journey for the first several years of my work leading our agile transformation. Option two had been my approach for the previous 18 months, but it seemed to stall when we needed executive-level support for the types of changes required for a vertical transformation from Orange to Green. It felt like a waiting game: I could work with Green leaders in the hope that at some point the current CEO would either evolve personally or, as happens frequently in large organizations, a new CEO would eventually come along, and all of the Green-level cultural work would be unlocked and begin to flourish. But this felt like a crapshoot, with no way to know what perspective a new CEO might have or how long it might be before such a change occurred.

This left me with Option three, and that’s the option I took. While I think I could have provided value by helping the organization be the best version of Orange it could be, for my own personal growth I really wanted to advance what’s possible and see how I can add value in a Green or Teal organization. I joined Agile for All knowing that they had been doing some really cool work with organizations adopting a Green/Teal set of practices, and I’m excited to see where we can go with such an approach.

So Now What?

First of all, definitely check out Laloux’s book. He provides fantastic details of how Teal organizations do awesome things.

If you are in a predominantly Amber or Orange organization, we’ve been there! We’ve seen Agile help these organizations get better at what they care about most, be that stability and predictability (Amber), or innovation and competitive advantage (Orange). An Agile mindset and practices will help achieve awesome results, and in a way that is more engaging and fulfilling for the people doing the work.

If you are in a predominantly Green or even a Teal organization, or one interested in moving in that direction, please get in touch! We’d love to hear about how it’s working for you. Whether we can help you out or not, we want to learn more about and help connect organizations taking this approach.

No matter your organization’s primary perspective, if you are interested in learning more about deep, long-lasting organizational transformation then let’s talk! Email me at peter.green@agileforall.com or just respond to this blog post.

If you think others would be interested in this topic, please share it using one or more of the buttons below.

The post Laloux Cultural Model and Agile Adoption appeared first on Agile For All.

Categories: Blogs

Summary of User Stories: The Three “C”s and INVEST

Learn more about our Scrum and Agile training sessions on WorldMindware.com

User Stories Learning Objectives

Become familiar with the “User Story” approach to formulating Product Backlog Items, and with how it can improve the communication of user value and the overall quality of the product by facilitating a user-centric approach to development.

Consider the following

User stories trace their origins to eXtreme Programming, another Agile method with many similarities to Scrum. Scrum teams often employ aspects of eXtreme Programming, including user stories as well as engineering practices such as refactoring, test-driven development (TDD) and pair programming, to name a few. In future modules of this program, you will have the opportunity to become familiar enough with some of these practices to understand their importance in delivering quality products and how you can encourage your team to develop them. For now, we will concentrate on the capability of writing good user stories.

The Three ‘C’s [i]

A User Story has three primary components, each of which begins with the letter ‘C’:

Card

  • The Card, or written text of the User Story, is best understood as an invitation to a conversation. This is an important concept, as it fosters the understanding that in Scrum you don’t have to have all of the Product Backlog Items written out perfectly “up front”, before you bring them to the team. It acknowledges that the customer and the team will be discovering the underlying business/system needs as they are working on it. This discovery occurs through conversation and collaboration around user stories.
  • The Card usually follows a format similar to the one below:

As a <user role> of the product,

I can <action>

So that <benefit>.

  • In other words, the written text of the story, the invitation to a conversation, must address the “who”, “what” and “why” of the story.
  • Note that there are two schools of thought on who the <benefit> should be for. Interaction design specialists (like Alan Cooper) tell us that everything needs to be geared not only towards the user but towards a user Persona with a name, photo, bio, etc. Other experts who are more focused on the testability of the business solution (like Gojko Adzic) say that the benefit should directly address an explicit business goal. Imagine if you could do both at once! You can, and this will be discussed further in more advanced modules.
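To make the three-part card structure concrete, the template above can be sketched in a few lines of Python. This is purely illustrative; the class and field values below are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class StoryCard:
    """Minimal sketch of the written Card; values are illustrative."""
    role: str      # the "who"
    action: str    # the "what"
    benefit: str   # the "why"

    def text(self) -> str:
        # Render the card in the standard template shown above.
        return (f"As a {self.role} of the product,\n"
                f"I can {self.action}\n"
                f"So that {self.benefit}.")

card = StoryCard("frequent traveller", "save my seat preferences",
                 "I can check in with one click")
print(card.text())
```

Note how the three fields map directly onto the “who”, “what” and “why” that the written story must address.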

Conversation

  • The collaborative conversation is facilitated by the Product Owner and involves all stakeholders and the team.
  • As much as possible, this is an in-person conversation.
  • The conversation is where the real value of the story lies and the written Card should be adjusted to reflect the current shared understanding of this conversation.
  • This conversation is mostly verbal but most often supported by documentation and ideally automated tests of various sorts (e.g. Acceptance Tests).

Confirmation

  • The Product Owner must confirm that the story is complete before it can be considered “done”.
  • The team and the Product Owner check the “doneness” of each story in light of the Team’s current definition of “done”.
  • Specific acceptance criteria that differ from the current definition of “done” can be established for individual stories, but the current criteria must be well understood and agreed to by the Team. All associated acceptance tests should be in a passing state.
INVEST [ii]

The test for determining whether or not a story is well understood and ready for the team to begin working on it is the INVEST acronym:

I – Independent

  • The solution can be implemented by the team independently of other stories.  The team should be expected to break technical dependencies as often as possible – this may take some creative thinking and problem solving as well as the Agile technical practices such as refactoring.

N – Negotiable

  • The scope of work should have some flex and not be pinned down like a traditional requirements specification.  As well, the solution for the story is not prescribed by the story and is open to discussion and collaboration, with the final decision for technical implementation being reserved for the Development Team.

V – Valuable

  • The business value of the story, the “why”, should be clearly understood by all. Note that the “why” does not necessarily need to be from the perspective of the user. “Why” can address a business need of the customer without necessarily providing a direct, valuable result to the end user. All stories should be connected to clear business goals.  This does not mean that a single user story needs to be a marketable feature on its own.

E – Estimable

  • The team should understand the story well enough to be able to estimate the complexity of the work and the effort required to deliver the story as a potentially shippable increment of functionality. This does not mean that the team needs to understand all the details of implementation in order to estimate the user story.

S – Small

  • The item should be small enough that the team can deliver a potentially shippable increment of functionality within a single Sprint. In fact, this should be considered as the maximum size allowable for any Product Backlog Item as it gets close to the top of the Product Backlog.  This is part of the concept of Product Backlog refinement that is an ongoing aspect of the work of the Scrum Team.

T – Testable

  • Everyone should understand and agree on how the completion of the story will be verified. The definition of “done” is one way of establishing this. If everyone agrees that the story can be implemented in a way that satisfies the current definition of “done” in a single Sprint and this definition of “done” includes some kind of user acceptance test, then the story can be considered testable.

Note: The INVEST criteria can be applied to any Product Backlog Item, even those that aren’t written as User Stories.
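As a purely illustrative aside (teams apply INVEST through conversation, not code, and the names below are made up for this sketch), the acronym behaves like an all-or-nothing checklist: a story is ready only when every criterion holds.

```python
# The six INVEST criteria, mirroring the sections above.
INVEST = ["independent", "negotiable", "valuable",
          "estimable", "small", "testable"]

def ready_for_sprint(answers):
    """A story is 'ready' only when every INVEST criterion holds.

    `answers` maps each criterion to the team's yes/no judgement;
    any criterion missing from the dict counts as failing.
    """
    failing = [c for c in INVEST if not answers.get(c, False)]
    return not failing, failing

ok, failing = ready_for_sprint({"independent": True, "negotiable": True,
                                "valuable": True, "estimable": True,
                                "small": False, "testable": True})
print(ok, failing)  # False ['small']
```

Here the story fails only the “Small” criterion, which points directly at the splitting techniques discussed next.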

Splitting Stories

Sometimes a user story is too big to fit into a Sprint. Some ways of splitting a story include:

  • Split by process step
  • Split by I/O channel
  • Split by user options
  • Split by role/persona
  • Split by data range

WARNING: Do not split stories by system, component, architectural layer or development process, as this will conflict with the team’s definition of “done” and undermine the team’s ability to deliver potentially shippable software every Sprint.

Personas

Like User Stories, Personas are a tool from interaction design. The purpose of personas is to develop a precise description of our user so that we can write stories describing what that user wishes to accomplish. In other words, a persona is a much more developed and specific “who” for our stories. The more specific we make our personas, the more effective they are as design tools. [iii]

Each of our fictional but specific users should have the following information:

  • Name
  • Occupation
  • Relationship to product
  • Interest & personality
  • Photo

Only one persona should be the primary persona and we should always build for the primary persona. User story cards using personas replace the user role with the persona:

<persona>

can <action>

so that <benefit>.

 

[i] The Card, Conversation, Confirmation model was first proposed by Ron Jeffries in 2001.

[ii] INVEST in Good Stories, and SMART Tasks. Bill Wake. http://xp123.com/articles/invest-in-good-stories-and-smart-tasks/

[iii] The Inmates are Running the Asylum. Alan Cooper. Sams Publishing. 1999. pp. 123-128.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information.

Please share!
Categories: Blogs

Coaching intervention during a team conflict

Agile World - Venkatesh Krishnamurthy - Wed, 04/15/2015 - 14:42

Every team goes through stages of conflict before it stabilizes. Leaders need to be conscious about when and how they intervene during such conflicts. The knee-jerk reaction of a typical leader observing a conflict is to jump in immediately to “fix” the problem. It is highly recommended that they avoid this and take a step back to observe the situation first.

The leaders need to find the appropriate time and context to intervene for coaching. Here are some examples and contexts.

1. Self-organizing teams are in the process of learning. They are testing boundaries and positioning themselves within the team. Conflicts in such situations are inevitable. The leader or coach assigned to the team should avoid intervening in such contexts.

These teams are like butterflies emerging from a pupa. Yes, there is a bit of process, pain and time involved in emerging from the pupa, and one needs a bit of patience. Trying to expedite the process could actually kill the butterfly.

2. Research suggests that creative and innovative work actually needs healthy debate and conflict. Intervention is needed to help the group understand its differences. Often a person or a small group within a larger group might be thinking tangentially. This can lead to conflicts, but it does not mean that anything is wrong. In fact, such conflicts prevent groupthink.

For example, an engineer embedded in a marketing team obviously thinks differently. The engineer could be considered a troublemaker because he/she is different from the rest of the marketing team. In contexts like this, leadership or coach intervention is essential to assist the group.

As a leader, one needs to drill down a little, filter out the noise and study the type of work before intervening. The diversity of the team also needs to be taken into account when dealing with a creative team.


3. If the team is working on routine, repetitive work, a coaching intervention that tries to facilitate discussion could backfire. Instead, a root-cause analysis with management intervention is critical for smooth functioning.

To conclude: if you see or hear a conflict, don’t jump in to fix the problem. Try to understand the context first. Many times, the conflict is actually good for the team, and for the organization in the long run.

Categories: Blogs

Slashdot quote of the year

Indefinite Articles - John Brothers - Wed, 04/15/2015 - 13:46

In the context of this story about Neal Stephenson’s novel Seveneves (neither block-quote nor italicized follow-up are mine):

 

Authors improve with age?

Some do. For example, in many years time, Stephenie Meyer will be dead.

 

Golf clap

Categories: Blogs

April Newsletter: Measuring Predictability, Lean Decision Filter, Flow-driven Product Development

Here’s the April 2015 edition of the LeanKit monthly newsletter. Make sure you catch the next issue in your inbox and subscribe today. Kanban for DevOps: 3 Reasons IT Ops Uses Lean Flow (part 3 of 3) In the final post of this three-part series, Dominica DeGrandis explores why product support needs a voice in product development […]

The post April Newsletter: Measuring Predictability, Lean Decision Filter, Flow-driven Product Development appeared first on Blog | LeanKit.

Categories: Companies

day in, day out

Derick Bailey - new ThoughtStream - Wed, 04/15/2015 - 12:00

You know what I really wanted to do, today? Work on SignalLeaf. It’s not that it *needs* any work right now… it’s been running on auto-pilot for several months, while I’ve had other more urgent things to take care of. But I *wanted* to work on SignalLeaf. And yet, I didn’t get to. There were too many other things that were more urgent.


Motivation Can Be Distraction

People talk about motivation all the time – myself included. We talk about how rare it is and how you should work your butt off when you find yourself motivated.

But this isn’t usually where I find myself, honestly.

I usually find motivation for the thing that I’m not working on, instead. Frankly, it’s easier to be motivated to work on the thing that I’m not doing. I don’t have to actually do the work. I can just sit back, think about how great it’s going to be when I’m done and get excited about it. That’s motivation for me to work on it.

But when I get down to actually doing it – to the point where I have to put in the time, the effort and figure out how to make it work… well, the motivation to do the work is usually fleeting for me. I would rather think about something else that I want to do instead.

The Daily Grind

Motivation is especially difficult when I’ve been on a project for a long time – when the work is constant, and never ending.

When I look at WatchMeCode screencasts, for example, I see nothing but an endless amount of work. I see endless hours spent trying to figure out what would make a good screencast. I see the practice, the setup, the additional practice to make sure I understand what I want to say. I see the more than 100 episodes of video content that I have produced in the last 4 years and the effort that has gone into it.

And as predictable as the changing seasons, I don’t want to work on screencasts anymore.

But I do it anyways. I record yet another screencast. I put my thoughts and understanding into something that will hopefully help other people learn.

It becomes a grind… a never ending effort to push an ever growing ball up a hill.

A Tipping Point?

They say it gets easier after some time – that there will be some magical tipping point where the ball starts to roll downhill.

Maybe that’s true. Maybe there will be some point in the future at which WatchMeCode will mostly take care of itself and I won’t have to worry about it.

But not everything will have that tipping point.

Just Showing Up

Sometimes the work we do is never ending. Sometimes the best we can do is just show up to work and slog through it. That can be a victory in itself.

There isn’t always a light at the end of the tunnel. We hope there is, or we eventually figure out that there isn’t and we move on to another tunnel.

For me, I’m going through yet another round of wondering if another tunnel is the better option. But then I think about how little effort WatchMeCode takes, in comparison to other things I could be doing. I compare the income from screencasts vs the income from the other products and services I have, and there’s no way I can leave it – not yet, at least.

In To The Future

There’s no way I can know what the future will bring for me. I don’t have any plans on leaving my screencasting efforts behind. I need the income, and the work is relatively easy compared to other sources of income that I have.

I may look back on this time, in a month or two, and wonder what I was complaining about. I probably will do that. It’s just another cycle of me doubting what I’m doing and coming to terms with the endless grind that is screencasting.

I Do What I Have To Do

I really don’t want to work on screencasts right now. But I need to. I have to keep the content flowing so my income continues. So, I clock in. I prepare, I record and I release another episode on the weekly schedule that I set for myself. Sometimes it seems worth it. Sometimes I get tired of it, like today.

I grind the wheel of another day, hoping I can build up enough of a buffer to give myself a few weeks off. I’d like to find that tipping point, but I don’t see it yet. So I grind on.

Day, in. Day out. Week after week. Month after month. Cause sometimes, that’s what you have to do.

– Derick

Categories: Blogs

Is Work-Life Balance for Women in Management Consulting a Farce?

Scrum 4 You - Wed, 04/15/2015 - 07:53

It is generally said that working at a top consulting firm or law firm is a grueling job for anyone who wants to play at the top. This view is diligently fed: the legal drama “Suits” makes it more than clear how hard young lawyers have to work. In short: careers are made by those who work excessively. 16-hour days, sleeping at the office and a life permanently on call are part of the deal. And yes, that is the reality in some law firms and consultancies. In my own consulting company, too, projects involve many intense stretches of work. Often our minds are occupied with work in some form for more than 12 hours a day.

Not everyone wants to take on this workload, and that is perfectly understandable. One of my instructors in the “Leading Professional Service Firms” program at Harvard Business School told us that she had said goodbye to her great job at McKinsey because she also wanted a family life. She wants to work with friends, that is, with people about whom she knows more than their name and professional specialty. She gave up her McKinsey career for an assistant professorship at Harvard Business School, and she was clear in her statement: the job at McKinsey was great. She had enjoyed it, but she also wanted something else. And unfortunately, at McKinsey that was not possible.

The Complexity of Possibilities

Until a few years ago, this was not really a problem. In the traditional role model of the 1980s and 1990s it was quite simple: the man pursued his career, the woman gave up her profession for the children. Even if they had met at work and both were successful, her career usually ended with the first child. The long-distance, weekend relationship they had lived as consultants turned into a weekend relationship in which she picked him up from the airport in the evening, completely exhausted, and he still had to work even on weekends. If she was also a consultant (which, see above, was not unusual), she stayed with the children and adapted her career, often to the detriment of her total lifetime income.

Yet this pattern is gradually dissolving. In today’s companies, more and more well-educated women with master’s degrees or doctorates are working, and they see no reason at all why they should give up their careers. One of my colleagues says quite openly: “I want to spend time with my partner and have a career.” She can earn money just as well as he can. At the same time, women in particular ask themselves: how can that work? Travel is simply part of the consulting business, and quite apart from the fact that it is fun: worldwide, demand for traveling specialists is growing. The assignments are not becoming fewer, but more numerous.

At Boris Gloger Consulting, we will have to face this challenge of combining career and family even more intensely than many other consultancies. Our consulting and assistant team is 80 percent women, and they hold 80 percent of the leadership positions in our company. They train the new consultants, they are indispensable to our sales, and in the German market they are the top experts for scaling Scrum, Scrum in ERP environments and Scrum in hardware development. Many companies rely on their expertise in “agile management”. I am proud every time I see their outstanding appearances at conferences. Their commitment is unparalleled; in short, we cannot do without them. At the same time, we want to live what we explain to our customers every day. That also means that we ourselves work differently, effectively and in an agile way. We use Scrum to organize ourselves. We strive to do only the things we want to do. It is important to us to act as a team, so we take plenty of time to strengthen team spirit by actually working as a team. Every two months we organize a company retreat that brings us all up to date.


Job Sharing, Kids in the Office: Why Not?

Yet in all of this, the customer remains the focus. Scrum practically forces us to keep refocusing on what benefits the customer. Our customers demand top performance from us, and that often means that even for us, work-life balance is not always balanced in the short term. That is why, as an entrepreneur (and I am one, after all), I asked myself some years ago: how do we preserve the health of our team, and how do family and career fit together? What do we do when our brilliant women want to have children? And by the way, I am very much looking forward to that moment. Will I, as an employer, then have to do without them entirely, or even worse: will they leave altogether because they cannot imagine a family alongside the job of a consultant? Because, to be honest, the old role models still often exist in my colleagues’ heads too. From the company’s perspective, that would be a catastrophe. Not only because it is extremely hard to find replacements and because training them cost a lot of time and money: a company can only become great if the community it builds has a certain continuity. A team needs years to become really good, and the culture of a company does not emerge in two weeks.

So what to do? We have already begun the discussion. One of our assistants recently said, half joking: “We can bring dogs into the office, but probably not the children.” To which I replied: “And why not? What is stopping us from bringing the little ones along?” This time it was I who earned incredulous stares. It takes imagination to invent new ways of working in our profession. And that imagination is needed not only from me as the head of the company, but from all of us.
We could, for example, distribute work completely differently. Who says that the consultant who is a mother cannot share assignments in a pairing with someone else? Perhaps the younger or still child-free consultant works at the client site and has both a coach and someone in the home office who understands what is happening out there. The mother can prepare meetings and workshops in the office or finish the documentation. Naturally, the little ones can also go to kindergarten, and perhaps childcare can be arranged for a day or two so that the mother can travel to a workshop herself again. All of that is possible, if only we want it!

As a young and successful consulting company, we face the question of what “the new way of working” can look like for consultants. For many aspects we have not yet found the answers, but we are on our way.

Categories: Blogs

Spark: Generating CSV files to import into Neo4j

Mark Needham - Wed, 04/15/2015 - 00:56

About a year ago, Ian pointed me at a Chicago Crime data set which seemed like a good fit for Neo4j, and after much procrastination I’ve finally got around to importing it.

The data set covers crimes committed from 2001 until now. It contains around 4 million crimes, plus metadata about those crimes such as the location, type of crime and year, to name a few.

The contents of the file follow this structure:

$ head -n 10 ~/Downloads/Crimes_-_2001_to_present.csv
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,"(41.75017626412204, -87.55494559131228)"
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN,SIDEWALK,false,false,0413,004,8,48,03,1191060,1844959,2014,01/18/2014 12:39:56 AM,41.729576153145636,-87.57568059471686,"(41.729576153145636, -87.57568059471686)"
9460339,HX113740,01/14/2014 04:44:00 AM,040XX W MAYPOLE AVE,1310,CRIMINAL DAMAGE,TO PROPERTY,RESIDENCE,false,true,1114,011,28,26,14,1149075,1901099,2014,01/16/2014 12:40:00 AM,41.884543798701515,-87.72803579358926,"(41.884543798701515, -87.72803579358926)"
9461467,HX114463,01/14/2014 04:43:00 AM,059XX S CICERO AVE,0820,THEFT,$500 AND UNDER,PARKING LOT/GARAGE(NON.RESID.),false,false,0813,008,13,64,06,1145661,1865031,2014,01/16/2014 12:40:00 AM,41.785633535413176,-87.74148516669783,"(41.785633535413176, -87.74148516669783)"
9460355,HX113738,01/14/2014 04:21:00 AM,070XX S PEORIA ST,0820,THEFT,$500 AND UNDER,STREET,true,false,0733,007,17,68,06,1171480,1858195,2014,01/16/2014 12:40:00 AM,41.766348042591375,-87.64702037047671,"(41.766348042591375, -87.64702037047671)"
9461140,HX113909,01/14/2014 03:17:00 AM,016XX W HUBBARD ST,0610,BURGLARY,FORCIBLE ENTRY,COMMERCIAL / BUSINESS OFFICE,false,false,1215,012,27,24,05,1165029,1903111,2014,01/16/2014 12:40:00 AM,41.889741146006095,-87.66939334853973,"(41.889741146006095, -87.66939334853973)"
9460361,HX113731,01/14/2014 03:12:00 AM,022XX S WENTWORTH AVE,0820,THEFT,$500 AND UNDER,CTA TRAIN,false,false,0914,009,25,34,06,1175363,1889525,2014,01/20/2014 12:40:05 AM,41.85223460427207,-87.63185047834335,"(41.85223460427207, -87.63185047834335)"
9461691,HX114506,01/14/2014 03:00:00 AM,087XX S COLFAX AVE,0650,BURGLARY,HOME INVASION,RESIDENCE,false,false,0423,004,7,46,05,1195052,1847362,2014,01/17/2014 12:40:17 AM,41.73607283858007,-87.56097809501115,"(41.73607283858007, -87.56097809501115)"
9461792,HX114824,01/14/2014 03:00:00 AM,012XX S CALIFORNIA BLVD,0810,THEFT,OVER $500,STREET,false,false,1023,010,28,29,06,1157929,1894034,2014,01/17/2014 12:40:17 AM,41.86498077118534,-87.69571529596696,"(41.86498077118534, -87.69571529596696)"

Since I wanted to import this into Neo4j, I needed to massage the data: the neo4j-import tool expects to receive CSV files containing the nodes and relationships we want to create.
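In plain Python (outside Spark) the same re-shaping looks like this – a minimal sketch using a two-row, seven-column sample of the data:

```python
import csv
import io

# Truncated sample of the raw export (first seven columns only)
raw = """ID,Case Number,Date,Block,IUCR,Primary Type,Description
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN
"""

reader = csv.reader(io.StringIO(raw))
next(reader)  # skip the source header

# Re-shape each raw row into the node row neo4j-import expects:
# id:ID(Crime),:LABEL,date,description
node_rows = [[cols[0], "Crime", cols[2], cols[6]] for cols in reader]
print(node_rows[0])  # → ['9464711', 'Crime', '01/14/2014 05:00:00 AM', 'SIMPLE']
```

The Spark job below does the same thing, just distributed across partitions of the 4-million-row file.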


I’d been looking at Spark towards the end of last year and the pre-processing of the big initial file into smaller CSV files containing nodes and relationships seemed like a good fit.

I therefore needed to create a Spark job to do this. We’ll then pass this job to a Spark executor running locally and it will spit out CSV files.


We start by creating a Scala object with a main method that will contain our processing code. Inside that main method we’ll instantiate a Spark context:

import org.apache.spark.{SparkConf, SparkContext}
 
object GenerateCSVFiles {  
    def main(args: Array[String]) {    
        val conf = new SparkConf().setAppName("Chicago Crime Dataset")    
        val sc = new SparkContext(conf)  
    }
}

Easy enough. Next we’ll read in the CSV file. I found the easiest way to reference this was with an environment variable but perhaps there’s a more idiomatic way:

import java.io.File
import org.apache.spark.{SparkConf, SparkContext}
 
object GenerateCSVFiles {
  def main(args: Array[String]) {
    var crimeFile = System.getenv("CSV_FILE")
 
    if(crimeFile == null || !new File(crimeFile).exists()) {
      throw new RuntimeException("Cannot find CSV file [" + crimeFile + "]")
    }
 
    println("Using %s".format(crimeFile))
 
    val conf = new SparkConf().setAppName("Chicago Crime Dataset")
 
    val sc = new SparkContext(conf)
    val crimeData = sc.textFile(crimeFile).cache()
}

The type of crimeData is RDD[String] – Spark’s way of representing the (lazily evaluated) lines of the CSV file. This also includes the header of the file so let’s write a function to get rid of that since we’ll be generating our own headers for the different files:

import org.apache.spark.rdd.RDD
 
// http://mail-archives.apache.org/mod_mbox/spark-user/201404.mbox/%3CCAEYYnxYuEaie518ODdn-fR7VvD39d71=CgB_Dxw_4COVXgmYYQ@mail.gmail.com%3E
def dropHeader(data: RDD[String]): RDD[String] = {
  data.mapPartitionsWithIndex((idx, lines) =>
    // only the first partition contains the header line
    if (idx == 0) lines.drop(1) else lines
  )
}
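The same idea in plain Python, treating the RDD as a list of partitions – only the first partition's first line (the header) is dropped:

```python
def drop_header(partitions):
    """Mimic mapPartitionsWithIndex: the header lives in partition 0 only."""
    for idx, lines in enumerate(partitions):
        yield lines[1:] if idx == 0 else lines

# Two "partitions" of raw CSV lines; the header sits at the start of the first.
parts = [["ID,Case Number", "9464711,HX114160"], ["9460704,HX113741"]]
print(list(drop_header(parts)))  # → [['9464711,HX114160'], ['9460704,HX113741']]
```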

Now we’re ready to start generating our new CSV files so we’ll write a function which parses each line and extracts the appropriate columns. I’m using Open CSV for this:

import java.io.File
import au.com.bytecode.opencsv.CSVParser
import org.apache.hadoop.fs.FileUtil
 
def generateFile(file: String, withoutHeader: RDD[String], fn: Array[String] => Array[String], header: String, distinct: Boolean = true, separator: String = ",") = {
  FileUtil.fullyDelete(new File(file))
 
  val tmpFile = "/tmp/" + System.currentTimeMillis() + "-" + file
  val rows: RDD[String] = withoutHeader.mapPartitions(lines => {
    // CSVParser instances aren't serialisable, so create one per partition
    val parser = new CSVParser(',')
    lines.map(line => {
      val columns = parser.parseLine(line)
      fn(columns).mkString(separator)
    })
  })
 
  if (distinct) rows.distinct().saveAsTextFile(tmpFile) else rows.saveAsTextFile(tmpFile)
}

We then call this function like this:

generateFile("/tmp/crimes.csv", withoutHeader, columns => Array(columns(0),"Crime", columns(2), columns(6)), "id:ID(Crime),:LABEL,date,description", false)

The output into ‘tmpFile’ is actually 32 ‘part files’ but I wanted to be able to merge those together into individual CSV files that were easier to work with.

I won’t paste the full job here but if you want to take a look it’s on github.
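Merging the part files into a single CSV with a header line prepended can be sketched in plain Python (the paths and header here are illustrative, not taken from the actual job):

```python
from pathlib import Path

def merge_part_files(tmp_dir: str, out_file: str, header: str) -> None:
    """Concatenate Spark 'part-*' files into one CSV, prepending a header line."""
    parts = sorted(Path(tmp_dir).glob("part-*"))
    with open(out_file, "w") as out:
        out.write(header + "\n")
        for part in parts:
            out.write(part.read_text())
```

Called as, for example, `merge_part_files(tmpFile, "/tmp/crimes.csv", "id:ID(Crime),:LABEL,date,description")`.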

Now we need to submit the job to Spark. I’ve wrapped this in a script if you want to follow along but these are the contents:

./spark-1.1.0-bin-hadoop1/bin/spark-submit \
--driver-memory 5g \
--class GenerateCSVFiles \
--master local[8] \
target/scala-2.10/playground_2.10-1.0.jar \
$@

If we execute that we’ll see the following output:

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Crimes_-_2001_to_present.csv
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/04/15 00:31:44 INFO SparkContext: Running Spark version 1.3.0
...
15/04/15 00:47:26 INFO TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool
15/04/15 00:47:26 INFO DAGScheduler: Stage 8 (saveAsTextFile at GenerateCSVFiles.scala:51) finished in 2.702 s
15/04/15 00:47:26 INFO DAGScheduler: Job 4 finished: saveAsTextFile at GenerateCSVFiles.scala:51, took 8.715588 s
 
real	0m44.935s
user	4m2.259s
sys	0m14.159s

and these CSV files will be generated:

$ ls -alh /tmp/*.csv
-rwxrwxrwx  1 markneedham  wheel   3.0K 14 Apr 07:37 /tmp/beats.csv
-rwxrwxrwx  1 markneedham  wheel   217M 14 Apr 07:37 /tmp/crimes.csv
-rwxrwxrwx  1 markneedham  wheel    84M 14 Apr 07:37 /tmp/crimesBeats.csv
-rwxrwxrwx  1 markneedham  wheel   120M 14 Apr 07:37 /tmp/crimesPrimaryTypes.csv
-rwxrwxrwx  1 markneedham  wheel   912B 14 Apr 07:37 /tmp/primaryTypes.csv

Let’s have a quick look at what they contain:

$ head -n 10 /tmp/beats.csv
id:ID(Beat),:LABEL
1135,Beat
1421,Beat
2312,Beat
1113,Beat
1014,Beat
2411,Beat
1333,Beat
2521,Beat
1652,Beat
$ head -n 10 /tmp/crimes.csv
id:ID(Crime),:LABEL,date,description
9464711,Crime,01/14/2014 05:00:00 AM,SIMPLE
9460704,Crime,01/14/2014 04:55:00 AM,ARMED: HANDGUN
9460339,Crime,01/14/2014 04:44:00 AM,TO PROPERTY
9461467,Crime,01/14/2014 04:43:00 AM,$500 AND UNDER
9460355,Crime,01/14/2014 04:21:00 AM,$500 AND UNDER
9461140,Crime,01/14/2014 03:17:00 AM,FORCIBLE ENTRY
9460361,Crime,01/14/2014 03:12:00 AM,$500 AND UNDER
9461691,Crime,01/14/2014 03:00:00 AM,HOME INVASION
9461792,Crime,01/14/2014 03:00:00 AM,OVER $500
$ head -n 10 /tmp/crimesBeats.csv
:START_ID(Crime),:END_ID(Beat),:TYPE
5896915,0733,ON_BEAT
9208776,2232,ON_BEAT
8237555,0111,ON_BEAT
6464775,0322,ON_BEAT
6468868,0411,ON_BEAT
4189649,0524,ON_BEAT
7620897,0421,ON_BEAT
7720402,0321,ON_BEAT
5053025,1115,ON_BEAT

Looking good. Let’s get them imported into Neo4j:

$ ./neo4j-community-2.2.0/bin/neo4j-import --into /tmp/my-neo --nodes /tmp/crimes.csv --nodes /tmp/beats.csv --nodes /tmp/primaryTypes.csv --relationships /tmp/crimesBeats.csv --relationships /tmp/crimesPrimaryTypes.csv
Nodes
[*>:45.76 MB/s----------------------------------|PROPERTIES(2)=============|NODE:3|v:118.05 MB/]  4M
Done in 5s 605ms
Prepare node index
[*RESOLVE:64.85 MB-----------------------------------------------------------------------------]  4M
Done in 4s 930ms
Calculate dense nodes
[>:42.33 MB/s-------------------|*PREPARE(7)===================================|CALCULATOR-----]  8M
Done in 5s 417ms
Relationships
[>:42.33 MB/s-------------|*PREPARE(7)==========================|RELATIONSHIP------------|v:44.]  8M
Done in 6s 62ms
Node --> Relationship
[*>:??-----------------------------------------------------------------------------------------]  4M
Done in 324ms
Relationship --> Relationship
[*LINK-----------------------------------------------------------------------------------------]  8M
Done in 1s 984ms
Node counts
[*>:??-----------------------------------------------------------------------------------------]  4M
Done in 360ms
Relationship counts
[*>:??-----------------------------------------------------------------------------------------]  8M
Done in 653ms
 
IMPORT DONE in 26s 517ms

Next I updated conf/neo4j-server.properties to point to my new database:

#***************************************************************
# Server configuration
#***************************************************************
 
# location of the database directory
#org.neo4j.server.database.location=data/graph.db
org.neo4j.server.database.location=/tmp/my-neo

Now I can start up Neo and start exploring the data:

$ ./neo4j-community-2.2.0/bin/neo4j start
MATCH (:Crime)-[r:CRIME_TYPE]->() 
RETURN r 
LIMIT 10

There are lots more relationships and entities that we could pull out of this data set – what I’ve done is just a start. So if you’re up for some more Chicago crime exploration, the code and instructions explaining how to run it are on github.

Categories: Blogs

Lean Metrics: Measure Predictability with Facts over Estimates

A predictable outcome is one of the most sought-after goals in any business or initiative. It’s easy to see why. We often correlate predictability with attractive benefits like lower risk, higher business value, and maybe even less stress. So with every new project, we dutifully gather time, effort, and resource estimates from all involved — […]

The post Lean Metrics: Measure Predictability with Facts over Estimates appeared first on Blog | LeanKit.

Categories: Companies

Product Backlog is DEEP; INVEST Wisely and DIVE Carefully

Agile Management Blog - VersionOne - Tue, 04/14/2015 - 14:30

A product backlog stores, organizes and manages all work items that you plan to work on in the future. During agile training, consulting and coaching engagements at VersionOne, our clients often ask how to logically structure, organize and manage their product backlog. Clients also want to know how to prioritize or rank work items.

Here is a simple and easy-to-remember phrase that captures the key characteristics of a well-managed product backlog: Product backlog is DEEP; INVEST wisely and DIVE carefully … otherwise, by implication, you may sink (just kidding, but only slightly).

The key characteristics of a well-organized and well-managed product backlog are summarized in the image below. DEEP, INVEST and DIVE are meaningful words that also serve as useful acronyms to help us remember those characteristics. In this blog, I will explain how to manage a DEEP product backlog well by INVESTing wisely and DIV[E]ing carefully.

Figure1

Figure 1: Logical Structure and Key Characteristics of a
Well-Managed Product Backlog

The granularity or size of work items should be determined by how far into the future you are planning the product, i.e., the planning horizon. The longer the planning horizon, the larger the work items (and vice versa). This makes sense, as it takes much more effort to develop, specify and maintain a large number of small-grain work items than a small number of large-grain work items. Smaller work items (stories) are typically developed by breaking down larger work items (epics). Stories are the unit of software design, development and value delivery.

DEEP product backlog

A product backlog may hold several hundred or more work items – it is, literally, deep. Work items can be stories, defects and test sets. DEEP is also an apt acronym capturing the essence of the logical structure of a product backlog:

  • Detailed appropriately: Workitems in the backlog are specified at an appropriate level of detail, as summarized in Figure 1 and explained below.
  • Estimated appropriately: Workitems in the product backlog are estimated appropriately, as explained below.
  • Emergent: The product backlog is not frozen or static; it evolves on an ongoing basis in response to product feedback and to changes in competitive, market and business conditions. New backlog items are added; existing items are groomed (revised, refined, elaborated), deleted or re-prioritized.
  • Prioritized as needed: Workitems in the backlog are linearly rank-ordered as needed, as explained below.

Sprint planning horizon, workitem granularity, estimation and rank order

If the planning horizon is the next (upcoming) sprint or iteration (typically 2 to 4 weeks), each workitem is small enough to fit in a single sprint and is 100% ready (“ready-ready”) to be worked on, as indicated in the top red region of Figure 1.  A ready-ready story has already been analyzed, with a clear definition (User Role, Functionality and Business Value) and associated Acceptance Criteria.  Workitems planned for the next sprint are stories, defects and test sets, and they have the highest rank order compared to workitems in later sprints or later release cycles.  I will soon explain how this rank ordering is done.  The rank order determines the order in which the team will undertake work on workitems in the sprint backlog, and also which incomplete workitems to push out to the release or product backlog at the end of the sprint time-box.

Workitems in the next sprint should collectively satisfy the well-known INVEST criteria – a meaningful English word as well as an acronym coined by Bill Wake (see his blog Invest in Stories and Smart Tasks).  Its letters represent important characteristics of workitems in the next sprint backlog.  Stories in the next sprint backlog should be:

  • Independent of each other: At the specification level, stories are independent; they offer distinctly different functionality and don’t overlap. At the implementation level, stories should also be as independent of each other as possible, although some implementation-level dependencies may be unavoidable.
  • Negotiable: Stories in the next sprint are always subject to negotiation and clarification between the product owner (business proxy) and the members of the agile development team.
  • Valuable: Each story in the next sprint offers clear value or benefit to external users or customers (outside the development team), to the team itself, or to a stakeholder. For most products and projects, most stories offer value to external users or customers.
  • Estimable: From the specification of the story itself, an agile team should be able to estimate the effort needed to implement it – in relative size terms (story points) and, optionally, in ideal time units (such as ideal staff-hours or staff-days for the whole team).
  • Sized appropriately: A simpler interpretation of this criterion is that each story is Small enough to be completed and delivered in a single sprint. Specifically, each story should take no more than N/4 staff-weeks of team effort for an N-week sprint (see “Scaling Lean and Agile Development” by Larman & Vodde, 2009, page 120). Thus, for a 2-week sprint each story should take no more than 2/4 staff-week = 0.5 staff-week = 20 staff-hours of effort, and for a 4-week sprint no more than 4/4 staff-week = 1 staff-week = 40 staff-hours. A story substantially larger than this should be treated as an epic and broken down into smaller stories. If a sprint backlog mixes small, medium and large stories whose average size far exceeds N/4 staff-weeks, the average cycle time across all stories will increase, dramatically reducing team velocity.
  • Testable: Each story specification is clear enough that all test cases can be developed from its acceptance criteria (which are part of the specification).

Stories may be broken down into implementation tasks, such as Analysis, Design, Code Development, Unit Testing, Test Case Development, On-line Help, etc.  These tasks need to be SMART:

  • S: Specific
  • M: Measurable
  • A: Achievable
  • R: Relevant
  • T: Time-boxed (typically small enough to complete in a single day)

If a story should take no more than N/4 staff-weeks of team effort (e.g., 20 staff-hours for a 2-week sprint), all of the SMART tasks in a story should add up to no more than that amount.  If you have 5 tasks, each task on average should take 4 hours of ideal-time effort or less.  Stories and their SMART tasks for the next sprint are worth INVESTing in, as the return on that INVESTment is high: they are scheduled to be worked on and delivered as working software in the next sprint itself.
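The sizing arithmetic above can be captured in a few lines of Python (the 40-hour staff-week is the assumption implied by the article's numbers):

```python
def max_story_effort_hours(sprint_weeks: int, hours_per_staff_week: int = 40) -> float:
    """Upper bound on a single story's team effort: N/4 staff-weeks for an N-week sprint."""
    return sprint_weeks / 4 * hours_per_staff_week

def max_avg_task_hours(sprint_weeks: int, tasks_per_story: int) -> float:
    """Average effort budget per SMART task if the story is split evenly."""
    return max_story_effort_hours(sprint_weeks) / tasks_per_story

print(max_story_effort_hours(2))   # → 20.0 staff-hours for a 2-week sprint
print(max_story_effort_hours(4))   # → 40.0 staff-hours for a 4-week sprint
print(max_avg_task_hours(2, 5))    # → 4.0 staff-hours per task
```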

Release planning horizon, workitem granularity, estimation and rank order

If the planning horizon is an upcoming release cycle (typically 8 to 26 weeks, or 2 to 6 months, consisting of several sprints), workitems are “medium-grain,” as shown in the middle yellow region of Figure 1.  Many of these workitems are epics; however, they should still be small enough to fit in a release cycle, completed over two or more sprints.  These epics are typically called features or feature-epics.  They should still be specified with the User Role, Action, Value and Acceptance Criteria formalism often used for stories, but now capturing the larger functionality a feature-epic represents.  Feature-epics are divided into stories – small enough to fit in a sprint – before the sprint in which each story will be implemented.

INVESTing in stories for an entire release cycle has poor returns: it takes a lot of effort to ensure the INVEST criteria are satisfied correctly for a large number of stories, and those stories are much more likely to change over a release cycle spanning several sprints after they have been specified.

Feature-epics in a release cycle can and should be estimated in relative size terms, but without expending the effort needed to break down all feature-epics in a release cycle into individual stories.   This epic-level estimation can be done by comparing relative sizes of epics.  I have presented a detailed approach for doing so in Part 5 of my 5-part blog series on Scalable Agile Estimation: Normalization of Story Points.  This method ensures that all epics and stories are estimated in a common currency of “normalized story point” which represents the same scale for an entire organization across all projects, sprints, release cycles, and teams.  There is no need to estimate epics in “swags” or “bigness numbers” which are entirely unrelated to story points.

It still makes sense to rank order feature-epics in a release cycle to decide which ones will be scheduled in Sprint 1, 2, 3, and so on.  However, this assignment may change as each sprint is completed and more information and learning emerge.

Product planning horizon, workitem granularity, estimation and rank order

If the product planning horizon is over multiple release cycles (typically 6 to 24 months) going beyond the current release cycle, workitems are “coarse-grain” as shown in the bottom gray color region of Figure 1.  These large epics or super epics require two or more release cycles to complete.  These super epics may be described in plain English (bulleted text) or with screen mock-up or video or prototype or with any form of expression suitable to express the intent and value of super epics.  These super epics are divided into feature-epics – small enough to fit in a single release cycle – before the release cycle in which that feature-epic will be implemented.

Over a horizon of multiple release cycles, INVESTing in stories has even poorer returns than for a single release cycle: stories are very likely to change over the much longer duration of multiple release cycles, so this kind of INVESTment will not yield the expected results.

Large epics or super epics that need multiple release cycles to be implemented can and should be estimated in relative size terms, but without expending the effort needed to break down large epics into feature-epics, and breaking those, in turn, into stories.   This estimation can be done by comparing relative sizes of large epics.  I have presented a detailed approach for doing so in the same Part 5 of my 5-part blog series on Scalable Agile Estimation: Normalization of Story Points, as mentioned above.

It may not make much sense to rank order large epics over a product planning horizon of multiple release cycles, as the assignment will very likely change over such a long horizon; besides, it does not matter whether a large epic that is six to 24 months out is rank-ordered 125th or 126th.  That level of rank order precision is not required.

I use the strategy of INVESTing in stories and SMART tasks only for the next sprint backlog, but not doing so at the release or product backlog levels. INVEST just-in-time in the next sprint as you plan it. INVESTing in stories and tasks over a longer time horizon will yield poor returns.

DIVE the product backlog carefully

There is rarely enough time or resources to do everything.  Therefore, agile teams must prioritize (rank-order, to be more precise) which stories to focus on, and which lowest-ranked stories can be pushed out of scope near the end of a sprint.  For agile development projects, you should linearly rank-order the backlog rather than do coarse-grain prioritization, where stories and epics are lumped into a small number of priority buckets such as Low, Medium, High and Critical.  Linear rank ordering (i.e., 1, 2, 3, 4 … n) avoids priority inflation, keeps everyone honest, and forces decisions on what is really important.  It discourages the “kid-in-a-candy-shop” behavior in which the business side clamors that everything is high priority or of equal importance.

Note that epics and stories are conceptually different, and should not be mixed or aggregated while developing a rank order.  An epic rank order is separate from a story rank order.

The responsibility for agile rank ordering is shared among all members of a team; however, the rank ordering effort is led by the product owner.  Like DEEP, INVEST and SMART, DIVE is both a meaningful English word and an acronym.  Product backlog items should be linearly ordered based on the DIVE criteria, which require careful consideration of all four factors captured in the acronym:

  • Dependencies: Even after minimizing the dependencies among stories or epics (always a good thing to do), a few unavoidable dependencies may remain, and they affect rank ordering.  If workitem A depends on B, B needs to be rank-ordered higher than A.
  • Insure against Risks: Business as well as technical risks
  • Business Value
  • Estimated Effort
In my blog post on Agile Prioritization: A Comprehensive and Customizable, Yet Simple and Practical Method, I have presented a simple but fairly comprehensive method for linearly rank ordering a product backlog (both stories as well as epics).  The blog explains how to model and quantify value, risk and effort for the purpose of rank ordering workitems in a backlog.   I will not repeat those details here. The method is extensible, customizable and very practical.   The Agile Prioritizer template makes the rank ordering effort easy.

Table 1 summarizes how to manage a DEEP product backlog with wise INVESTing and careful DIV(E)ing.

Table 1: Summary for managing a DEEP Product Backlogs with
wise INVESTing and careful DIV[E]Ing

Table1

I hope you find the statement “Product backlog is DEEP; INVEST wisely and DIVE carefully” a useful mnemonic for the key characteristics of a well-managed product backlog.  I would love to hear feedback from you on this blog here, by email (Satish.Thatte@VersionOne.com), or on Twitter @smthatte.

Categories: Companies
