
Feed aggregator

XP2015, Helsinki, Finland, May 25-29 2015

Scrum Expert - Tue, 04/28/2015 - 14:02
XP2015 is a four-day conference on extreme programming and Agile software development that takes place in Helsinki, Finland. This is the event where Agile and Lean practitioners and researchers should meet. In the agenda of the XP2015 conference you can find topics like “Continuous Delivery with Docker and Jenkins Job Builder”, “Collaborative Exploratory and Unit Testing”, “Fearless Change: Patterns for Introducing New Ideas”, “Agile methods applied to development and certification of safety-critical software (ASCS)”, “Keynote: New Directions for Software Development Process”, “Fun Retrospectives: Activities and ideas for making agile retrospectives more ...
Categories: Communities

does it hurt when you stop?

Derick Bailey - new ThoughtStream - Tue, 04/28/2015 - 12:00

Hold your cell phone up to your face, like you’re talking to someone… keep it there for 20 or 30 minutes, constantly holding your arm tight as you do. When you finally decide to put the phone down, your arm is going to be tired and sore. It’s going to hurt just to put the phone down.

The same thing is happening with my eyes and my new glasses. I’ve had these glasses for about 2 weeks now, and it hurts my eyes to use them. But I keep wearing my new glasses, forcing myself to adjust to them. I do this for the same reason that I put down the phone after an extended conversation: in spite of how much it hurts my arm to put it down, it is better to deal with that pain now than to live with the constant, enduring pain of always holding my arm tightly bent.

Living With Pain

There are some schools of thought that say pain never really goes away. Instead, we learn to live with it. The pain that we experience becomes the new normal, a baseline of where we are at any given time.

I think this is true in some cases – like the way I used to focus my eyes, prior to having these new glasses.

I don’t think this school of thought is true in all cases, though. Putting down the phone allows my arm to relax and heal itself, for example. Once my arm has rebuilt its supply of energy and released all the things that had built up while holding the phone, the pain does go away.

Crash

Sometimes we don’t realize we’re living with pain, because it has become the new normal. It’s surprising how quickly things become “normal”, as well.

If the pain comes on slowly, it can be like boiling a frog. We don’t realize we’re being cooked until it’s too late. There may be a sharp end or a crash coming up, but we don’t know it because we don’t recognize the pain.

I go through cycles on a fairly regular basis, with various points of stress in my life. When I work extra hours toward a project deadline, for example, I carry a heavy burden of stress. It is physically and mentally taxing. I quickly become accustomed to it, though, and it becomes my new normal. I forget that the pain, the stress, and the issues that result from them are there.

At the project end, once the deadline is passed, or when the situation finally resolves itself, I crash. Hard. My body shuts down. My brain won’t think. I often get sick with flu-like symptoms, general fatigue and aches, and have to spend a week or more in recovery mode.

Recovery

When I’m in recovery mode, I’ll stay home from work. I’ll watch movies and take naps during the day. I’ll eat extra food and let my work go unattended. I won’t answer emails often enough. I won’t attend meetings. I’ll call in sick, essentially. Because if I don’t, I put myself at even greater risk of an even larger and more dangerous crash.

Much like the time someone spends in a hospital after surgery or another medical procedure, the time I spend in recovery is important. It is time that my body, my mind, my motivation, and my desire to continue working all need.

What Doesn’t Kill You, Only … ???

This isn’t good for me. Not by any means. These cycles of stress – mentally, physically and otherwise – take a toll on me. Every cycle I go through leaves me with less than I had before. Less time, less motivation, less ability to care and do my job.

I used to tell myself that it was making me stronger. Maybe it was when I was a kid – late teens, early twenties. But the reality of what this is doing to me is sinking in, slowly but surely, year after year. And it’s not good.

No one should have to go through cycles of pain like this; building up, crashing, recovering and starting over again. It is a terrible way to live.

Feeling Pain

If I hold the phone to my face for too long, my arm hurts. I know the stress I am putting my arm through. When I am finally done holding the phone, my arm hurts more. It has to recover. This is pain that should be prevented: I should switch hands to hold the phone, use a hands-free headset, or find another alternative.

I’ve spent the last however-many years straining my eyes to see. My eyes became accustomed to the stress, the pain, and the difficulty of focusing. Now that I have my glasses, I have to accept the pain of allowing my eyes to relax. It’s like I’ve held my phone to my face for 20 or 30 years, and now I have to put it down. But I know that if I allow my eyes to recover through the use of these glasses, my eyes will experience less strain and pain overall.

Recognize The Difference

Learning to live with pain is something that we have to deal with far too often. The death of a loved one, a friend moving away, relationships that end – there are some pains that will never go away, or take a tremendously long time to go away.

But that doesn’t mean we should learn to always live with every pain. There are some pains that can and should go away. We need to learn to recognize the difference in the pains we are feeling.

– Derick

Categories: Blogs

Software Development is Not a Form of Construction

Notes from a Tool User - Mark Levison - Tue, 04/28/2015 - 09:18

Original graphic design by Freepik.

For years the software industry has used construction as its defining metaphor. The comparison is applied throughout the language of software: architecture, foundations, constructor, projects, building code. The language is so pervasive that it affects our thinking around software development, but unfortunately the metaphor is fundamentally broken, and its flaws have led us down a number of bad paths.

In construction, a lot of emphasis is placed on predictability, getting the requirements correct up front, and cost reduction. These are all signs of a mature industry. We run into problems when we try to apply the same emphasis to software.

Rules of Thumb, Construction Codes, and Materials

Modern construction can trace its roots back hundreds or thousands of years, depending on where you put the starting line. As a result of all this history, a great deal of expertise is codified in rules of thumb:

–       In most areas, construction costs per square foot are a well-known constant. For instance, we recently did some home renovations and were warned by friends in the industry that a typical renovation in Ottawa costs $35-50/sq ft. They were bang on.

–       A good estimate for the depth of a concrete floor slab is 1/180 of its unsupported perimeter.
(The latter is from Designing With Your Thumb by Thomas Michael Wallace.)

Software, on the other hand, is at best 70 years old. Its rules of thumb don’t have the same solid history to warrant unwavering application.

Eventually rules of thumb are codified and fixed as building codes. When constructing houses, building codes determine everything from how far apart the studs in the wall are, to the amount of insulation in the walls and roof. These codes mean that all houses meet a minimum standard and greatly increase the predictability of cost.

It is possible to have these construction codes because there are limited sets of building materials (wood, steel, etc) and tools (hammer, saw, etc). The properties of the materials and their failure modes are predictable, and the toolset that is used to work with the materials is small and well understood. Sure, materials and tools continue to evolve in the construction industry, but at nowhere near the same rate as evolution in software.

It’s much harder to keep up with the list of new materials and tools in software. Programming languages, libraries, and supporting tools appear and evolve every year. And even if we stick to our existing languages and libraries, it may take years to explore all of their details and nuances to the extent that standardized codes would require.

It’s the well-understood, stable materials and tools that make building construction codes possible. The instability of the software world guarantees that we will never have construction codes in our field.

There are no useful Rules of Thumb or Construction Codes in the software industry.

Physical Constraints and Stable Requirements

Buildings, bridges, and other construction works are governed by well-known physical limits. These limits dictate the size, shape, and use of a structure depending on the materials used. For example, wood-framed buildings are limited in height to four to six stories. Bridge spans are limited in length by the materials used and how the properties of those materials relate to the physics involved.

The construction of buildings and bridges represents a problem domain that has been studied and tested for generations. As a result, the questions that have to be asked of the client are predictable and the range of possible answers is constrained.

Construction design has to fit into the constraints of site and function. As much fun as it might be to imagine an office building that spins around a single point like a gyroscope, it’s physically impossible and wouldn’t meet the functional need. When building bridges or roads, there are clear standards for each jurisdiction based on the type and size of vehicle that you need to support.

Software isn’t subject to these same constraints. If the customer really wants the equivalent of a gyroscope, we can probably deliver it. The types of users – and uses – that we need to support are far more widely varied than those in construction.

Once a building has been started and the foundation has been poured, you can’t easily change its size or location on the site. Once the internal structure of a building has been started, you can’t just decide to add a new elevator shaft or a new wing. When the footings of a bridge are in place, it can’t be moved 20m because the customer decides the bridge was in the wrong place. (Okay, it can, but effectively that requires you to throw away all existing work and start again from scratch.)

With software we can make almost any change we want, from the simple to the complex: increase the number of supported users from 100 to 1,000; change the product direction (Yelp started life as a tool for sending friends recommendations for restaurants, doctors, etc., and took on life as a review site only when the original functionality flopped); change the programming language (I’ve worked on projects that moved from Java to .NET and back to Java) – all for much less cost than starting again from scratch.

Because we have much greater flexibility in software, we are also able to accept changing requirements throughout the development process. Requirements that are discovered early in the development process often change a number of times before they’re finally implemented.

In the world of construction, the architect can hand the builders a set of blueprints with fair confidence that the builder will interpret them correctly. While there will still be dialogue and a need for changes, the degree of change is nothing like the world of software. In software we have no effective way (even UML) to hand a blueprint to the developers and walk away. Instead of a blueprint, we have a series of ongoing conversations between the customer and the people building the software.

Software is open to far greater change than construction.

People

In construction, tradespeople are generally considered interchangeable and replaceable. It’s assumed that if you change carpenters while framing a house, the results of their work will generally be the same.

In the game of software this is clearly not true. Because of the complexity and variance in both the tools (programming languages and libraries) and the problem domain, developers, business analysts, testers, and UX designers can’t just be moved from one area to another.

People who see a relationship between software and construction often assume that people are replaceable and interchangeable. That is far from the truth. All substantial pieces of software are built by teams of people, so if you interchange or replace one team member with another, it costs a team in three major ways:

–       They lose the tacit knowledge that their former team member had.

–       They have to train the new team member on what they’re building and what they have built so far.

–       They have to spend time establishing an effective working relationship with the new person.

As a result, replacing or adding a new person slows the whole team down for at least 3-4 months. Individually, the new team member often takes even longer than that to become fully productive. While construction also suffers slowdowns when people are changed, they are nowhere near the same degree as on a software project.

Over 40 years after it was first written, the old Fred Brooks observation still applies: “Adding manpower to a late software project makes it later.”

Conclusion

The construction metaphor that is often used to describe software is wrong. Sadly, because of its implications, we put a lot of emphasis in the wrong places:

·      Getting the requirements right upfront instead of accepting that change is the norm

·      Emphasizing the importance of architecture and architects instead of accepting that software is adaptable and can be changed by anyone on the team

·      Assuming people are replaceable and that problems related to time can be solved by adding more people instead of accepting that people are unique

·      Seeking predictability instead of accepting that our domain isn’t well understood

Software is in no way related to construction. We’re not building, we’re exploring.

We’re exploring the problem space of our customers. We’re creating new ideas that happen to be expressed in code. So let’s leave the old construction metaphors behind, because they’re crumbling the foundation of the roads we’re travelling together.

I’m not the first to explore this vein. Other views:

Martin Fowler: The New Methodology
StackOverflow: What’s wrong with the software construction analogy
Thomas Guest: Why Software Development isn’t Like Construction
Mishkin Berteig: The software construction analogy is broken

 

Categories: Blogs

How ScrumMasters Recognize Their Effectiveness

Scrum 4 You - Tue, 04/28/2015 - 08:13

Every ScrumMaster eventually asks themselves these questions:

  • How effective am I as a ScrumMaster?
  • What value do I add for the company?
  • When is my role as ScrumMaster on the team fulfilled?
  • How do I know that we, as ScrumMasters, are moving our company forward?
  • When are we effective as a ScrumMaster team?
  • How do I know that my ScrumMasters are doing a good job?

Is there even an answer to these questions that rests on measurable criteria? Let’s look at them from different angles. First, let’s put on the glasses of the individual ScrumMaster working in a development team. There are only a few absolutely unambiguous and measurable results that point to a productive team, and thus to effective work by the ScrumMaster:

The Scrum team delivers!
The team continuously produces product increments in short iterations and ships them independently. This rhythm shows in regular releases at short intervals. Customers often receive new functionality that they can start using after just two weeks.

Delivery happens at a high level of quality
The entire team is responsible for high, consistent quality. High customer satisfaction points to good collaboration and high-quality product development. Another sure sign of good work by the development team is when the customer reports only a few defects – overall, that too is a credit to the ScrumMaster’s effective work.

Beyond these two direct signs of good ScrumMaster work, there are several indirect indicators that suggest a highly effective way of working:

The team spots impediments on its own
Uncovering hurdles and blockers within the team contributes to a better understanding of what makes a developer’s work harder. Just as important, though, is making transparent to everyone involved how these hurdles are worked on and resolved. This sensitizes the team to spotting emerging impediments, and in the best case it even learns to raise and resolve impediments itself.

The team improves continuously
In retrospectives, the ScrumMaster helps the team reflect on its own work. Once positives and areas for improvement have been identified, actions can be derived from them. Here too, the ScrumMaster works actively with the team and tries to awaken each team member’s commitment to improving themselves, and with that the team’s performance. When the ScrumMaster succeeds, it partly shows up as higher productivity – again, a measurable result.

The team trusts its ScrumMaster
Feedback is given openly, heard, and accepted. That the feedback addresses performance rather than the person is a sign that safety, respect, and trust prevail within the team. The team stands up for one another and sees in every member the value they contribute to the overall project. Creating this atmosphere of mutual support, shared responsibility, and exchange is one of the most essential tasks of a ScrumMaster, and thus a good yardstick for effective work.

The ScrumMaster reflects and recognizes opportunities for self-improvement
In the long run, a ScrumMaster is only an asset to the team if they also recognize and act on their own potential for improvement. Through constant reflection on their own work, the ScrumMaster tries to identify their value to the team and, where possible, to increase it.

Let’s move from the perspective of the individual ScrumMaster to the scaled level and take on the role of the ScrumMaster of ScrumMasters (SoS). Naturally, they too want to be able to measure and assess their value to the organization. How might they gauge the effectiveness of their work?

The individual Scrum teams deliver
An SoS has fulfilled their task, and thus done a good job, when the individual ScrumMasters resolve their teams’ impediments and thereby help them deliver. This support in working through impediments ensures that the company’s projects move forward and get completed.

Cross-team impediments get resolved
If an obstacle affects more than one development team, the ScrumMasters should sensibly work on a solution together – with the SoS actively supporting them. The SoS encourages the ScrumMasters to keep their eyes open at work so that hidden hurdles can also be surfaced and removed.

The ScrumMaster team commands a broad range of methods
Every ScrumMaster should master several different facilitation, working, and leadership techniques in order to support their team effectively. The SoS ensures knowledge sharing, documentation, and the development of these competencies, and can thus measure their own effectiveness by the variety of methods and how well they work in practice.

As a team, we are strong!
ScrumMasters usually face similar difficulties in their teams. That makes support and exchange among ScrumMasters all the more important, so that they don’t lose sight of the overarching goal. A company’s ScrumMasters jointly shape the organization and spread the agile mindset.

Finally, let’s put on the glasses of management. Let a manager speak for them and bring our thoughts to a fitting close:

“With the Scrum of Scrums I want to give my employees an opportunity for exchange – about team practices worth recommending, but also about failed attempts to make their own team productive. The most important goal is and remains that the individual teams deliver, at high quality and to the satisfaction of our customers. If that goal is achieved, the ScrumMaster has done their work effectively.”

Written in collaboration with Cathleen Spröte and Lisa Zenker

Categories: Blogs

R: dplyr – Error in (list: invalid subscript type ‘double’

Mark Needham - Tue, 04/28/2015 - 00:34

In my continued playing around with R I wanted to find the minimum value for a specified percentile given a data frame representing a cumulative distribution function (CDF).

e.g. imagine we have the following CDF represented in a data frame:

library(dplyr)
df = data.frame(score = c(5,7,8,10,12,20), percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))

and we want to find the minimum value for the 0.05 percentile. We can use the filter function to do so:

> (df %>% filter(percentile > 0.05) %>% slice(1))$score
[1] 7

Things become more tricky if we want to return multiple percentiles in one go.

My first thought was to create a data frame with one row for each target percentile and then pull in the appropriate row from our original data frame:

targetPercentiles = c(0.05, 0.2)
percentilesDf = data.frame(targetPercentile = targetPercentiles)
> percentilesDf %>% 
    group_by(targetPercentile) %>%
    mutate(x = (df %>% filter(percentile > targetPercentile) %>% slice(1))$score)
 
Error in (list(score = c(5, 7, 8, 10, 12, 20), percentile = c(0.05, 0.1,  : 
  invalid subscript type 'double'

Unfortunately this didn’t quite work as I expected – Antonios pointed out that this is probably because we’re mixing up two pipelines and dplyr can’t figure out what we want to do.

Instead, he suggested the following variant, which uses the do function:

df = data.frame(score = c(5,7,8,10,12,20), percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))
targetPercentiles = c(0.05, 0.2)
 
> data.frame(targetPercentile = targetPercentiles) %>%
    group_by(targetPercentile) %>%
    do(df) %>% 
    filter(percentile > targetPercentile) %>% 
    slice(1) %>%
    select(targetPercentile, score)
Source: local data frame [2 x 2]
Groups: targetPercentile
 
  targetPercentile score
1             0.05     7
2             0.20    12

We can then wrap this up in a function:

percentiles = function(df, targetPercentiles) {
  # make sure the percentiles are in order
  df = df %>% arrange(percentile)
 
  data.frame(targetPercentile = targetPercentiles) %>%
    group_by(targetPercentile) %>%
    do(df) %>% 
    filter(percentile > targetPercentile) %>% 
    slice(1) %>%
    select(targetPercentile, score)
}

which we call like this:

df = data.frame(score = c(5,7,8,10,12,20), percentile = c(0.05,0.1,0.15,0.20,0.25,0.5))
> percentiles(df, c(0.08, 0.10, 0.50, 0.80))
Source: local data frame [2 x 2]
Groups: targetPercentile
 
  targetPercentile score
1             0.08     7
2             0.10     8

Note that we don’t actually get any rows back for 0.50 or 0.80 since we don’t have any entries greater than those percentiles. With a proper CDF, one that continues up to 1.0, we would, so the function does its job.
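
If the do() trick feels opaque, the same lookup can also be written without dplyr. Here’s a minimal base R sketch of my own (percentiles_base is a hypothetical name, not from the post) that loops over the targets directly and assumes df is sorted by percentile:

percentiles_base = function(df, targetPercentiles) {
  rows = lapply(targetPercentiles, function(p) {
    above = df[df$percentile > p, ]        # rows beyond the target percentile
    if (nrow(above) == 0) return(NULL)     # no entry above this percentile
    data.frame(targetPercentile = p, score = above$score[1])
  })
  do.call(rbind, rows)                     # rbind silently drops the NULLs
}
 
> percentiles_base(df, c(0.05, 0.2))
  targetPercentile score
1             0.05     7
2             0.20    12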

Categories: Blogs

Empathy Driven Development: Rescuing Value from the Bermuda Triangle

Agile Management Blog - VersionOne - Mon, 04/27/2015 - 19:58

Embarrassing Discovery

True story from when I was an agile coach for a multi-billion dollar, Fortune 15 giant…

It was a large agile program and we had new team members joining the program in waves. Not everyone was familiar with agile and we did not have money for in-person training. So we had to do the next best thing – remote agile training. Ugh!

Anyway, so I designed five 90-minute modules and as I was introducing the concept of optimizing value and minimizing waste, I asked the attendees who our customers and end users were and how our program helped them be more successful.

Nothing. Crickets.

I made an embarrassing discovery – most of the attendees were unaware of who our end customers and end users were. Application architects, ScrumMasters, developers. This made it hard to talk about value and waste.

Course Correction

I was disappointed in myself. I had let my team down. So we worked with our product owner to shoot a series of videos that answered key questions about our business, our customers, our end users, their pain, what made them successful, and where our program fit in. We made these videos available on our team SharePoint site and made them mandatory viewing as part of onboarding.

A few months later, as I was walking around the office, chatting with some new team members, I asked the same questions – who were our customers and users, what were their pain points, where did our program fit in? I got great answers. No more crickets.

Tram-Scrum

This got me thinking about how frequently we get sucked into the mechanics of Scrum – the events and artifacts – without reflecting on the business value we were chartered to create. I call this TRAM-SCRUM, where TRAM stands for:

  • T-actical
  • R-itualistic
  • Am-ateur

This was not why Jeff Sutherland and Ken Schwaber created Scrum. They wanted us to use Scrum strategically and professionally. I learned more about professional Scrum in the Scrum.org Scaled Professional Scrum course in Boston last year, but that is another topic for another blog.

I began wondering about some of the most common obstacles that prevented teams from making the shift from TRAM-SCRUM to PROFESSIONAL SCRUM. One common pattern kept niggling away at me – the lack of stakeholder empathy.

The Bermuda Triangle 

Our industry still suffers from the curse of the Bermuda Triangle – the place where all stakeholder value goes to die. This triangle is a crafty chameleon and seems to have changed forms over the years, but you know what Shakespeare said –

“A stinky diaper-genie by any other name would smell just as stinky.”
– William Shakespeare

Or something to that effect, anyway. English literature never was my strong point.
But I digress. Even though we have migrated to the rituals of Scrum, in many cases we still labor under the tyranny of the Iron Triangle of staff, schedule, and scope; we just rename the sides to be more “agile”…

Cost Thinking vs. Value Thinking

So how do we stop getting more efficient at delivering waste and get more efficient at delivering value? This is another lesson I learned in the Scrum.org Scaled Professional Scrum course.

STEP 1: Let go of cost thinking – “How can I relentlessly cut costs?” – which ignores the unintended destruction of value.

STEP 2: Take baby steps toward value thinking – “How can I increase delivery of stakeholder value at the lowest cost?” – to generate sustainable stakeholder value.

I think one of the barriers to making this shift is a lack of stakeholder empathy. Which brings us to the question: what is empathy…?

Empathy

Empathy can be defined as…

“The action of understanding, being aware of and being sensitive to the feelings, thoughts, and experiences of another.”

It requires us to walk in another’s shoes…

Current State

So this might open up an avenue of exploration for you – what is the current state of empathy in your teams, when it comes to your stakeholders?

We must begin by creating a shared understanding of who our stakeholders are. They might fall into different buckets…

  1. SPONSORS: Fund the Scrum team
  2. END USERS: Use the increments
  3. END CUSTOMERS: Served by end users via the increment
  4. COLLEAGUES: Impacted by the Scrum team
  5. EMPLOYERS: Writing the paycheck
  6. COMMUNITY: In which the team works
  7. OTHER: …?

What indicators might you use to assess this state? Are you satisfied by the current state, or would you like to make any adaptations? And if you would like to make some adaptations, what might you do…?

A Fresh Approach

I humbly offer to you an idea that has been evolving in my mind for about a year or so – drum-roll please….

Empathy Driven Development

An approach to developing software that relies on team members making decisions based on empathy towards impacted stakeholders.

This approach requires development teams to creatively self-organize within the constraints of their organizations, to work around the barriers that isolate them from their stakeholders.

Empathy Driven Development (EDD) is complementary to agile software delivery with Scrum and is key to Scrum activities and events like backlog management and refinement, sprint planning, the daily Scrum, and the sprint review.

Common Barriers to EDD

As you think about using this approach, chances are that you will encounter some common obstacles…

  • Stakeholders inaccessible to development team
  • Un-validated assumptions about stakeholder needs
  • Layers of proxies between stakeholders and development team
  • Distrust between stakeholder proxies and development team
  • Cynicism / apathy towards stakeholders
  • No time / money to connect with stakeholders

If you are not ready to give up, here is a place to start…

Stakeholder Empathy Map

Get your team together in a room and…

  1. Create a grid with flip charts and tape.
  2. In the first column, ask your team to put up post-its for all your stakeholders. Review and refine as a group.
  3. Now, ask your team to put up post-its capturing, in 140 characters or less, each stakeholder’s…
  • ACCOUNTABILITY: What outcome are they responsible for?
  • MOST VALUABLE: What do they consider to be most valuable in the software they use to help them deliver on their accountability?
  • MOST PAINFUL: What do they find most painful and frustrating in the software they use to deliver on their accountability?

Review and refine as a group.

For instance, if we were doing this exercise in a group that develops patient care software used in a hospital, the outcome might look a little bit like this…

Desired Outcomes

Whenever I have facilitated this exercise, it has generated tremendous conversation among the team members, which is the desired outcome. We want this exercise to result in…

  • Good conversations
  • Identifying un-validated assumptions
  • Head scratching
  • Curiosity
  • Action items to connect with stakeholders
  • Many follow-up actions and conversations

But most importantly… an increase in stakeholder empathy!

Head vs. Heart

As you explore this further, an approach that might help you facilitate the inquiry is a pattern described by Dr. John Kotter, international change leadership guru, Harvard Business School professor, and founder of Kotter International. In his book, The Heart of Change, Dr. Kotter illustrates one of his Six Key Principles:

Head and Heart. Dr. Kotter’s research demonstrates that successful large-scale change requires engaging not just the minds of those we lead, but, more importantly, their hearts. Creating a vivid picture of opportunities ahead is vital. A heartfelt passion and commitment enables companies to overcome the inevitable barriers and obstacles encountered along the way to success.

Try to apply Dr. Kotter’s principle to establish an emotional connection between your team members and your stakeholders.

Self-Organization 

Challenge your teams to self-organize within the constraints of your organization to increase stakeholder empathy. Here are some initial ideas to get the ball rolling…

  • Try to get your developers to a customer site. (Make sure your most influential / cynical team members participate.)
  • Try to get customers to developer sites.
  • If you don’t have money, video customers using the product and share the footage on your team site.
  • Maybe you can Skype / GoToMeeting with a webcam and watch your customers use your product and get frustrated or delighted with it.
  • Maybe you can include all these videos in new hire training / onboarding.

No matter what your team does, it must capture the smiles and frowns of your customers and stakeholders so it tugs at the hearts of your teams.

Call to Action

So here is my call to action – begin using EDD right now….

  1. Apply empiricism
  2. Create an empathy map
  3. Interact with stakeholders face to face or via webcam. Make sure you talk to them and that they talk to each other.

Walk in their shoes. Self-organize and figure it out…!

  • TRANSPARENCY: Current state of stakeholder empathy
  • INSPECTION: Is it where you would like it to be?
  • ADAPTATION: Self-organize to make it better!

Rescue value from the Bermuda Triangle of cost thinking and value destruction!

And don’t forget to let me know how it goes.

Keep calm and scrum on!

Categories: Companies

Cycle Time report—an example script using Tracker’s API

Pivotal Tracker Blog - Mon, 04/27/2015 - 19:36

Tracker’s API can provide information that’s not currently available through the Tracker web application. Read on to learn how to use the API to compute cycle time.

Cycle time defined for Tracker

Lean software development has given us some useful metrics to help identify problem areas in our development flow. Lead time is the time from when a story is first created to its final acceptance. Mary and Tom Poppendieck call this “Concept to Cash” in their book, Implementing Lean Software Development.

Cycle time measures only the part of lead time from when work actually begins on the story to the final acceptance. Since Tracker’s workflow allows story state to be changed at any time to any state, a story might be rejected and restarted, unstarted and put back in the Icebox, or even changed from accepted to unstarted. Therefore, for Tracker stories, cycle time begins the very first time a story was started, and ends at the last time it was accepted.
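
To make the definition concrete, here is a tiny R illustration with invented timestamps: cycle time is simply the span from the first started-at time to the last accepted-at time.

first_started <- as.POSIXct("2015-01-05 10:00")  # first time the story was started
last_accepted <- as.POSIXct("2015-01-12 16:00")  # last time it was accepted
difftime(last_accepted, first_started, units = "days")
# Time difference of 7.25 days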

It’s helpful to identify stories with a cycle time significantly longer than what’s usual in the project. For example, if a story was delivered and rejected multiple times, it might be a sign that the delivery team and product stakeholders didn’t share the same understanding of how the feature should work at the time the developers started it. If that happens often, the team might try experimenting with techniques to improve shared understanding, such as story mapping.

Another reason for a long cycle time could be that the team was using brand-new technology with a steep learning curve, so it took a long time to finish one or more stories. Once they’ve mastered it, that probably won’t happen again, so there may be no need to try anything different.

Computing cycle time for your project’s stories

Because of Tracker’s built-in flexibility to cycle through different story states multiple times if needed, calculating cycle time for Tracker stories is not straightforward. We’re planning new reports that will show these types of metrics. Until then, you can calculate cycle times for stories in your project using Tracker’s API.

You can check out our example Ruby script from our repository of API examples. The script uses the activity endpoint to retrieve all the activity for the specified project. (Note that only the most recent six months of activity are available, and it’s returned in reverse order, with the most recent activity first.) The response to this endpoint is paginated, so our example shows how to page through all the entries, 100 items at a time. The script loops through all the activity events, looking for story state changes. It stores the earliest started date and latest accepted-at date for each accepted story, using the information returned in the activity resource.
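
To make the paging concrete, here is a rough sketch of that loop, written in R with the httr package rather than the Ruby of the linked script. The endpoint path, the X-TrackerToken header, and the limit/offset query parameters follow Tracker’s v5 API, but fetch_all_activity is a hypothetical helper of my own, not part of the example script, so check the API documentation before relying on the details.

library(httr)
 
# Page through a project's activity, 100 items at a time.
fetch_all_activity <- function(token, project_id) {
  url <- sprintf("https://www.pivotaltracker.com/services/v5/projects/%s/activity",
                 project_id)
  offset <- 0
  all_items <- list()
  repeat {
    resp <- GET(url, add_headers("X-TrackerToken" = token),
                query = list(limit = 100, offset = offset))
    stop_for_status(resp)            # fail fast on auth or rate-limit errors
    page <- content(resp)            # parsed JSON: a list of activity items
    if (length(page) == 0) break     # past the last available page
    all_items <- c(all_items, page)
    offset <- offset + length(page)
  }
  all_items                          # most recent activity first
}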

Next, the script uses the API to look up the story name and type of each active story, drops the release-type stories, and then drops all stories that didn’t have at least one started-at and accepted-at time in the last six months. The remaining stories are sorted by cycle time in ascending order, and a report is printed out with the story ID, cycle time, and story name (since story names can be long, we’ve ellipsified anything over 40 characters).

To run the script, set environment variables for your Tracker API TOKEN and PROJECT_ID. Depending on the size of your project, it can take a few minutes to run.

Here are a few pointers to help you use this example:

  • If any stories that were accepted in your project were subsequently deleted or moved to another project, the report will show “deleted” instead of the story name.
  • Since you can currently only retrieve around the last six months of activity, the script can’t calculate cycle time for stories that were started more than six months ago.
  • It is possible in Tracker to put a story directly into a different active state than started (e.g., a story could go directly from the unstarted to the finished state). This script won’t compute cycle times for those, but of course you can write your own script to accommodate those differences.
  • Release-type stories and stories that are not (yet) in the accepted state will not appear on the report.
  • If a story was started more than once, and the first time was before the available activity for the project, the resulting cycle time would be incorrect.

We hope this example will help you use Tracker’s API for your own reports of metrics such as cycle time. For bonus points, update the script to also display the number of times each story was rejected. There’s an example of that in the repository, too.

Please see our API help to learn more about the API.

Categories: Companies

10 Benefits of Agile You Definitely Don’t Want to Miss Out On

Agile Management Blog - VersionOne - Mon, 04/27/2015 - 19:33

Are you sure you’re receiving the most benefits of agile possible? Do you know the top ten benefits of agile?

Read this article to learn what nearly 4,000 of your agile peers said were the benefits of agile that their organizations are receiving.

 

#1 Ability to Manage Changing Priorities

According to the 9th annual State of Agile™ survey, 87% of the respondents said agile improved their ability to manage changing priorities. If you’re familiar with the Agile Manifesto, this likely resonates with one of the values: Responding to change over following a plan. This value was, in fact, probably the foundational reason for the agile movement, and its top ranking in the State of Agile survey reinforces the value of agile.

There’s no question that adopting agile practices enables you to better manage changing priorities. Having product backlogs that product owners rank as information becomes known improves your ability to manage changing priorities. Planning is continuous, and each sprint is an opportunity to revisit priorities based on feedback and insight gained.

#2 Increased Team Productivity

Approximately 84% of the survey respondents stated that agile increased team productivity. Increasing team productivity has a lot to do with getting employees engaged and focused. Employee engagement comes from having a sense of purpose. Fostering employee engagement, and with it team productivity, is a natural byproduct of adopting agile.

When practicing agile, the product owner works from a set of overarching initiatives and a vision for the product. The product owner communicates the vision and value-based decisions to the team on a regular cadence as they plan each sprint cycle. This makes it very clear to a team practicing agile that what they’re producing in every sprint cycle is always of the highest value. The team is protected from outside interference to minimize context switching and multitasking, so they can remain focused on completing the sprint plan. Agile retrospectives and continuous improvement initiatives further improve team performance.

#3 Improved Project Visibility

Another 82% of the respondents answered that agile improved project visibility. Personally, as a former manager, I would rank improved project visibility as a top agile benefit. Agile gives all stakeholders, team members, product owners, and management real-time access to the health and status of all projects. There is no need to wait on formal weekly, monthly, or quarterly status updates. Instead, it’s simple to check the project- or sprint-level burndown and burnup charts, and you can apply velocity for project forecasting.

As agile teams are updating the work they’re doing on a daily basis, the entire enterprise is aware of the accurate, current status of all the work across all of their projects. It’s simple and easy, with the added benefit that the story and task boards provide the teams with a visible information radiator that promotes team collaboration.

#4 Increased Team Morale/Motivation

The survey found that 79% of respondents to the 9th annual State of Agile survey said agile increased team morale/motivation. I think that increased team morale is tied to many of the same factors that help improve productivity. Team members feel more pride and satisfaction in knowing that they’re delivering valuable, quality work that somebody wants and appreciates.

Increased transparency in agile reduces a lot of stress and pressure. Agile allows organizations to break some of the political dysfunctions. As self-managing teams, members have a direct voice in planning and can take ownership of their commitments, thus making team members feel more connected. As an added benefit, working with a happy team is fun and promotes growth and skill development.

#5 Better Delivery Predictability

In the survey, 79% of the respondents stated that agile provided better delivery predictability. This benefit emerges as organizations become more experienced at practicing agile. It’s important to realize that you don’t achieve better delivery predictability from day one. You have to practice agile for a few sprint cycles, and a certain maturity has to be established across the project teams. Over time, as your teams continue to practice agile and begin to stabilize, their velocity metrics will emerge and stabilize.

Velocity is what enables agile organizations to deliver more predictably. In the past, project managers tried to forecast and lay out plans in an attempt to predict the future. As hard as we might try, we can’t control the future, so those results were often less than effective. An agile organization instead uses an actual, proven measure such as velocity and applies it to relative size estimates of its backlogs.

#6 Enhanced Software Quality

Another 78% of the respondents answered that agile enhanced software quality. We coach agile teams not to compromise quality in order to make up for time or scope. An organization can never really achieve the optimum benefits of agility if it isn’t addressing the underlying quality of its product or service, as well as actively managing technical debt. It’s instilled into the agile principles and values that teams estimate and plan accordingly to build a quality product.

Oftentimes, with traditional waterfall approaches, there are schedule pressures that can lead teams to feeling that they have to compromise quality. If the team has a fixed scope and is pressured to deliver by an unrealistic fixed date, the team doesn’t have a choice but to compromise quality.

An agile team, however, isn’t pressured to make those choices. Quality is a recognized commitment by these teams. While the results may not be visible immediately, over time, as the culture changes, teams will inevitably produce a higher-quality product, leading to customer satisfaction and more sustainable, scalable products.

#7 Faster Time to Market

The survey results reflect that 77% of the respondents said that agile provided faster time to market. A desire to get to market faster is a very common reason that businesses initially decide to adopt agile. Organizations are feeling competitive pressures and the need to improve their product faster to stay relevant. If another company gets to market with something better, they can lose significant customer share.

I’m a little surprised that faster time to market isn’t higher up on the list. I suspect this is because it’s not necessarily inherent in agile planning practices alone and may not be fully realized initially. Yes, our teams are going to focus on delivering working software on a short cadence and getting feedback early and often, but there may be some additional factors that come into play before software can be deployed into the marketplace.

#8 Reduced Project Risk

According to the 9th annual State of Agile survey, 76% of the respondents said that agile reduced project risk. Practicing agile reduces project risk by the very fact that agile organizations are conducting very short feedback loops, typically every two weeks or less. Agile teams are presenting results and getting feedback from the stakeholders in short sprints. That in itself reduces risk because unforeseen issues are discovered early when they can be addressed with less impact.

Additionally, in some cases teams are encouraged to rank known high-risk items so that they can work on these items sooner rather than later and address what they learn earlier in the project. This helps teams evaluate the risks earlier and discover whether or not the project is going to be viable and deliver the expected value. If needed, you can redeploy these teams to work on something else that will deliver better value.

#9 Improved Business/IT Alignment

Among survey respondents, 75% said that agile improved business/IT alignment. I often hear improved business/IT alignment cited as a main reason to adopt agile. It is realized through the closer collaboration that is inherent in the agile principles and values, primarily through transparency and the feedback from short inspect-and-adapt cycles. I think this is a clear benefit that’s easily realized as teams are aligned to have more open and transparent collaboration between their product owner, who serves as proxy for the customer, and business stakeholders. Defining and communicating a clear project vision drives this alignment.

#10 Improved Engineering Discipline

Another 72% of the respondents said agile improved engineering discipline. Improved engineering discipline is, again, not something that’s inherently obtained just by adopting agile planning practices. When the expectations and the underlying cultural principles of agile are taken to heart, team members are empowered to deliver quality work as opposed to just getting work done. The ultimate foundation of a good, solid product is a scalable design and architecture.

Once agile organizations come to embrace the agile principles with a goal of delivering high product quality, they must also embrace sound engineering discipline. Effective design, configuration management and testing strategies are essential to maximize agility.

Conclusion

These results show that there isn’t just a single benefit of agile. Different organizations and teams are gaining different agile benefits.

Would you like to find out even more information about the benefits your peers are receiving? Check out the 9th annual State of Agile survey.

State of Agile is a trademark of VersionOne, Inc.

About the Author
Jo Hollen
SAFe Agilist, CSM, CSP
Agile Coach and Product Consultant, VersionOne

Jo Hollen has more than 30 years of experience in the aerospace industry. Her career began with training Space Shuttle astronauts. Jo later moved into software engineering management where she introduced agile practices for Mission Control Center applications, including software currently onboard the International Space Station. With a passion for continuous improvement and challenging the status quo, Jo believes that servant leadership and agile principles and values just make sense and welcomes the opportunity to help organizations optimize their effectiveness.

Categories: Companies

Flow Thinking

TV Agile - Mon, 04/27/2015 - 17:41
Learn how to shift your focus from keeping people and equipment busy to having work flowing to your customers without unwanted waiting time and how that new focus will affect your meetings, process management, and metrics. Video producer: http://aceconf.com/
Categories: Blogs

Pair Programming Considered Bad

Scrum Expert - Mon, 04/27/2015 - 17:35
This presentation examines the latest theories from psychology and how they throw new, interesting and sometimes frightening light on the tools, techniques, process and practices that are often the dogma of modern software development. Video producer: http://aceconf.com/
Categories: Communities

Two New Case Studies: NAPA Group & Travis Perkins

Agile Product Owner - Mon, 04/27/2015 - 16:02

Hi,

We are always interested in hearing about SAFe transformations in the marketplace.  Here are two more recently published case studies that we’d like to share, both from Europe.

The NAPA Group

Finland-based NAPA Group—a leading software house in eco-efficient ship design—joined with our Gold Partner, Nitor Delta, for a 2-year journey of dramatic organizational change.

They began delivering value on a 3-month cycle, a huge turnaround from the year-long cycle they had before, and they increased their predictability to 92%. There were notable improvements in other areas as well, but the real story here is that these successes were the fruit of combining Scrum with a full SAFe implementation. It demonstrates how Scrum can provide a great foundation for the individual teams, while SAFe provides the framework to support multiple teams within the same value stream.

Travis Perkins

In another study, Gold Partner Rally Software teamed up with Travis Perkins, the UK’s leading building materials supplier. They put into place a 3-year adoption plan designed to transform a 200+ year-old, legacy-burdened bureaucracy into a nimble, 21st-century organization. Their initial goal was to eliminate wasted work while accelerating ROI. Change happens slowly in any enterprise, but within a year they completed their first ART, a huge feat for such an enterprise, and pointed to SAFe as making it “… easier for us to focus on what has the most business value. Instead of delivering perceived value, we’re now delivering actual value.”

You can read these case studies, and more at: scaledagileframework.com/case-studies/

Stay SAFe,
–Dean

Categories: Blogs

March Dallas Recap: Building Continuous Delivery

DFW Scrum User Group - Mon, 04/27/2015 - 15:34
Continuing on a theme of QA/testing, in March we had DFW Scrummer Allen Moore share his experience in implementing continuous delivery. The full title of the session was “Building Continuous Delivery: A Retrospective (or how a QA Strategy succeeds without …
Categories: Communities

MVF, MMF, WTF

TargetProcess - Edge of Chaos Blog - Mon, 04/27/2015 - 12:05

The concept of the Minimal Viable Feature (MVF) marks a significant milestone in product development: release something that you think solves a problem, and listen for the feedback. While the concept itself is great, the devil is in the details. In general, it is quite hard to learn how to define an MVF. It is an art, not a science. In this article I will share my experience and some patterns I have learned.

#1 Cut. The. Scope.

The most common mistake in MVF definition is bloated scope. There is always a desire to put one more thing into the feature and make it more useful and more appealing to the end user. No Product Owner can resist this temptation, and in almost every feature I led I added some unplanned user stories. In fact there is nothing wrong with this approach: you discover new information and change plans. However, there is a real danger of pushing the feature beyond MVF and delaying its release. A nine-month feature? We had one in our company.

Now we have a three-month rule for every MVF. There should be serious reasons to spend more than three months on MVF implementation. The MVF’s goal is to prove concepts and discover “unknowns” – don’t let it balloon into an MMF.

#2 Feature kick start

Everybody should understand why we are adding a feature to the product. There is only one reason to add a new feature — it solves some important problem that quite a few users face quite often. In the simplest case you have hundreds of requests that paint the problem in bright colors, and all you have to do is invent a solution. In a more complex case, users don’t fully understand the problem and throw out various solutions that in fact don’t solve this particular problem. Only experience and system-level thinking can help to spot such cases. In the worst case you have no feedback at all and rely on your intuition to define the problem. This is a dangerous practice that can lead to extreme results: genius insights or total fuck-ups.

At a Feature Kick Start meeting we have people from marketing, development, testing, design, and sales. All of them bring valuable information about various facets of the problem.

Product Board Meeting

These meetings have several goals:

  1. Clearly define the problem we solve.
  2. Define the scope of the MVF.
  3. Decide what we don’t know and what feedback we will accumulate now and after the MVF release.
  4. Bring the development team and salespeople to a common understanding of the feature.

#3 Huge upfront UX is bad

We tend to treat UX and development as completely separate activities. Sometimes we spent months on a feature’s UX, built prototypes, tested them, and then boom… priorities changed and the feature was no longer needed as much. Almost all the time we had spent was just waste. Say we get back to the feature next year: now we have new information, the UX we did a year ago is obsolete, and we have to start over again.

UX and Development phases

Now we start UX when the feature actually starts. By then the Feature Team has almost completed the previous feature, so it has some spare time to dig into the new feature and discuss its design, flows, etc. It may happen that developers don’t do much for some time, since the UX is not ready. However, there are often many things they can do: fix some old bugs in the background, prototype the new feature, try out technical and architectural ideas, implement something we are 100% certain about. So while in theory it looks like you are not “utilising 100% of resource capacity”, in practice a single-feature flow for a Feature Team shortens cycle time and reduces wasted activity.

#4 MMF is inevitable

Usually an MVF solves only part of a problem, and some cases are missing. Usually the solution itself is not the best one. In most cases an MVF is not a “complete” feature, and you should finalize it. We call this finalization the Minimal Marketable Feature (MMF). At this point the feature provides a complete solution to a problem, the solution itself is beautifully designed, and it is on par with or outperforms similar solutions in competing products.

MVF and MMF flows

It may happen that new feedback reveals the MMF is still not enough; then you can iterate and release as many improved versions as you need. This process is not formalized in our company.

#5 Real feedback is slow

We hoped to have a 2-3 week delay between MVF and MMF, and we thought it would be enough to accumulate sufficient feedback. We wanted a process like this:

MVF - MMF - MVF - MMF

However, in most cases it takes 2-4 months to generate a good amount of relevant feedback. It takes time to accumulate and to reveal common patterns. For example, we redesigned navigation and allowed users to create their own Groups and Views inside these Groups (by the way, this was the case where we threw out UX done 9 months before the feature started). It took us 5 months to really understand what mistakes were made and what problems most companies had with this new flexible navigation. It turned out the navigation was too flexible, so now we are reducing this flexibility and adding some restrictions.

We changed the process, and now we plan for a 3-month delay between MVF and MMF. The gap is filled by another MVF, so at a high level we release the first minimal feature, then release the second minimal feature, and then, based on the feedback generated, complete both features:

MVF  - MVF - MMF - MMF

#6 Real feedback comes from real usage

Don't get me wrong, I'm not against prototypes, shared wireframes, surveys, etc. However, my experience shows that the only way to get really relevant feedback is to release something in the product. All other ways of generating feedback are flawed. Real users bring so many unexpected and interesting insights.

When we started adopting a UX process in our company 6 years ago, we tried almost every possible way of gathering feedback. We created several solutions to a single problem and ran usability tests, surveys, customer interviews and UX groups, and shared concepts and ideas. We still use most of these methods from time to time, but our current approach is extremely lightweight and balanced. In general, we build light prototypes only when we just can't choose between two options (a 50/50 vote inside our team). We never build full interactive prototypes that replicate future system behavior; we just share sketches and main concepts. With the right questions asked, this simple method allows us to get very good preliminary feedback.

TL;DR
  1. Set a clear MVF goal and cut any MVF scope beyond this goal.
  2. Bring sales, marketing, development and design people together to define the MVF.
  3. A huge upfront UX phase is bad and usually leads to waste.
  4. An MVF is not a full feature. Embrace that and don't rush.
  5. Real feedback comes only from a delivered feature.
  6. It takes several months to accumulate enough information to reveal patterns and decide how to improve a feature.
Categories: Companies

Abstractions Save Testing Time in TDD

NetObjectives - Mon, 04/27/2015 - 09:20
In my Prefactoring book, I have a guideline "When You're Abstract, Be Abstract All the Way". The guideline recommends never using a primitive (e.g. int or double) in a parameter list, except as a parameter to a constructor. Although the book was primarily focused on creating high-quality code in general and not specifically on test-driven development (TDD), it turns out this guideline can...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
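
A minimal sketch of what that guideline can look like in practice (written here in R with an S4 class; the class and function names are illustrative, not taken from the book): a bare numeric appears only when constructing the domain type, and every other parameter list takes the type itself.

library(methods)
 
# Wrap the primitive in a small domain type instead of passing bare numerics.
setClass("Dollars", slots = c(amount = "numeric"),
         validity = function(object) {
           if (object@amount < 0) "amount must be non-negative" else TRUE
         })
 
# The function takes a Dollars, not a double, so a unit mix-up fails fast.
addTax = function(price) {
  stopifnot(is(price, "Dollars"))
  new("Dollars", amount = price@amount * 1.2)
}
 
addTax(new("Dollars", amount = 10))  # a Dollars with amount 12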
Categories: Companies

Agiles Anforderungsmanagement mit dem Poke-Prinzip

Scrum 4 You - Mon, 04/27/2015 - 08:39

Marco Ley, Head of eDevelopment at CosmosDirekt, spoke at the Softwareforen Leipzig last week about "Agile Requirements Management: The Poke Principle - From Hard Requirements to Small Experiments." I have to tell you about this talk because I am so proud of this CosmosDirekt team. I did nothing to contribute to it, nobody there knows me, and I don't want to claim any laurels that belong to Marco Ley, but I am simply completely fascinated.

Do you know that feeling when you work hard for something and then realize that everything you have been thinking and constantly talking about is suddenly becoming reality? Well, that's how I felt that morning during Marco Ley's talk. He explained that his development teams are set up fully cross-functionally: UX, RE, testers, developers. These teams don't simply work through requirements; they develop the user stories themselves, based on rough guidelines. And these are not classic user stories but hypotheses, which are verified in more or less elaborate A/B tests on the production environment before the functionality goes live for the entire CosmosDirekt portal. The data gained from the implementation proves whether they really deliver a return on investment.

With this, Mr. Ley shows us that he has succeeded in living the Product Owner role the way, in my humble opinion, it should be lived. He finds ideas, evaluates whether they can make money, and then turns those ideas into hypotheses that he has his colleagues verify through implementation. What works is kept; the rest is thrown away. Simply great!

Of course we asked him how he can tell whether a piece of functionality was a success, and he said: "Because we have a basis of data." He makes decisions based on data gained through experimentation, not on political considerations. Chapeau! Listening to him, I feel sorry for the rest of the online direct-insurance industry. It had better brace itself if his approach continues to take hold at CosmosDirekt. His team will simply leave everyone else behind.

Thank you for this great talk!

A little advertising at this point: with the Agile User Group, which meets there twice a year, the Softwareforen in Leipzig have created a really great event. I am very grateful to be part of it. More info here

Categories: Blogs

Deliberate Practice: Watching yourself fail

Mark Needham - Sun, 04/26/2015 - 00:26

(Image: Think Bayes book cover)

I've recently been reading the literature on Deliberate Practice by K. Anders Ericsson and colleagues, and one of the suggestions for increasing our competence at a skill is to put ourselves in situations where we can fail.

I've been reading Think Bayes, an introductory text on Bayesian statistics (a subject I knew nothing about), and each chapter concludes with a set of exercises: a potentially perfect exercise in failure!

I’ve been going through the exercises and capturing my screen while I do so, an idea I picked up from one of the papers:

our most important breakthrough was developing a relatively inexpensive and efficient way for students to record their exercises on video and to review and analyze their own performances against well-defined criteria

Ideally I'd get a coach to review the video, but that seems like too much to ask of someone. Antonios has taken a look at some of my answers, however, and made suggestions for how he'd solve them, which has been really helpful.

After each exercise I watch the video and look for areas where I get stuck or don’t make progress so that I can go and practice more in that area. I also try to find inefficiencies in how I solve a problem as well as the types of approaches I’m taking.

These are some of the observations from watching myself back over the last week or so:

  • I was most successful when I had some idea of what I was going to try first. Most of the time the first code I wrote didn’t end up being correct but it moved me closer to the answer or ruled out an approach.

    It’s much easier to see the error in approach if there is an approach! On one occasion where I hadn’t planned out an approach I ended up staring at the question for 10 minutes and didn’t make any progress at all.

  • I could either solve the problems within 20 minutes or I wasn’t going to solve them and needed to chunk down to a simpler problem and then try the original exercise again.

    e.g. one exercise was to calculate the 5th percentile of a posterior distribution, which I flailed around with for 15 minutes before giving up. Watching the video back, it was obvious that I hadn't completely understood what a probability mass function was. I read the Wikipedia entry, retried the exercise, and this time got the answer.

  • Knowing that you're going to watch the video back stops you from getting distracted by email, Twitter, Facebook etc.
  • It’s a painful experience watching yourself struggle – you can see exactly which functions you don’t know or things you need to look up on Google.
  • I deliberately don't copy/paste any code while doing these exercises. I want to see how well I can do them from scratch, so copying would defeat the point.

One of the suggestions Ericsson makes is to focus on 'technique' during practice sessions rather than only on outcomes, but I haven't yet worked out exactly what that would involve in a programming context.

If you have any ideas or thoughts on this approach do let me know in the comments.

Categories: Blogs

It's high time that we stop using Velocity

Agile World - Venkatesh Krishnamurthy - Sat, 04/25/2015 - 05:17

Velocity is the most misused and misinterpreted word/metric in an Agile environment. The key issue is that teams and stakeholders interpret Velocity as a productivity measurement rather than as a measure of the team's capacity.

I don't blame the teams or the managers but the "word" itself. If we look at the synonyms for Velocity (see the screenshot below), all of them point to quickness, momentum and acceleration, which naturally encourages people to connect it with "productivity".

(Screenshot: synonyms for Velocity)

Google for Acceleration or Velocity and you will find images like the following… These images push people to think of competition, racing and winning rather than teamwork or capacity.

(Images: Google image results for Velocity and Acceleration)

I think we should stop using the word "velocity" and start using a word that creates a mental image of the team's capacity.


Do you think this is a fair call?

Categories: Blogs

R: Think Bayes Locomotive Problem – Posterior probabilities for different priors

Mark Needham - Sat, 04/25/2015 - 01:53

In my continued reading of Think Bayes, the next problem to tackle is the Locomotive problem, which is defined thus:

A railroad numbers its locomotives in order 1..N.

One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has.

The interesting thing about this question is that it initially seems that we don’t have enough information to come up with any sort of answer. However, we can get an estimate if we come up with a prior to work with.

The simplest prior is to assume that there's one railroad operator with between, say, 1 and 1000 locomotives, and that each fleet size is equally probable.

We can then write code similar to the dice problem's to update the prior based on the locomotives we've seen.

First we'll create a data frame which captures the cross product of the possible fleet sizes ('number of locomotives') and the locomotives we've observed (in this case we've only seen one locomotive, number 60):

library(dplyr)
 
possibleValues = 1:1000
observations = c(60)
 
l = list(value = possibleValues, observation = observations)
df = expand.grid(l) 
 
> df %>% head()
  value observation
1     1          60
2     2          60
3     3          60
4     4          60
5     5          60
6     6          60

Next we want to add a column representing the probability that the observed locomotive could have come from a fleet of a given size. If the fleet size is less than 60 then the probability is 0; otherwise it's 1 / fleetSize:

prior = 1 / length(possibleValues)  # uniform prior over the possible fleet sizes
df = df %>% mutate(score = ifelse(value < observation, 0, 1/value))  # likelihood of the observation
 
> df %>% sample_n(10)
     value observation       score
179    179          60 0.005586592
1001  1001          60 0.000999001
400    400          60 0.002500000
438    438          60 0.002283105
667    667          60 0.001499250
661    661          60 0.001512859
284    284          60 0.003521127
233    233          60 0.004291845
917    917          60 0.001090513
173    173          60 0.005780347

To find the probability of each fleet size we write the following code:

weightedDf = df %>% 
  group_by(value) %>% 
  summarise(aggScore = prior * prod(score)) %>%  # prior times likelihood
  ungroup() %>%
  mutate(weighted = aggScore / sum(aggScore))    # normalise to get the posterior
 
> weightedDf %>% sample_n(10)
Source: local data frame [10 x 3]
 
   value     aggScore     weighted
1    906 1.102650e-06 0.0003909489
2    262 3.812981e-06 0.0013519072
3    994 1.005031e-06 0.0003563377
4    669 1.493275e-06 0.0005294465
5    806 1.239455e-06 0.0004394537
6    673 1.484400e-06 0.0005262997
7    416 2.401445e-06 0.0008514416
8    624 1.600963e-06 0.0005676277
9     40 0.000000e+00 0.0000000000
10   248 4.028230e-06 0.0014282246

Let’s plot the data frame to see how the probability varies for each fleet size:

library(ggplot2)
ggplot(aes(x = value, y = weighted), data = weightedDf) + 
  geom_line(color="dark blue")

(Plot: posterior probability for each fleet size)

Based on this diagram the most likely fleet size is 60, but an alternative is to find the mean of the posterior, which we can do like so:

> weightedDf %>% mutate(mean = value * weighted) %>% select(mean) %>% sum()
[1] 333.6561

Now let's wrap all that code in a function so we can play around with different priors and observations:

meanOfPosterior = function(values, observations) {
  l = list(value = values, observation = observations)   
  df = expand.grid(l) %>% mutate(score = ifelse(value < observation, 0, 1/value))
 
  prior = 1 / length(values)  # uniform prior over the supplied fleet sizes
  weightedDf = df %>% 
    group_by(value) %>% 
    summarise(aggScore = prior * prod(score)) %>%
    ungroup() %>%
    mutate(weighted = aggScore / sum(aggScore))
 
  return (weightedDf %>% mutate(mean = value * weighted) %>% select(mean) %>% sum()) 
}

If we update our observations to locomotives with numbers 60, 30 and 90, we get the following posterior means, assuming different priors:

> meanOfPosterior(1:500, c(60, 30, 90))
[1] 151.8496
> meanOfPosterior(1:1000, c(60, 30, 90))
[1] 164.3056
> meanOfPosterior(1:2000, c(60, 30, 90))
[1] 171.3382

At the moment the function assumes that we always want a uniform prior, i.e. every option has an equal chance of being chosen, but we might want to vary the prior to see how different assumptions influence the posterior.

We can refactor the function to take in values & priors instead of calculating the priors in the function:

meanOfPosterior = function(values, priors, observations) {
  priorDf = data.frame(value = values, prior = priors)
  l = list(value = priorDf$value, observation = observations)
 
  df = merge(expand.grid(l), priorDf, by.x = "value", by.y = "value") %>% 
    mutate(score = ifelse(value < observation, 0, 1 / value))  # likelihood per observation
 
  df %>% 
    group_by(value) %>% 
    summarise(aggScore = max(prior) * prod(score)) %>%  # every row in a group shares the same prior
    ungroup() %>%
    mutate(weighted = aggScore / sum(aggScore)) %>%     # normalise to get the posterior
    mutate(mean = value * weighted) %>%
    select(mean) %>%
    sum()
}

Now let’s check we get the same posterior means for the uniform priors:

> meanOfPosterior(1:500,  1/length(1:500), c(60, 30, 90))
[1] 151.8496
> meanOfPosterior(1:1000, 1/length(1:1000), c(60, 30, 90))
[1] 164.3056
> meanOfPosterior(1:2000, 1/length(1:2000), c(60, 30, 90))
[1] 171.3382

Now, instead of a uniform prior, let's use a power-law one, where the assumption is that smaller fleets are more likely:

> meanOfPosterior(1:500,  sapply(1:500,  function(x) x ** -1), c(60, 30, 90))
[1] 130.7085
> meanOfPosterior(1:1000, sapply(1:1000, function(x) x ** -1), c(60, 30, 90))
[1] 133.2752
> meanOfPosterior(1:2000, sapply(1:2000, function(x) x ** -1), c(60, 30, 90))
[1] 133.9975
> meanOfPosterior(1:5000, sapply(1:5000, function(x) x ** -1), c(60, 30, 90))
[1] 134.212
> meanOfPosterior(1:10000, sapply(1:10000, function(x) x ** -1), c(60, 30, 90))
[1] 134.2435

Now we get very similar posterior means, converging on 134, so that's our best prediction.
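
A natural next step is a credible interval rather than a single point estimate. Here is a minimal sketch, assuming the weightedDf data frame built earlier for the single observation of 60 (values sorted ascending, weights summing to 1); the function name and defaults are illustrative:

credibleInterval = function(posteriorDf, low = 0.05, high = 0.95) {
  cdf = cumsum(posteriorDf$weighted)                  # cumulative posterior mass
  lower = posteriorDf$value[min(which(cdf >= low))]   # first value at or past the 5th percentile
  upper = posteriorDf$value[min(which(cdf >= high))]  # first value at or past the 95th percentile
  c(lower = lower, upper = upper)
}
 
credibleInterval(weightedDf)  # 90% credible interval for the fleet size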

Categories: Blogs

New Editions to Support your Lean Journey

In case you hadn't noticed, LeanKit recently introduced new product editions: Lite, Standard, Select, Advanced and Premium. These new editions take into consideration the diverse needs of teams of different sizes and at different stages of their lean journey.

Quick Summary of Each Edition

If you're interested in using LeanKit for your own personal […]

The post New Editions to Support your Lean Journey appeared first on Blog | LeanKit.

Categories: Companies
