
Two New Case Studies: NAPA Group & Travis Perkins

Agile Product Owner - Mon, 04/27/2015 - 16:02

Hi,

We are always interested in hearing about SAFe transformations in the marketplace.  Here are two more recently published case studies that we’d like to share, both from Europe.

The NAPA Group

Finland-based NAPA Group—a leading software house in eco-efficient ship design—joined with our Gold Partner, Nitor Delta, for a 2-year journey of dramatic organizational change.

They began delivering value on a 3-month cycle, a huge turnaround from the year-long cycle they had before, and they increased their predictability to 92%. There were notable improvements in other areas as well, but the real story here is that these successes grew out of Scrum integrated with a full SAFe implementation. The case demonstrates how Scrum can provide a great foundation for individual teams, while SAFe provides the framework to support multiple teams within the same value stream.

Travis Perkins

In another study, Gold Partner Rally Software teamed up with Travis Perkins, the UK’s leading building materials supplier. They put a 3-year adoption plan into place, designed to transform a 200+ year-old, legacy-burdened bureaucracy into a nimble, 21st-century organization. Their initial goal was to eliminate wasted work while accelerating ROI. Change happens slowly in any enterprise, but within a year they completed their first ART, a huge feat for such an organization, and they pointed to SAFe as making it “… easier for us to focus on what has the most business value. Instead of delivering perceived value, we’re now delivering actual value.”

You can read these case studies, and more, at: scaledagileframework.com/case-studies/

Stay SAFe,
–Dean


Agile Requirements Management with the Poke Principle

Scrum 4 You - Mon, 04/27/2015 - 08:39

Marco Ley, head of e-development at CosmosDirekt, spoke last week at the Softwareforen Leipzig about “Agile requirements management: the Poke principle – from hard requirements to small experiments.” I have to tell you about this talk because I am so proud of this CosmosDirekt team. I did nothing to bring it about, nobody there knows me, and I don’t want to claim laurels that belong to Marco Ley, but I am simply completely fascinated.

Do you know that feeling when you work hard for something and then realize that everything you have been thinking about and constantly talking about is suddenly becoming reality? Well, that is how I felt that morning during Marco Ley’s talk. He described how his development teams are fully cross-functional: UX, RE, testers, developers. These teams don’t simply work through requirements; they work out the user stories themselves, based on rough guidelines. And these are not classic user stories but hypotheses, which are tested in more or less elaborate A/B tests on the production environment before the features go live for the entire CosmosDirekt portal. The data gained through the implementation proves whether they really deliver a return on investment.

With this, Mr. Ley shows us that he has managed to live the Product Owner role the way, in my humble opinion, it should be lived. He takes care of finding ideas, evaluates whether they can make money, and then turns those ideas into hypotheses that he has his colleagues test through implementation. What works is kept; the rest is discarded. Simply great!

Of course we asked him how he knows whether a feature was a success, and he said: “Because we have the data.” He makes decisions based on data gained through experimentation, not on political considerations. Chapeau! Listening to him, I feel sorry for the rest of the online direct-insurance industry. It had better brace itself if his approach at CosmosDirekt continues to take hold. His team will simply leave everyone else behind.

Thank you for this great talk!

A little advertising at this point: together with the Agile User Group, which meets there twice a year, the Softwareforen in Leipzig have created a really great event. I am very grateful to be part of it. More info here


Deliberate Practice: Watching yourself fail

Mark Needham - Sun, 04/26/2015 - 00:26


I’ve recently been reading the literature written by K. Anders Ericsson and colleagues on deliberate practice, and one of their suggestions for increasing our competence at a skill is to put ourselves in situations where we can fail.

I’ve been reading Think Bayes – an introductory text on Bayesian statistics, something I know nothing about – and each chapter concludes with a set of exercises to practice, a potentially perfect exercise in failure!

I’ve been going through the exercises and capturing my screen while I do so, an idea I picked up from one of the papers:

our most important breakthrough was developing a relatively inexpensive and efficient way for students to record their exercises on video and to review and analyze their own performances against well-defined criteria

Ideally I’d get a coach to review the video, but that seems too big an ask. Antonios has taken a look at some of my answers, however, and his suggestions for how he’d solve them have been really helpful.

After each exercise I watch the video and look for areas where I get stuck or don’t make progress so that I can go and practice more in that area. I also try to find inefficiencies in how I solve a problem as well as the types of approaches I’m taking.

These are some of the observations from watching myself back over the last week or so:

  • I was most successful when I had some idea of what I was going to try first. Most of the time the first code I wrote didn’t end up being correct but it moved me closer to the answer or ruled out an approach.

    It’s much easier to see the error in approach if there is an approach! On one occasion where I hadn’t planned out an approach I ended up staring at the question for 10 minutes and didn’t make any progress at all.

  • I could either solve the problems within 20 minutes or I wasn’t going to solve them and needed to chunk down to a simpler problem and then try the original exercise again.

    e.g. one exercise was to calculate the 5th percentile of a posterior distribution, which I flailed around with for 15 minutes before giving up. Watching the video back, it was obvious that I hadn’t completely understood what a probability mass function was. I read the Wikipedia entry, retried the exercise and this time got the answer (there’s a short sketch of that calculation after this list).

  • Knowing that you’re going to watch the video back stops you from getting distracted by email, Twitter, Facebook, etc.
  • It’s a painful experience watching yourself struggle – you can see exactly which functions you don’t know or things you need to look up on Google.
  • I deliberately don’t copy/paste any code while doing these exercises. I want to see how well I can do the exercises from scratch so that would defeat the point.
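
On that percentile exercise: once the posterior is in a data frame of values and normalized weights, like the weightedDf in my locomotive problem post, the calculation is only a couple of lines. Here’s a minimal sketch with dplyr, assuming those column names:

library(dplyr)

# pth percentile of a discrete posterior: the smallest value whose
# cumulative probability reaches p
posteriorPercentile = function(posterior, p) {
  posterior = posterior %>% arrange(value) %>% mutate(cumulative = cumsum(weighted))
  min(posterior$value[posterior$cumulative >= p])
}

# e.g. posteriorPercentile(weightedDf, 0.05) gives the 5th percentile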

One of the suggestions that Ericsson makes is to focus on ‘technique’ during practice sessions rather than only on outcome, but I haven’t yet worked out exactly what that would involve in a programming context.

If you have any ideas or thoughts on this approach do let me know in the comments.


It’s high time that we stop using Velocity

Agile World - Venkatesh Krishnamurthy - Sat, 04/25/2015 - 05:17

Velocity is the most misused and misinterpreted word/metric in an Agile environment. The key issue is that teams and stakeholders interpret velocity as a measure of productivity rather than of the team’s capacity.

I don’t blame the teams or the managers, but the word itself. If we look at the synonyms for velocity (see the screenshot below), all of them point to quickness, momentum and acceleration, which naturally encourages people to connect it with productivity.

[Screenshot: synonyms for “velocity”]

Google “acceleration” or “velocity” and you will find images like the following. These images push people to think of competition, racing and winning rather than teamwork or capacity.

[Images: typical search results for “velocity”, showing racing and competition]

I think we should stop using the word velocity and start using a word that creates a mental image of the team’s capacity.


Do you think this is a fair call?


R: Think Bayes Locomotive Problem – Posterior probabilities for different priors

Mark Needham - Sat, 04/25/2015 - 01:53

In my continued reading of Think Bayes, the next problem to tackle is the Locomotive problem, which is defined thus:

A railroad numbers its locomotives in order 1..N.

One day you see a locomotive with the number 60. Estimate how many locomotives the railroad has.

The interesting thing about this question is that it initially seems that we don’t have enough information to come up with any sort of answer. However, we can get an estimate if we come up with a prior to work with.

The simplest prior is to assume that there’s one railroad operator with between, say, 1 and 1,000 locomotives, and an equal probability of each fleet size.

We can then write similar code as with the dice problem to update the prior based on the trains we’ve seen.
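
In symbols (my notation, not the book’s), the update we’re about to code is just Bayes’ rule with a fleet-size likelihood:

$$p(N \mid d) \propto p(N)\, p(d \mid N), \qquad p(d \mid N) = \begin{cases} 1/N & \text{if } d \le N \\ 0 & \text{otherwise} \end{cases}$$

where N is the fleet size, d = 60 is the number we observed, and the uniform prior is p(N) = 1/1000 for N = 1, …, 1000. Normalising the products so they sum to 1 gives the posterior, which is exactly what the code below builds up.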

First we’ll create a data frame which captures the product of ‘number of locomotives’ and the observations of locomotives that we’ve seen (in this case we’ve only seen one locomotive, with number ‘60’):

library(dplyr)
 
possibleValues = 1:1000
observations = c(60)
 
l = list(value = possibleValues, observation = observations)
df = expand.grid(l) 
 
> df %>% head()
  value observation
1     1          60
2     2          60
3     3          60
4     4          60
5     5          60
6     6          60

Next we want to add a column which represents the probability that the observed locomotive could have come from a particular fleet. If the fleet size is less than 60 then we have a probability of 0; otherwise we have 1 / fleetSize:

prior = 1  / length(possibleValues)
df = df %>% mutate(score = ifelse(value < observation, 0, 1/value))
 
> df %>% sample_n(10)
     value observation       score
179    179          60 0.005586592
1001  1001          60 0.000999001
400    400          60 0.002500000
438    438          60 0.002283105
667    667          60 0.001499250
661    661          60 0.001512859
284    284          60 0.003521127
233    233          60 0.004291845
917    917          60 0.001090513
173    173          60 0.005780347

To find the probability of each fleet size we write the following code:

weightedDf = df %>% 
  group_by(value) %>% 
  summarise(aggScore = prior * prod(score)) %>%
  ungroup() %>%
  mutate(weighted = aggScore / sum(aggScore))
 
> weightedDf %>% sample_n(10)
Source: local data frame [10 x 3]
 
   value     aggScore     weighted
1    906 1.102650e-06 0.0003909489
2    262 3.812981e-06 0.0013519072
3    994 1.005031e-06 0.0003563377
4    669 1.493275e-06 0.0005294465
5    806 1.239455e-06 0.0004394537
6    673 1.484400e-06 0.0005262997
7    416 2.401445e-06 0.0008514416
8    624 1.600963e-06 0.0005676277
9     40 0.000000e+00 0.0000000000
10   248 4.028230e-06 0.0014282246

Let’s plot the data frame to see how the probability varies for each fleet size:

library(ggplot2)
ggplot(aes(x = value, y = weighted), data = weightedDf) + 
  geom_line(color="dark blue")

[Plot: posterior probability (weighted) by fleet size (value)]

The most likely choice is a fleet size of 60 based on this plot, but an alternative would be to find the mean of the posterior, which we can do like so:

> weightedDf %>% mutate(mean = value * weighted) %>% select(mean) %>% sum()
[1] 333.6561
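
We can also read the mode of the posterior straight off the data frame, a quick sanity check of my own rather than something from the book:

> weightedDf$value[which.max(weightedDf$weighted)]
[1] 60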

Now let’s create a function with all that code in it so we can play around with some different priors and observations:

meanOfPosterior = function(values, observations) {
  l = list(value = values, observation = observations)   
  df = expand.grid(l) %>% mutate(score = ifelse(value < observation, 0, 1/value))
 
  prior = 1 / length(values)
  weightedDf = df %>% 
    group_by(value) %>% 
    summarise(aggScore = prior * prod(score)) %>%
    ungroup() %>%
    mutate(weighted = aggScore / sum(aggScore))
 
  return (weightedDf %>% mutate(mean = value * weighted) %>% select(mean) %>% sum()) 
}

If we update our observations to locomotives with numbers 60, 30 and 90, we’d get the following posterior means for uniform priors with different upper bounds:

> meanOfPosterior(1:500, c(60, 30, 90))
[1] 151.8496
> meanOfPosterior(1:1000, c(60, 30, 90))
[1] 164.3056
> meanOfPosterior(1:2000, c(60, 30, 90))
[1] 171.3382

At the moment the function assumes that we always want a uniform prior, i.e. every fleet size has an equal probability, but we might want to vary the prior to see how different assumptions influence the posterior.

We can refactor the function to take in values & priors instead of calculating the priors in the function:

meanOfPosterior = function(values, priors, observations) {
  priorDf = data.frame(value = values, prior = priors)
  l = list(value = priorDf$value, observation = observations)
 
  df = merge(expand.grid(l), priorDf, by.x = "value", by.y = "value") %>% 
    mutate(score = ifelse(value < observation, 0, 1 / value))
 
  df %>% 
    group_by(value) %>% 
    summarise(aggScore = max(prior) * prod(score)) %>%
    ungroup() %>%
    mutate(weighted = aggScore / sum(aggScore)) %>%
    mutate(mean = value * weighted) %>%
    select(mean) %>%
    sum()
}

Now let’s check we get the same posterior means for the uniform priors:

> meanOfPosterior(1:500,  1/length(1:500), c(60, 30, 90))
[1] 151.8496
> meanOfPosterior(1:1000, 1/length(1:1000), c(60, 30, 90))
[1] 164.3056
> meanOfPosterior(1:2000, 1/length(1:2000), c(60, 30, 90))
[1] 171.3382

Now, instead of a uniform prior, let’s use a power-law prior, where the assumption is that smaller fleets are more likely:

> meanOfPosterior(1:500,  sapply(1:500,  function(x) x ** -1), c(60, 30, 90))
[1] 130.7085
> meanOfPosterior(1:1000, sapply(1:1000, function(x) x ** -1), c(60, 30, 90))
[1] 133.2752
> meanOfPosterior(1:2000, sapply(1:2000, function(x) x ** -1), c(60, 30, 90))
[1] 133.9975
> meanOfPosterior(1:5000, sapply(1:5000, function(x) x ** -1), c(60, 30, 90))
[1] 134.212
> meanOfPosterior(1:10000, sapply(1:10000, function(x) x ** -1), c(60, 30, 90))
[1] 134.2435

Now we get very similar posterior means, which converge on 134, so that’s our best prediction.


Thinking About #NoEstimates?

Johanna Rothman - Fri, 04/24/2015 - 13:32

I have a new article up on agileconnection.com called The Case for #NoEstimates.

The idea is to produce value instead of spending time estimating. We have a vigorous “debate” going on in the comments. I have client work today, so I will be slow to answer comments. I will answer as soon as I have time to compose thoughtful replies!

This column is the follow-on to How Do Your Estimates Provide Value?

If you would like to learn to estimate better or recover from “incorrect” estimates (an oxymoron if I ever heard one), see Predicting the Unpredictable. (All estimates are guesses. If they are ever correct, it’s because we got lucky.)


The Need for Continuous Improvement in Agile

Ben Linders - Fri, 04/24/2015 - 02:50
I gave a well-received keynote at the QCon Beijing conference, in which I explained why continuous improvement is essential to delivering value with agile. QCon Beijing was the largest QCon conference so far, with over 1,600 attendees. Continue reading →

Manager’s Journey: Awareness, Epiphany, & Choice

Agilitrix - Michael Sahota - Thu, 04/23/2015 - 17:33

Delighted to share the slides from my and Soo Kim’s presentation at Spark The Change.

Summary

An insider’s account of a manager’s journey of cultural transformation. How our beliefs and assumptions radically shifted. How we found the courage to fully see what is there and accept it.  Being vulnerable enough to speak our truth to allow new options to emerge. Developing the boldness to choose them.

The Journey: Awareness, Epiphany, & Choice (Slides)

[Slides: “Manager’s Journey” by Michael Sahota on SlideShare]


What Is The Goal?

Leading Agile - Mike Cottmeyer - Thu, 04/23/2015 - 15:03

What is the goal?

I seem to lead with that question a lot these days. Is the goal to practice Scrum? Is the goal to apply SAFe? Is the goal to use some other Agile delivery framework? Is the goal to uphold the values and principles of the Agile Manifesto?

They are all means to an end. Your goal depends on your organization. Fundamentally, every for-profit organization I’ve come in contact with has pretty much the same primary goal. Make money!

Before committing budget for that next project, let’s first ask ourselves if we know our core business drivers.

Common Business Drivers
  • Predictability
  • Higher Quality
  • Shorter time to market
  • Lower Costs

But let’s look at this again. What is the primary goal? Make money!

How do we achieve the goal?
  • Through predictability, we get better at forecasting sales and delivery (lead times)
  • Through higher quality, we lower costs of rework and increase customer satisfaction
  • With shorter time to market, we can get an earlier ROI and increase cash flow
  • With lower costs, we free up capital for other areas of our organization

Answer these questions:
  1. What is your primary organizational goal?
  2. What are your core business drivers, relative to your primary organizational goal?
  3. If you don’t know the goal, how do you know where to spend your time or money?
  4. How do you know where to start?


R: Replacing for loops with data frames

Mark Needham - Thu, 04/23/2015 - 00:18

In my last blog post I showed how to derive posterior probabilities for the Think Bayes dice problem:

Suppose I have a box of dice that contains a 4-sided die, a 6-sided die, an 8-sided die, a 12-sided die, and a 20-sided die. If you have ever played Dungeons & Dragons, you know what I am talking about.

Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?

To recap, this was my final solution:

likelihoods = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        if(name < observation) {
          scores[paste(name)]  = 0
        } else {
          scores[paste(name)] = scores[paste(name)] *  (1.0 / name)
        }        
      }
    }  
  return(scores)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods(dice, c(6))
 
> l1 / sum(l1)
        4         6         8        12        20 
0.0000000 0.3921569 0.2941176 0.1960784 0.1176471

Although it works, we have nested for loops which aren’t very idiomatic R, so let’s try to get rid of them.

The first thing we want to do is return a data frame rather than a vector so we tweak the first two lines to read like this:

scores = rep(1.0 / length(names), length(names))  
df = data.frame(score = scores, name = names)

Next we can get rid of the inner for loop and replace it with a call to ifelse wrapped inside a dplyr mutate call:

library(dplyr)
likelihoods2 = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  df = data.frame(score = scores, name = names)
 
  for(observation in observations) {
    df = df %>% mutate(score = ifelse(name < observation, 0, score * (1.0 / name)) )
  }
 
  return(df)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods2(dice, c(6))
 
> l1
       score name
1 0.00000000    4
2 0.03333333    6
3 0.02500000    8
4 0.01666667   12
5 0.01000000   20

Finally we’ll normalize the scores so they’re weighted relative to each other:

likelihoods2 = function(names, observations) {
  scores = rep(1.0 / length(names), length(names))  
  df = data.frame(score = scores, name = names)
 
  for(observation in observations) {
    df = df %>% mutate(score = ifelse(name < observation, 0, score * (1.0 / name)) )
  }
 
  return(df %>% mutate(weighted = score / sum(score)) %>% select(name, weighted))
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods2(dice, c(6))
 
> l1
  name  weighted
1    4 0.0000000
2    6 0.3921569
3    8 0.2941176
4   12 0.1960784
5   20 0.1176471

Now we’re down to just the one for loop. Getting rid of that one is a bit trickier. First we’ll create a data frame which contains a row for every (observation, dice) pair, simulating the nested for loops:

likelihoods3 = function(names, observations) {
  l = list(observation = observations, roll = names)
  obsDf = do.call(expand.grid,l) %>% 
    mutate(likelihood = 1.0 / roll, 
           score = ifelse(roll < observation, 0, likelihood))   
 
  return(obsDf)
}
 
dice = c(4,6,8,12,20)
l1 = likelihoods3(dice, c(6))
 
> l1
  observation roll likelihood      score
1           6    4 0.25000000 0.00000000
2           6    6 0.16666667 0.16666667
3           6    8 0.12500000 0.12500000
4           6   12 0.08333333 0.08333333
5           6   20 0.05000000 0.05000000
 
l2 = likelihoods3(dice, c(6, 4, 8, 7, 7, 2))
> l2
   observation roll likelihood      score
1            6    4 0.25000000 0.00000000
2            4    4 0.25000000 0.25000000
3            8    4 0.25000000 0.00000000
4            7    4 0.25000000 0.00000000
5            7    4 0.25000000 0.00000000
6            2    4 0.25000000 0.25000000
7            6    6 0.16666667 0.16666667
8            4    6 0.16666667 0.16666667
9            8    6 0.16666667 0.00000000
10           7    6 0.16666667 0.00000000
11           7    6 0.16666667 0.00000000
12           2    6 0.16666667 0.16666667
13           6    8 0.12500000 0.12500000
14           4    8 0.12500000 0.12500000
15           8    8 0.12500000 0.12500000
16           7    8 0.12500000 0.12500000
17           7    8 0.12500000 0.12500000
18           2    8 0.12500000 0.12500000
19           6   12 0.08333333 0.08333333
20           4   12 0.08333333 0.08333333
21           8   12 0.08333333 0.08333333
22           7   12 0.08333333 0.08333333
23           7   12 0.08333333 0.08333333
24           2   12 0.08333333 0.08333333
25           6   20 0.05000000 0.05000000
26           4   20 0.05000000 0.05000000
27           8   20 0.05000000 0.05000000
28           7   20 0.05000000 0.05000000
29           7   20 0.05000000 0.05000000
30           2   20 0.05000000 0.05000000

Now we need to iterate over the data frame, grouping by ‘roll’ so that we end up with one row for each one.

We’ll add a new column which stores the posterior probability for each die. This will be calculated by multiplying the prior probability by the product of the ‘score’ entries.

This is what our new likelihood function looks like:

likelihoods3 = function(names, observations) {
  l = list(observation = observations, roll = names)
  obsDf = do.call(expand.grid,l) %>% 
    mutate(likelihood = 1.0 / roll, 
           score = ifelse(roll < observation, 0, likelihood))   
 
  return (obsDf %>% 
    group_by(roll) %>% 
    summarise(s = (1.0/length(names)) * prod(score) ) %>%
    ungroup() %>% 
    mutate(weighted = s / sum(s)) %>%
    select(roll, weighted))
}
 
l1 = likelihoods3(dice, c(6))
> l1
Source: local data frame [5 x 2]
 
  roll  weighted
1    4 0.0000000
2    6 0.3921569
3    8 0.2941176
4   12 0.1960784
5   20 0.1176471
 
l2 = likelihoods3(dice, c(6, 4, 8, 7, 7, 2))
> l2
Source: local data frame [5 x 2]
 
  roll    weighted
1    4 0.000000000
2    6 0.000000000
3    8 0.915845272
4   12 0.080403426
5   20 0.003751302

We’ve now got the same result as we did with our nested for loops, so I think the refactoring has been a success.


The Story of a Digital Artist

J.D. Meier's Blog - Wed, 04/22/2015 - 17:35

I’m always on the hunt for people that do what makes them come alive.

Artists in particular are interesting to me, especially when they are able to do what they love.

I’ve known too many artists that lived painful lives, trying to be an artist, but never making ends meet.

I’ve also known too many artists that lived another life outside of art, but never really lived, because they never answered their calling.

I believe that in today’s world, there are a lot more options for you to live life on your terms.

With technology at our fingertips, it’s easier to connect with people around the world and share your art, whatever that may be.

On Sources of Insight, I’ve asked artist Rebecca Tsien to share her story:

Why I Draw People and Animals

It’s more than a story of a digital artist. It’s a journey of fulfillment.

Rebecca has found a way to do what she loves.  She lives and breathes her passion.

Maybe her story can inspire you.

Maybe there’s a way you can do more art.


Coming Soon: RabbitMQ For Developers

Derick Bailey - new ThoughtStream - Wed, 04/22/2015 - 16:32

A few months ago, I started working on a screencast series for WatchMeCode that covers RabbitMQ – a great little messaging system that allows you to quickly and easily write distributed applications by using message queues. The screencast series is done, and it’s been a rather popular one already. But I also realize that there’s a lot of additional information out there, that would be very useful for other developers to have.

When it comes to figuring out how to organize your RabbitMQ topology – that is, the exchanges, queues and bindings – there isn’t a lot of information around. You can get a few bits of info from a dozen sources and put it all together with trial and error to see how things work best, but there hasn’t been a single place to go to for this info.

There is also a lot of expert knowledge about what messaging can do for you in the minds of developers who are working in this space. People working with RabbitMQ and other messaging systems are using messages to do some amazing things.

I want to bring more of this knowledge to the rest of us – the “it depends…” that developers, architects and experts in the world of message based architectures collect over years of experience. So I am.

The Bundle With All Of This

With all that in mind, I am working on building a package that will be available to everyone, to help you get up and running with RabbitMQ quickly!

You will learn the how-to’s of working with RabbitMQ and using NodeJS to send / receive messages. You’ll learn the why-to’s of each exchange type with RabbitMQ, and why you may or may not want to use a specific option in a specific scenario. You’ll see interviews with industry experts, authors and developers that work with RabbitMQ on a daily basis. And you’ll receive some extra bonus materials – things that I’m not yet ready to announce!

With all of these resources coming together, this will certainly be a high value package for any developer that is working with RabbitMQ or is curious about where and why they would want to work with RabbitMQ.

The Screencasts

I have the screencasts, which I’ve already released to my WatchMeCode subscribers. These screencasts are aimed at NodeJS developers that want to get up and running with distributed systems very quickly. I’ll walk you through the installation, basic configuration and use of RabbitMQ with NodeJS and ExpressJS applications. It’s a 12-episode, multi-hour series of screencasts in which each episode builds on the ones before.

[Screenshots: three of the twelve episodes]

The screencasts (only 3 of the 12 shown here) are done and in the bag, already. Any subscriber to WatchMeCode will already have access to these – but not to the rest of the material! The eBook, the interviews and the bonuses that I have planned are going to be saved for the full package when it releases.

The eBook

Over the last month or so, I’ve been working on my own eBook that tackles the problem of figuring out your options with RabbitMQ: understanding which of the many exchange types is the best one for a given scenario, and so on. I’ve got the eBook about 90% done at this point, and should be finishing it soon.


This isn’t just another technical book with lots of boring detail about APIs, though. Sure, I do cover a few technical bits when it comes to understanding the different types of exchanges. But the majority of this book – and the real value that it offers – is a very different approach to teaching technical content.

From the book’s preface, and “about this book”:

Rather than providing a strictly technical approach to show you how to use RabbitMQ, part 2 takes a story-telling approach to guide you through the decision making paths of real world projects and systems. Through a narrative style that puts you in the mind of another developer, you will see the struggles, the success and some of the failures in designing a message based system with RabbitMQ.

Chapter 5 will show a sample job scheduling application, following a developer through a decision to implement RabbitMQ and organize it appropriately. The running jobs in the schedule can be put into various states through an administrative web application, but how should the queues and exchanges be organized to facilitate this? The inspiration for the solution seems to come out of nowhere.

Chapter 6 follows a developer through the often painful process of discovering and documenting performance problems, and finding solutions to the issue. RabbitMQ is already in place, in this case, but was it the right thing to do? Should RabbitMQ be taken out of the equation in order to reduce the latency between requesting a download and actually sending the file back to the user? Find out whether or not RabbitMQ makes the cut in this story of struggling to improve the performance of a file sharing service.

And there’s far more to the book than just these 2 chapters. I’ve got several more written already, and at least one more to come – all with the goal of putting you inside the mind of another developer as they learn, struggle, succeed and sometimes fail their way into a solid RabbitMQ structure.

The Interviews

In the last few weeks, I have also been scheduling and recording interviews with experts in the world of messaging-based architectures. The list of people that I am interviewing is quite impressive already, and it’s still growing. I have people from some very large and notable companies, authors of some of my favorite books, frameworks and plugins, developers and architects that are leading their teams down a very successful path with messaging, and more!


These interviews are each focused on a specific area of expertise for every guest. From architectural concepts like CQRS, to long-running “transactions” in Sagas, to error handling and failures, to production requirements for RabbitMQ, these interviews will provide a new level of insight into what RabbitMQ can and should do for you, your current applications and beyond.

Shut Up And Take My Money!

Sorry, but I can’t just yet…

There’s a lot of work left to make sure this is the best possible package to help you get up and running quickly. I’m working on it as fast as I can! But, it will be a few more weeks before the raw material is complete, and possibly a few weeks after that to ensure everything is packaged properly. But it’s coming… and it will be here soon!

If you’re interested in staying up to date with what I’m doing in this package, be sure to join my mailing list. You’ll receive updates on the progress of the bundle, and you’ll get a hefty discount when this thing ships! You won’t want to miss the discount, either… this will be the largest discount that will ever be available for this package deal!

 


A Dreyfus model for Agile adoption

lizkeogh.com - Elizabeth Keogh - Wed, 04/22/2015 - 09:30

A couple of people have asked for this recently, so I’m just posting it here to get it under the CC licence. It was written a while ago, and there are better maturity models out there, but I still find this one useful for showing teams a roadmap they can understand.

If you want to know more about how to create and use your own Dreyfus models, this post might help.

What does an Agile Team look like?

Novice

We have a board
We put our stories on the board
Every two weeks, we get together and celebrate what we’ve done
Sometimes we talk to the stakeholders about it
We think we might miss our deadline and have told our PM
Agile is really hard to do well

Beginner

We are trying to deliver working software
We hold retrospectives to talk about what made us unhappy
When something doesn’t work, we ask our coach what to do about it
Our coach gives us good ideas
We have delegated someone to deal with our offshore communications
We have a great BA who talks to the stakeholders a lot
We know we’re going to miss our deadline; our PM is on it
Agile requires a great deal of discipline

Practitioner

We know that our software will work in production
Every two weeks, we show our working software to the stakeholders
We talk to the stakeholders about the next set of stories they want us to do
We have established a realistic deadline and are happy that we’ll make it
We have some good ideas of our own
We deal with blockers promptly
We write unit tests
We write acceptance tests
We hold retrospectives to work out what stopped us delivering software
We always know what ‘done’ looks like before we start work
We love our offshore team members; we know who they are and what they look like and talk to them every day
Our stakeholders are really interested in the work we’re doing
We always have tests before we start work, even if they’re manual
We’ve captured knowledge about how to deploy our code to production
Agile is a lot of fun

Knowledgeable

We are going to come well within our deadline
Sometimes we invite our CEO to our show-and-tell, so he can see what Agile looks like done well
People applaud at the end of the show-and-tell; everyone is very happy
That screen shows the offshore team working; we can talk to them any time; they can see us too
We hold retrospectives to celebrate what we learnt
We challenge our coach and change our practices to help us deliver better
We run the tests before we start work – even the manual tests, to see what’s broken and know what will be different when we’re done
Agile is applicable to more than just software delivery

Expert

We go to conferences and talk about our fantastic Agile experiences
We are helping other teams go Agile
The business outside of IT is really interested in what we’re doing
We regularly revisit our practices, and look at other teams to see what they’re doing
The company is innovative and fun
The company is happy to try things out and get quick feedback
We never have to work late or weekends
We deploy to production every two weeks*
Agile is really easy when you do it well!

* Wow, this model is old.



Single-tasking and positive reinforcement – a self-experiment

Scrum 4 You - Wed, 04/22/2015 - 08:05

After the training “Selbstorganisation braucht Führung” (self-organization needs leadership) and our internal consulting day on time management, I decided to work on my own self-organization and put the insights I had gained into practice. So I started a self-experiment on single-tasking and positive reinforcement. For anyone unfamiliar with the terms: single-tasking means concentrating on one task after another, while with multitasking you switch back and forth between two or more tasks. A small experiment on its harmful effects on productivity can be found here. On positive reinforcement, suffice it to say: it is far more motivating for a person to be reinforced positively rather than negatively. The classic example is the casino: gamblers lose very often (negative reinforcement) and win very rarely (positive reinforcement), yet there are many gambling addicts. The reason is the aforementioned positive reinforcement, which simply has a far stronger effect.

Start – Day 1

19:40
The idea for the self-experiment is born. I decide to collect 1 euro from my own budget for every single-tasking activity, which I may then spend on a reward for myself (positive reinforcement). Right now I still think this method will bankrupt me fairly quickly.

19:50
First euro pocketed, for downloading the counter app for this experiment. Ha! At the same time, I notice that this process feels almost like a somersault in my brain. I seem to rarely finish a train of thought or an activity without starting something new in the middle of it. And here I thought I was a single-tasking pro …

20:10
Another euro added and immediately deducted again, because I pulled out my counter app in the middle of an activity to register the point. I wonder whether my brain is for some reason addicted to multitasking, and decide to get to the bottom of it and write a blog post about it. At the same time, I notice that the mere attempt to stick with one thing produces a relaxing effect. The task frees me from constantly wanting to think about different topics at the same time.

20:22
While writing this blog post, I notice that my Mac has just cost me my reward euro. A pop-up asking for a cloud password, the wrong entries that followed and my annoyance about them made it impossible for me to keep writing here without interruption. Is even my computer sabotaging successful single-tasking?

20:38
Read an applicant’s application in one go and thus prepared for the job interview. Point collected. Interesting side effect: positive reinforcement also works when I apply it to myself and know why I am doing it.

21:17
Computers are clearly a hindrance to single-tasking. Even with an Apple it seems impossible to complete a task without being interrupted in the middle of it by a pop-up. And when that’s not the case, the internet connection is so slow that it’s barely possible to stay on task.

Day 2

09:59
I notice how difficult it is to stay on track and get one thing done without interruption. On the one hand, the people around me are just as used to jumping between different topics; on the other hand, incoming messages and notifications from my phone and laptop are the second big obstacle.

17:03
After stretches with no points at all, I am able to collect several points in the evening. On Boris’s advice, I created a bullet journal and started extreme timeboxing. The result: I jumped from 3 to 11 points, which will now be invested right away in things that are fun. :)

Summary

  1. Single-tasking is incredibly liberating and lets you go home satisfied.
  2. Without external measures such as a well-kept task list and consistent timeboxing, the many distractions make it almost impossible to stick to just one thing.
  3. Positive reinforcement works. Even when you get to spend your own budget on it – it sounds strange, but happily it’s true.

That would seem to settle the matter. But we all know the good old inner temptation that lies in wait again the moment we look away. I decide to test the long-term effects.

To be continued


Protecting Agile Projects from Consultants

TV Agile - Wed, 04/22/2015 - 07:58
Zigurd Mednieks explains why you should protect Agile projects from consultants. Consultants can drive a software project to enormous success, but sometimes they drive the project off the rails instead. Managers must know when to check up on an agile project and what questions to ask to make sure the consultant doesn’t run away […]

Measurements Towards Continuous Delivery


I was asked yesterday what measurements a team could start to take to track their progress towards continuous delivery. Here are some initial thoughts.

Lead time per work item to production

Lead time starts the moment we have enough information that we could start the work (i.e. it’s “ready”). Most teams that measure lead time will stop the clock when the item reaches the team’s definition of “done”, which may or may not mean that the work is in production. In this case, we want to explicitly keep tracking the time until it really is in production.
Note that when we’re talking about continuous delivery, we make a distinction between deploy and release. Deploy is when we’ve pushed the code to the production environment; release is when we turn it on. This measurement stops at the end of deploy.

Cycle time to “done”

If the lead time above is excessively long then we might want to track just cycle time. Cycle time starts when we begin working on the item and stops when we reach “done”.
When teams are first starting their journey to continuous delivery, lead times to production are often measured in months and it can be hard to get sufficient feedback with cycles that long. Measuring cycle time to “done” can be a good intermediate measurement while we work on reducing lead time to production.
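
To make the two clocks concrete, here’s a minimal sketch of computing both measurements from work-item timestamps, written in R with dplyr; the data frame layout, column names and dates are purely illustrative assumptions:

library(dplyr)

# hypothetical work items with the four timestamps that matter
items = data.frame(
  id       = 1:3,
  ready    = as.Date(c("2015-01-05", "2015-01-12", "2015-02-02")),  # enough info to start
  started  = as.Date(c("2015-01-19", "2015-01-26", "2015-02-09")),  # work begins
  done     = as.Date(c("2015-02-02", "2015-02-16", "2015-02-23")),  # team's definition of "done"
  deployed = as.Date(c("2015-03-30", "2015-04-13", "2015-04-20"))   # pushed to production
)

items %>% mutate(
  leadTimeDays  = as.numeric(deployed - ready),   # lead time per work item to production
  cycleTimeDays = as.numeric(done - started)      # cycle time to "done"
)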

Escaped defects

If a bug is discovered after the team said the work was done then we want to track that. Prior to hitting “done”, it’s not really a bug – it’s just unfinished work.
Shipping buggy code is bad and this should be obvious. Continuously delivering buggy code is worse. Let’s get the code in good shape before we start pushing deploys out regularly.

Defect fix times

How old is the oldest reported bug? I’ve seen teams that had bug lists that went on for pages and where the oldest were measured in years. Really successful teams fix bugs as fast as they appear.

Total regression test time

Track the total time it takes to do a full regression test. This includes both manual and automated tests. Teams that have primarily manual tests will measure this in weeks or months. Teams that have primarily automated tests will measure this in minutes or hours.
This is important because we would like to do a full regression test prior to any production deploy. Not doing that regression test introduces risk to the deployment. We can’t turn on continuous delivery if the risk is too high.

Time the build can be broken

How long can your continuous integration build be broken before it’s fixed? We all make mistakes. Sometimes something gets checked in that breaks the build. The question is: how important is it to the team to get that build fixed? Does the team drop everything else to get it fixed, or do they let it stay broken for days at a time?

Continuous delivery isn’t possible with a broken build.

Number of branches in version control

By the time you’re ready to turn on continuous delivery, you’ll only have one branch. Measuring how many you have now and tracking that over time will give you some indication of where you stand.

If your code isn’t in version control at all, then stop taking measurements and just fix that right now. I’m aware of teams in 2015 that still aren’t using version control, and you’ll never get to continuous delivery that way.

Production outages during deployment

If your production deployments require taking the system offline then measure how much time it’s offline. If you achieve zero-downtime deploys then stop measuring this one.  Some applications such as batch processes may never require zero-downtime deploys. Interactive applications like webapps absolutely do.

I don’t suggest starting with everything at once. Pick one or two measurements and start there.


11 Tips for a Successful Scrum Implementation

Mike Vizdos - Implementing Scrum - Tue, 04/21/2015 - 21:27
[Cartoon: www.implementingscrum.com, April 21, 2008]

Congratulations.

You may have just returned from an internal meeting where you were given some new orders to “go agile” with your team or group.

You may be responsible for leading an agency (or a professional services team) within your organization to “go agile.”

Your team is responsible to #deliver “using agile” or “using scrum” (This last posting may also help — “Holy CRAP I have to do THIS???”)

Now.  The pressure is on for you to do this TODAY.

Actually, YESTERDAY.

Your boss — or some other stakeholder — is demanding results.  Lucky you!

Do you feel a bit out of control or caught between a rock and a hard place?

Are you STUCK?

Here’s the secret:

You CAN enable the team to run in an agile manner using Scrum (or something like it).

How?

It’s pretty easy actually.  In real life, “easy” is anything BUT that (you know there is no Silver Bullet).

Let me help you. Let’s talk.

Or.

Here are ELEVEN FREE STEPS (plus a STEP ZERO) to start on your own:

STEP ZERO:

I have this sinking feeling you may want to do this with all your teams and clients across the organization.  You want to jump in and scale this agile stuff in a big way and just GO FOR IT.

Don’t.

This is not time for you to panic.  This is not the time for you to make a career-limiting-move.

Here is what you can do.

STEP ONE:

Identify your most important (valuable?) client today.

Think about the Pareto principle (the 80/20 rule).

You DO have those very valuable 20% of customers who DO provide you with 80% of your success.

Focus on them and use Scrum to #deliver with them.

Actually. Start with just ONE client (or customer).

Yes.  This means you will have to say NO to a lot of people.

It’s OK.

Remember that 80% of your current clients are keeping you up at night on wasteful non-value-added crap.  You may piss them off by saying NO; however, they are probably already pissed off at you and the cost of switching is either too high or they do not really feel enough pain and love blaming someone anyway.

They are not your problem. If you are bleeding now, this is not the time to keep doing whatever it was that you were doing in the past.

STEP TWO:

Have a real face-to-face conversation with your customer (or client) about playing the role of “Product Owner” for the ONE Scrum Team.

This is a HUGE commitment.  Amazing things happen when you ask.  I’ve seen it done and have helped others do this.  What seems unreasonable today really can happen when you Focus and #deliver together.

STEP THREE:

Now.

Dedicate a team of 5-9 people that can #deliver with your Product Owner on this Scrum Team.

Identify someone to play the role of the ScrumMaster; this person should have real-world experience, and failures that someone else has paid them to make!

And.

Watch the magic happen.

STEP FOUR:

Create an initial Product Backlog together as a Scrum Team.  This should be based on the highest value project with a clear vision of the WHY effectively communicated by the Product Owner.

Don’t worry about estimating at this point.  Read up on the topic of #noEstimates and learn to Focus and #deliver together… as a real team.

STEP FIVE:

The Product Owner can prioritize the Product Backlog items however they feel is necessary.  Use Story Boarding or other ways to tease out the REAL value to your end users.  Oh… and do this quickly (think hours or two days MAX because there is no such thing as a “Sprint Zero”).

STEP SIX:

The Scrum Team can then plan their first Sprint using the Scrum Master as the facilitator.  The Development Team can figure out what to commit to (or forecast) for this Sprint (or Iteration).

Here’s the thing… they will be wrong.  It’s OK.  Focus and #deliver SOMETHING. The weather forecast is rarely right in the real world… but we learn to adjust real life to the real weather conditions as we learn more together.

STEP SEVEN:

Have a daily stand-up meeting each day for a short iteration (how about one week to get started!?!?!).

Allow the team to coordinate on the Sprint Goal and on meeting the Definition of Done for the Sprint.

STEP EIGHT:

Do the work. Daily. This is where the miracles can happen, and it will evolve over time as your dedicated team becomes high-performing (did I mention this will take time?).

STEP NINE:

At the end of the Sprint, host a Sprint Review.  Demo your Potentially Shippable Software — or Product Increment — together as a Scrum Team.  Have your Product Owner stand up and proudly show what the team was able to actually #deliver.

Gather feedback from your key stakeholders, project sponsors, and perhaps even your real-world customers (or end users!).

Focus. #deliver

STEP TEN:

Gather your team for a Sprint Retrospective.

Learn together about what can be improved in the next Sprint.

STEP ELEVEN:

Keep going.

That’s it.

Focus. #deliver

Need help?

LET’S TALK

 

 

WARNING: The proverbial shit will probably hit the fan. Bad things WILL happen. This is totally normal.

As time takes time, the team DOES #deliver something.

Remember: Delivering the wrong thing TODAY is better than delivering the wrong thing months — or years — from now.

Focus.  #deliver

And.

Keep learning.

This is all good.

Easy.

Right?

I can help.

LET’S TALK


The Myths of Business Model Innovation

J.D. Meier's Blog - Tue, 04/21/2015 - 19:37

There are a couple of myths about business model innovation.

One myth is that business model innovation takes big thinking. Another is that technology is the answer.

In the book The Business Model Navigator, Oliver Gassmann, Karolin Frankenberger, and Michaela Csik share a couple of myths that need busting so that more people can actually achieve business model innovation.

The "Think Big" Myth

Business model innovation does not need to be “big bang.”  It can be incremental.  Incremental changes can create more options and more opportunities for serendipity.

Via The Business Model Navigator:

“'Business model innovations are always radical and new to the world.' Most people associate new business models with the giant leaps taken by Internet companies. The fact is that business model innovation, in the same way as product innovation, can be incremental. For instance, Netflix's business model innovation of mailing DVDs to customers was undoubtedly incremental and yet brought great success to the company. The Internet opened up new avenues for Netflix that allowed the company to steadily evolve into an online streaming service provider.”

The Technology Myth

It’s not technology for technology’s sake.  It’s applying technology to revolutionize a business that creates the business model innovation.

Via The Business Model Navigator:

“'Every business model innovation is based on a fascinating new technology that inspires new products.'  The fact is that while new technologies can indeed drive new business models, they are often generic in nature.  Where creativity comes in is in applying them to revolutionize a business.  It is the business application and the specific use of the technology which makes the difference.  Technology for technology's sake is the number one flop factor in innovation projects.  The truly revolutionary act is that of uncovering the economic potential of a new technology.”

If you want to get started with business model innovation, don’t just go for the home run.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Cognizant on the Next Generation Enterprise

Drive Digital Transformation by Re-Imagining Operations

Drive Digital Transformation by Re-envisioning Your Customer Experience

The Future of Jobs
