
Feed aggregator

Agile Bodensee, Constance, Germany, October 1–2 2014

Scrum Expert - Thu, 09/04/2014 - 15:21
The Agile Bodensee conference is a two-day event focused on Agile and Scrum that takes place on the shore of Lake Constance (the Bodensee in German) in southern Germany. The first day is dedicated to workshops and the second day to presentations. Most of them are in German but there are also some English talks. In the agenda you can find topics like “Agile testing – celebrate bug prevention instead of bug detection”, “Building cross-functional Scrum-Teams or How electricians, mechanics, and (embedded) software engineers went T-shaped”, ...
Categories: Communities

7 Things I Validated in My First Month With VersionOne

Agile Management Blog - VersionOne - Thu, 09/04/2014 - 15:05

What’s your lucky number? In June, I attended the Agile West BSC/ADC conference in Las Vegas and didn’t do so well at the roulette table; however, two months later on 7/21/14 (lucky 7s) I placed an even bigger bet: not on red, but on Pantone Color Number 216. It’s the signature color of VersionOne, the all-in-one agile project management tool and services provider.

After almost seven years of building up my agile acumen at Cars.com, I decided that it was time to close that season of my career, and I began researching and planning my next career challenge. I outlined three key pillars that were very important to me and my ability to truly enjoy going to work every day:

  • Customer Centric Vision – Having the ability to know WHY what we do matters to customers, and matching values and alignment to priorities.
  • Clear Agile Direction – Finding a company who is moving the agile ball further down the line.
  • Fun and Innovative Culture – Life is too short and if you are going to work hard, it makes it much easier to find the joy in your job if you have fun doing it.

These were the most important traits I was seeking in part because they were the things that made me love being at Cars.com. I plugged into my network, had some interviews (and one offer that didn’t work out), and continued my quest to not settle until I found a career that valued these same traits. Then, just when I thought that finding my career “Match.com” was out of reach, I found an opening for a Chicago-based Product Specialist that turned out to be the perfect blend of vision, direction and fun.

Luckily for me, it was also at a place I knew very well, VersionOne. After a few interviews and a relatively painless courtship, I accepted the position and can report that so far it has been a jackpot payoff.

Since my start at VersionOne, I have validated or learned seven key things that I believe you can also learn from, no matter where you work:

  1. The customer is king (or queen); however, not everyone is a customer – If the slipper doesn’t fit, you can’t force it!
  2. Community is very important – Sometimes a company does things because it’s for the greater good.
  3. You can’t fake culture – If you’ve got a great culture, you’ve got it. Hang onto it.
  4. Agile is a mindset, not a methodology – Like culture, the question is not “Are you Agile?” but rather, “How Agile are you?” If you aren’t sure, take this Agile Development Quiz.
  5. Cold beer, anyone? A cold beverage and a game of pool after work is still a great way to end the day. When they say “Work Hard, Play Hard,” believe them!
  6. Valuable training is essential – Among the many other benefits, new-hire training as a group bonds people together and to the company.
  7. VersionOne is a great place to be because of the people, the agile project management products and services that help companies achieve agile success, and VersionOne’s commitment to the agile community at large.

Going into my Product Specialist position I tried hard to find red flags, indicators that would give me some warning or reason to pause. At the end of the day, every flag I saw was Pantone 216. It’s my new lucky number!

I hope you find your lucky number, whether it’s a career at VersionOne, a business engagement with us, or something totally unrelated!

Read more on my personal blog at a12p.blogspot.com.


Categories: Companies

Webinar: DevOps for Agility

Rally Agile Blog - Thu, 09/04/2014 - 13:00

Today’s fast-moving markets expose threats and opportunities at every turn. Whether you’re a large enterprise or a small startup, it’s no longer enough to simply practice Agile development. To survive — and thrive — in this disruptive environment, you need agility throughout your organization.

Join Rally and Chef for a webinar about the role of DevOps in building agility and responsiveness. Learn more about how Rally practices continuous deployment, accelerates application development, and tightens customer feedback loops. Hear how you can institutionalize DevOps and use Chef to support a speed-focused approach. 


DATE: Thursday, September 4

TIME: 11:00 AM Pacific / 2:00 PM Eastern

PARTICIPANTS: 

  • Jonathan Chauncey, developer at Rally Software
  • Jeff Smith, development manager at Rally Software
  • Colin Campbell, director of patterns and practices at Chef
  • [moderator] Alan Shimel, editor-in-chief, DevOps.com

Register here: http://devops.megameeting.com/registration/?id=1949-232488

Rally Software
Categories: Companies

Minimum Credible Release (MCR) and Minimum Viable Product (MVP)

J.D. Meier's Blog - Thu, 09/04/2014 - 09:11

A Minimum Credible Release, or MCR, is simply the minimal set of user stories that need to be implemented in order for the product increment to be worth releasing.

I don’t know exactly when Minimum Credible Release became an established practice, but I do know that we were using Minimum Credible Release as a concept back in the early 2000s on the Microsoft patterns & practices team.  It’s how we defined the minimum scope for our project releases.

The value of the Minimum Credible Release is that it provides a baseline for the team to focus on so they can ship.   It’s a metaphorical “finish line.”   This is especially important when the team gets into the thick of things, and you start to face scope creep.

The Minimum Credible Release is also a powerful tool when it comes to communicating to stakeholders what to expect.   If you want people to invest, they need to know what to expect in terms of the minimum bar that they will get for their investment.

The Minimum Credible Release is also the hallmark of great user experience in action.  It takes great judgment to define a compelling minimal release.

A sample is worth a thousand words, so here is a visual way to think about this.  

Let’s say you had a pile of prioritized user stories, like this:

[Image: a prioritized list of user stories]

You would establish a cut line for your minimum release:

[Image: the same list with a cut line marking the minimum release]

Note that this is an over-simplified example to keep the focus on the idea of a list of user stories with a cut line.

And the art is in where and how you draw the line for the release.
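
As a rough illustration of the idea (the stories and priorities here are hypothetical, not from an actual backlog), the cut line is just a threshold over a prioritized list:

    # A prioritized backlog: (story, priority) pairs, most important first.
    backlog = [
        ("User can sign in", 1),
        ("User can reset password", 2),
        ("User can export data", 3),
        ("Dark mode", 4),
        ("Animated onboarding tour", 5),
    ]

    CUT_LINE = 3  # stories at or above this priority make the minimum release

    mcr = [story for story, priority in backlog if priority <= CUT_LINE]
    print("Minimum Credible Release:", mcr)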

While you would think this is such a simple, obvious, and high-value practice, not everybody does it.

All too often there are projects that run for a period of time without a defined Minimum Credible Release.   They often turn into never-ending projects or somebody’s bitter disappointment.   If you get agreement with users about what the Minimum Credible Release will be, you have a much better chance of making your users happy.  This goes for stakeholders, too.

There is another concept that, while related, is not quite the same thing.

It’s Minimum Viable Product, or MVP.

Here is what Eric Ries, author of The Lean Startup, says about the Minimum Viable Product:

“The idea of minimum viable product is useful because you can basically say: our vision is to build a product that solves this core problem for customers and we think that for the people who are early adopters for this kind of solution, they will be the most forgiving. And they will fill in their minds the features that aren’t quite there if we give them the core, tent-pole features that point the direction of where we’re trying to go.

So, the minimum viable product is that product which has just those features (and no more) that allows you to ship a product that resonates with early adopters; some of whom will pay you money or give you feedback.”

And, here is what Josh Kaufman, author of The Personal MBA, has to say about the Minimum Viable Product:

“The Lean Startup provides many strategies for validating the worth of a business idea. One core strategy is to develop a minimum viable product – the smallest offer you can create that someone will actually buy, then offer it to real customers. If they buy, you’re in good shape. If your original idea doesn’t work, you simply ‘pivot’ and try another idea.”

So if you want happier users, better products, reduced risk, and more reliable releases, look to Minimum Credible Releases and Minimum Viable Products.

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Blogs

10 Tips for Scaling

Scrum 4 You - Thu, 09/04/2014 - 07:47

We are often asked how to scale sensibly, that is, how to work with more than one team. Should you switch straight to electronic taskboards? Do you need a Chief Product Owner? And so on. Here are ten small tips that can help you work more successfully with many teams.

  1. Synchronize the Sprints. When teams have to work together, i.e. depend on one another, it has proven useful to synchronize their Sprints. Teams that have nothing to do with each other don’t need to join this synchronization.
  2. Infrastructure. More important than any process is the right infrastructure. For example, you must be able to integrate constantly – keyword: “Continuous Delivery”. If you don’t have this infrastructure yet, the end of the Sprint or cadence becomes your integration point. At least once per Sprint, everything must come together.
  3. Defects first. Always resolve all defects and re-integrate before anything else. Only work on new features once the defects are fixed. Ideally, make sure every day that defects get resolved.
  4. Then take care of the integration topics – this is where the Scrum of Scrums shows its strength. Identify daily where there are problems in bringing the application together. Don’t settle for the hope that it will somehow work later. Solve each of these problems immediately. Only then move on.
  5. Infrastructure before features. Apart from the fact that every team should still deliver a new feature, a new user story, each Sprint: keep developing your infrastructure. Constantly. Server updates, test-system updates, adopting new interfaces – always first.
  6. Only then is new functionality developed. Something new is only built into the application once the application really works and all preconditions are in place.
  7. Visualize the work of the Product Owners. They, too, should meet as a Product Owner team in front of a taskboard and make their work transparent in this way.
  8. The Chief Product Owner only rarely decides on the ordering: his job is to make sure he has the best Product Owners on his team. He makes sure each of them has a backlog that fits the overall product. He identifies the necessary skills and constantly works with his Product Owner team so that it delivers as much value as possible.
  9. Develop a concept with the architects or lead developers so that the different teams can work as decoupled from one another as possible. The architecture should dictate how collaboration can happen, not the other way around.
  10. Delegate responsibility to where the information is. Work hard to ensure that the teams make decisions themselves and are able to make them. Often the necessary knowledge is genuinely missing; develop the knowledge first, and self-organization will follow on its own.

These tips have helped us succeed with large teams over the past years. They serve as pointers for what we need to watch out for. On their own, the rules solve nothing.

If you want to know more about large distributed teams, we offer a training for that: Scrum International brings many new ideas and ways of working.

Related posts:

  1. Scrum Rollen: Der Product Owner
  2. Balanced Agility
  3. Produktfindung mit Design Thinking

Categories: Blogs

How to create an Alternative Burndown Graph in Google Docs

Scrumology.com - Kane Mar - Wed, 09/03/2014 - 22:01

The Alternative Burndown Graph is one of the more useful Agile graphs. It’s especially useful for communicating outside of the team, with stakeholders, senior management, etc. This is because it can be used to show three important pieces of information: (1) how the team is progressing, (2) changes to the baseline of the Product Backlog, and (3) a likely completion date. It is a graph that’s generated at the start of every Sprint and shows how work is being completed Sprint over Sprint.

Here’s what the Alternative Burndown Graph looks like.

[Image: Alternative Burndown Graph]

The Alternative Burndown Graph is useful because it shows (1) how the team is progressing and (2) changes to the baseline of the Product Backlog, and because (3) it can be used to determine a completion date.

I’m frequently asked if I have a version of the graph that I can share, so I created a Google spreadsheet to generate a simple and useful version of the graph. You can get a copy of my Google Spreadsheet here.

Understanding the Alternative Burndown Graph

The Alternative Burndown Graph is started by adding up the total amount of work remaining in the Product Backlog [1][2], and plotting this initial value. Thereafter, any work completed by the team is taken from the top of the graph. By drawing a trend line through the tops of the graph we can show how the team is making progress Sprint over Sprint.

But what about work being added to the Product Backlog? With this style of graph, any work added to the Product Backlog is added to the bottom of the graph. And, by drawing a trend line through the bottoms of the graph we can show how the baseline of the Product Backlog has been changing Sprint over Sprint.

Finally, we can then use all this information to help calculate when we think the team will have completed the work in the Product Backlog. The intersection of the two trend lines gives us the most likely time of completion of the Backlog.
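
To make the arithmetic concrete, here is a minimal Python sketch (with made-up numbers, not data from the spreadsheet) of how the two trend lines yield a projected completion Sprint:

    import numpy as np

    sprints = np.array([0, 1, 2, 3, 4])
    tops = np.array([100, 92, 85, 76, 70])    # work remaining: completed work comes off the top
    bottoms = np.array([0, 3, 5, 6, 10])      # work added: the shifting baseline at the bottom

    m1, b1 = np.polyfit(sprints, tops, 1)     # trend line through the tops
    m2, b2 = np.polyfit(sprints, bottoms, 1)  # trend line through the bottoms

    # The two lines meet where m1*x + b1 == m2*x + b2.
    completion = (b2 - b1) / (m1 - m2)
    print(f"Most likely completion: around Sprint {completion:.1f}")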

Creating the Graph in Google Spreadsheets

It’s a very easy graph to generate with any simple spreadsheet application, but I’m frequently asked if I have a sample version of the graph that I can share. So, here’s a version as a Google spreadsheet.

[Image: Alternative Burndown Graph spreadsheet]

A snapshot of the Alternative Burndown Graph spreadsheet showing all the data needed to calculate the graph, and also the charting tab.

The spreadsheet contains two tabs. The first tab contains the data necessary for the graph, and the second tab contains the graph. The graph is generated using the data in the rows titled Series A and Series B.

To start using this graph,

  1. Make a copy of the Google Spreadsheet
  2. Enter the total of the team’s estimates in the product backlog into the first column of Series A.
  3. Thereafter, all you need to record is the total of the team’s estimates completed at the end of each Sprint, and
  4. the total of the team’s estimates added to the Product Backlog (by the Product Owner) during the Sprint.

Updating the spreadsheet data

After (1) copying the spreadsheet, you’ll need to (2) enter the total work in the product backlog, (3) record the work completed during each Sprint, and (4) record the total work added during each Sprint.

So, there you have it. A nice simple Alternative Burndown. Let me know if you find this useful.

[1] Teams will seldom calculate the total amount of work for the entire Product Backlog … because this might be a large amount of work, especially for very large Product Backlogs. It is more likely that the Product Owner and team will select some subset of the Product Backlog (say the top 20%, or perhaps sufficient work for the next release) and then generate this graph for that subset.
[2] The fact that we need to ‘add up the total amount of work remaining in the backlog’ is often interpreted as an implication that we need a numerical scale. This is not so; we can easily use an abstract scale, although it’s something most people are uncomfortable with. To make it easy to generate this graph many teams will adopt a numerical sequence, but it’s important to remember that it doesn’t matter what scale you use, provided that you’re consistent.

Categories: Blogs

Bringing Leadership Agility to Agile

TV Agile - Wed, 09/03/2014 - 19:42
Agile can transform software development teams – and other kinds of teams – in amazing ways. As Agile takes hold and proves its worth, people begin to see the need for a new kind of “leadership culture” at the senior and middle management levels. What is needed is not simply the application of Agile methods […]
Categories: Blogs

Agile Java Developer, Frankfurt am Main, Germany

DevAgile.com - Wed, 09/03/2014 - 17:51
A leading financial software house is seeking a skilled Senior Java Developer to design and develop algorithmic trading platforms and trade execution systems. These white-label platforms are used by over 175 trading firms, including investment banks, hedge funds and asset managers around the world.
Categories: Communities

Ranking: the Best and Worst Roles to Transition to ScrumMasters

Learn more about our Scrum and Agile training sessions on WorldMindware.com

So you’re trying to do Scrum well because you heard it gave you great results.  You know that the ScrumMaster role is critical.  How do you find the right people to fill that role?  Here is a list of several roles that people commonly leave to become ScrumMasters, and a few not-so-common roles as well, all ranked by how well those people typically do once they become ScrumMasters.  From Worst to Best:

  • Worst: PMI-trained Project Managers (PMPs).  Too focused on control and cost and schedule.  Not easily able to give teams the space to self-organize.  Not able to detach themselves from results and focus on the process, values and teamwork needed to make Scrum successful.
  • Bad, but not awful: Functional Managers.  The power dynamic can be a serious hindrance to allowing teams to self-organize.  But, good functional managers already are good at building teams, and empowering individuals to solve problems.
  • Bad, but not awful: Technical Leads.  Here, the biggest problem is the desire to solve all the team’s technical problems instead of letting them do it.  Now, instead of detachment from results (project managers), it’s detachment from solutions.
  • So-so: Quality Assurance people.  Good at rooting out root-cause for problems… just need to shift from technical mindset or business mindset to cultural and process mindset.  Another problem is that QA is not nearly as respected as it should be and QA people typically don’t have a lot of organizational influence.
  • So-so: Junior techies: Enthusiasm can make up for a lot of gaps and naiveté can help solve problems in creative ways, but there will be a huge uphill battle in terms of respect and influence in an organization.
  • Good: non-PMI-trained Project Managers: rely on teamwork and influence rather than tools, processes and control (of course there are exceptions).
  • Awesome: Executive Assistants.  Respected and respectful, use influence for everything, know everyone, know all the processes, know all the ways to “just get it done”. Of course, they don’t usually know much about technology, but that often turns out to be good: they don’t make assumptions, and are humble about helping the technical team!

The ScrumMaster creates high performance teams using the Scrum process and values.  The ScrumMaster is not accountable for business results, nor project success, nor technical solutions, nor even audit process compliance.  The ScrumMaster is responsible for removing obstacles to a team’s performance of their work.  The ScrumMaster is an organizational change agent.

Other things you might want to consider when looking for a ScrumMaster:

  • Does the person have experience managing volunteer groups?
  • Does the person have good people skills in general?
  • Does the person want to create high-performance teams?
  • Can the person learn anything – business, process, technical, people, etc.?

Bottom line: try to avoid having PMI-trained project managers become ScrumMasters.  Even with good training, even with time to adjust, I often find that Scrum teams with PMI-trained project managers keep struggling and almost never become true teams.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

Oh, how I long for the illusive green pastures …

Derick Bailey - new ThoughtStream - Wed, 09/03/2014 - 14:00

[Image: the grass is not greener]

Everyone Shovels Poop

It doesn’t matter who you are, what you do, or where you work.

I’m here at the end of yet another 3 day weekend with my kids. Last week, my son was at home in between day care and school. This week, it’s Labor Day in the U.S. and school is closed. And it’s days like this, weeks where I lose multiple days, that I miss the regularity of a “normal” job.

The Grass Really Isn’t Greener

I sulk at lost time for projects and clients, at my inability to be productive today when today is a part of my weekly plan to get things done. I long for a job where I leave the work at work, and don’t worry about it again until I get back to the office.

But that green pasture I remember and look fondly upon is only a sales pitch in my faulty memory.

It Takes A Little BS …

The truth is, there is no green pasture over there. Nor is there a green pasture here where I am now. In fact, the only green in any pasture that you will come across in your career is the one where the right amount of crap mixes with the right amount of blue skies, sunshine, rain and wind.

The truth is, it takes all kinds of weather – good and bad – for a pasture to grow green enough that it will be of value for a while. But that value has a limit and it gets used up, so the process starts again.

It’s during the rainy season and the following dry season that we have to do the hard work. This work, shoveling the BS and spreading it around the now dry wasteland of pasture, is what allows the pasture to be green again in the next cycle.

How Much Of What Kind Are You Willing To Live With?

You will always face a fair amount of crap in your job – whether you work for someone else or for yourself.

The secret to being happy at your job, then, is not to look at another job, hoping for a greener pasture. That green is temporary.  The secret is to recognize the cycles of fertilization, growth and harvest.

Don’t try to find a job without BS – that will never happen. Find a job where the crap you have to shovel is the kind you know you can live with; where the green pasture that comes around once every cycle of seasons makes the hard work and stench worth it.

… To A Point

It’s true, you will have to deal with hard work and BS at times. But there’s also a time when you may legitimately be drowning in feces at any given job. You need to understand how much of this “fertilizer” you’re willing to live with, and what kind of fertilizer it is you can live with. Don’t let yourself be swallowed up, but don’t turn and run at the first sign of work, either.

    – Derick

Categories: Blogs

Scaling Agile — a perfect method

Business Craftsmanship - Tobias Mayer - Wed, 09/03/2014 - 09:01

There are lots of frameworks and suggestions these days for how to scale Agile to the enterprise. They all miss the point, because the proponents of these methods use the wrong definition for scaling. I have a perfect method for scaling Agile. To describe it I’ll use a fish metaphor. A fishmonger scales fish in just the same way we should be scaling Agile. I actually call this method Scaling and Filleting Agile, or SaFA for short (pronounced “safer”).

SaFA begins with the (quite reasonable) assumption that if you are an enterprise attempting to “do Agile” then you have likely accumulated a lot of crap: expensive tracking tools, various flavors of consultant, painful, pointless meetings, new roles that look exactly like the old roles but with different names, and no shortage of charts, graphs, burnups, burndowns… and burnout.

The fish is your enterprise. Looks good from the outside, but unstomachable within. So start scaling. Remove the veneer of Agile with all its phony buzzwords, its bright, shiny corporate artifacts. Then take out the choking hazards, the bottlenecks to actual productivity, the organizational impediments to true engagement, the small annoyances that make people vomit. Finally, remove the guts and gore that represent all the rest of the waste, the remnants of a life you are no longer bound to.

Congratulations. The value of your enterprise fish has risen in the market, and now you have something to plan a recipe around. 

Categories: Blogs

Agile Project Steering with Artifacts and KPIs

Scrum 4 You - Wed, 09/03/2014 - 07:28

Recently I got to work with a team that develops drivers and had a big problem: it could never meet its Sprint commitments. After two months the team was at the point of not wanting to do Scrum anymore. The framework simply didn’t suit them, they said.

I sat down with the team and we began to map out the process as it currently was. Usually, commitments cannot be met when the team has to handle many unplanned tasks (support, change requests) over the course of the Sprint. That was not the case here – the share of unplanned tasks was comparable to that of other teams. The real problem was the number of Backlog Items being worked on at the same time. The more drivers the team developed in parallel, the longer completion took. But why did the team take on so many drivers at once? To get to the bottom of it, we set up artifacts that would make the flow of work visible:

  1. The taskboard: Instead of the classic three columns (Open/In Progress/Done), we began to map the actual workflow of driver development, with one column per work step. We also limited the rows on the taskboard so that at most five drivers could be in progress at the same time.
  2. The Cumulative Flow Diagram: Every week we count how many tasks sit in each column of the taskboard. This produces one curve per column, extended weekly, from which you can read whether the work process is flowing. In our case we were dealing with a flat “Done” curve (the number of completed drivers was not increasing), while e.g. the “Waiting for Test” column climbed steeply because new drivers were waiting for the integration test. That showed us where the bottlenecks in the system were, and what we had to change to restore flow. The Cumulative Flow Diagram also lets you read off Lead Time and Work in Progress (see the next point).

    [Image: Cumulative Flow Diagram]

  3. WIP and Lead Time: We agreed on two KPIs (Key Performance Indicators). WIP (Work in Progress): the number of drivers in development at the same time. We set the upper limit at five and built the taskboard as a quasi-physical boundary accordingly. Lead Time: the time that passes between a new driver entering development and its release. Here we began to give every Backlog Item on the taskboard two timestamps, “IN” and “OUT”. The longer the Lead Time, the less productive the team. (A small sketch of both KPIs follows after this list.)
  4. The release burndown chart: The Product Owner had planned 13 new drivers (new developments and updates) for the year. With the burndown chart we could visualize the drivers actually completed, and see from the curve whether the goal was still realistic given the current Lead Time.
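
A minimal sketch of the two KPIs in code (the drivers and dates are hypothetical):

    from datetime import date

    # IN/OUT stamps per backlog item; None means still in progress.
    stamps = {
        "Driver A": (date(2014, 8, 4), date(2014, 8, 22)),
        "Driver B": (date(2014, 8, 11), None),
    }

    wip = sum(1 for _, out in stamps.values() if out is None)
    lead_times = [(out - start).days for start, out in stamps.values() if out]

    print("WIP:", wip)
    print("Average lead time:", sum(lead_times) / len(lead_times), "days")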

What changed?

Transparency always comes before change. Especially as an experienced agile practitioner, one should refrain from offering quick advice. Only once you have truly understood how the team currently works can you identify the levers for change. In this case it became clear that the team was confronted with long waiting times, because it depended on support from other teams for creating scripts and for verification and validation. The team filled these waiting times by starting something new in the meantime. We know the phenomenon from starting too many downloads at once: in the end everything takes much longer. That is why the team could not meet its commitments. From the outside the team was seen as unreliable; in reality it was simply doing too much at once.

To win back trust in the team’s plannability, the team started making weekly forecasts. On Monday everyone stood in front of the taskboard and decided together what they wanted to have achieved by Friday. The taskboard as it should look on Friday was sketched out and hung next to the current taskboard for comparison. That worked well.

Through the release burndown and the Lead Time, the Product Owner and management got two honest indicators of what could actually be accomplished. For the first time they had a reliable basis for planning. And the ScrumMaster could call the impediments standing in the way of productivity by name. In the medium term these were resolved through closer collaboration with the other teams (a daily SoS), so that dependencies could be addressed immediately and better synchronization kept waiting times from arising in the first place. In the long term the goal is for the team to take more and more into its own hands (testing, for example) and thus become faster.

Related posts:

  1. Das Daily Scrum – ein How-To für das kürzeste Scrum Meeting
  2. Scrum Tools | tinypm | Review
  3. Das Burndown-Chart – 10 Gründe dafür

Categories: Blogs

SAFe for Lean Systems Engineering

Agile Product Owner - Tue, 09/02/2014 - 22:38

In the last year or so, we’ve certified an increasing number of SPCs who work in enterprises building really big systems — hi-res image capture satellites, computer servers, aviation electronics and more. In addition, SAFe was developed, in part, in the context of Nokia feature phones (the surviving part that was sold to Microsoft), John Deere (computer controls and embedded software for combines, tractors and sprayers), and Nokia Siemens Networks (who executed a most spectacular business turnaround), so systems thinking is not new to SAFe.

However, SAFe is optimized for big software, not systems. While SAFe can easily be applied to building a big new banking application that crosses fifty extant applications, building and evolving large-scale embedded systems begs for different words, objects and thinking.

To this end, we’ve expanded the team with the addition of systems expert Harry Koehnemann of the 321 Gang, and we are developing a new variant of SAFe, SAFe for Lean Systems Engineering. We should have a public announcement of a preview in the next few months, with a goal of an MVP release sometime in 2015.

The Purpose of this Post

As we’ve been formulating and structuring content, we’ve been asking ourselves the question, what is Lean Systems Engineering? What do we stand for? What are the values, principles and practices that it entails?

To that end, we offer the following Manifesto for SAFe Lean Systems Engineering, and we are publishing it here to open up for broader industry comments.

For those for whom this topic is of interest, feel free to review and comment here. Just as we’ve developed SAFe iteratively and in full public view, we’ll do the same with SAFe LSE, so consider this the beginning, not the end of an important dialogue.

Manifesto for SAFe Lean Systems Engineering

Lean systems engineering is a discipline whereby empowered systems developers work in cross-functional teams building systems that benefit customers and users. Fed by constant customer collaboration, led by lean thinking manager-teachers, we:

  • Understand the economics of the value chain
  • Develop systems iteratively and incrementally
  • Integrate frequently; adapt immediately
  • Manage risk, efficacy and predictability with model- and set-based engineering
  • Embrace agile development values and practices
  • Decentralize decision-making, synchronize via cross-domain collaboration and planning
  • Limit work in process. Reduce batch sizes. Manage queue lengths.
  • Base milestones on objective evaluation of working systems

We commit to building better systems and to continuously improving and disseminating our methods and practices.


Categories: Blogs

Adapting Scrum to UX

Scrum Expert - Tue, 09/02/2014 - 21:54
User eXperience (UX) includes the practical, affective and valuable aspects of human-computer interaction and product ownership. This article from Anindya Sengupta tries to answer common questions about UX and Scrum. It explores the challenges faced by a team working with a separate UX team in Scrum. It also gives recommendations for UX teams that are part of a Scrum team. The article is based on a scenario where a development team and a UX team work together on a Scrum project from two different locations. The article provides some tips on ...
Categories: Communities

Using The NodeJS Debugger On Code Called From Grunt (And Grunt-Jasmine-Node Specs)

Derick Bailey - new ThoughtStream - Tue, 09/02/2014 - 21:51

I found myself needing to run a debugger on my Jasmine specs. The really fun part is that I am running these specs through the grunt-jasmine-node plugin for grunt. This means what I really need to do is run a debugger on top of grunt, and have it hit my Jasmine specs when they get around to being executed.

It turns out I can do this with a rather simple one-liner in a bash shell (or command prompt on Windows), and I instantly have access to the built-in NodeJS debugger when running code from any grunt command! But to get there, I had to follow a few steps down an interesting path and learn how to hijack grunt’s script execution.

[Image: debugging grunt-jasmine-node]

Hijacking Grunt Execution

The first step in the solution is to hijack the execution of grunt so that you are running it with a call to NodeJS directly, instead of just using the Grunt command line tool. To do that, you need to know where the Grunt command line tool lives. On OSX and Linux, you can get the “grunt” file location by running:
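
    $ which grunt
    /usr/local/bin/grunt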

As you can see, the grunt command line is located at /usr/local/bin/grunt on my box. A quick “cat” of that file shows that it is a script with a shebang (hash-bang) telling the shell how to execute the file as a NodeJS script.

Now that I know this is a NodeJS script, I have options. I could modify the file to run a different shell command, but that’s a bad idea. Any updates or re-installs would require me to change that file again. No thanks. The other option I have is just calling the “node” command line myself and telling node to run this script.
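
On my box, that looks like:

    $ node /usr/local/bin/grunt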

I’m now running the “grunt” script directly with node, instead of letting the shell figure out how to run it.

Using “which” To Simplify Execution

Again, on OSX and Linux I can use the “which” command to make this even easier. Rather than having to remember where the grunt command is when I want to run it from node directly, I can use the $( … ) call. This call executes a child script and interpolates the output into the parent command. At the same time, I’m going to add the “debug” call to the node command line.
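
    $ node debug $(which grunt)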

When I run this, it gets expanded into the same direct command line call as above (with the addition of “debug” of course). Only now, I don’t have to remember the path to the “grunt” command line tool. (Sadly, I don’t know how to get this effect on Windows. You might have to hard code the location of the grunt script for that OS.)

Making It A Bash Script

As short as the above script is, I don’t want to type it all out every time I want to debug code run from grunt. So I stuffed that one-liner into a bash script with its own shebang (hash-bang) telling it to use bash to run the script.

Now that I have this in its own script, I need to be able to pass command line parameters through to the real command calls. On OSX and Linux, that can be done with “$@” – meaning “all command line parameters” – or “$1”, “$2”, … “$n” (where “n” is the parameter position number). On Windows, the same can be done with “%*” to get all params, or “%1”, “%2”, “%3” … “%n” (thanks to Alexander Gross in the comments, for the “%*” tip).

The resulting script on my OSX box looks like this:
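
    #!/bin/bash
    # run grunt under the node debugger, forwarding all command line parameters
    node debug $(which grunt) "$@"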

Running The Debugger On Jasmine-Node

I now have a “grunt-debug” file in my project folder. When I need to debug my grunt-jasmine-node tests now, I just run this:
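
    $ ./grunt-debug jasmine_node   # task name assumed; use whatever task runs your specs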

My breakpoints hit and I can debug into the code (including my grunt-jasmine-node specs) with the built-in NodeJS debugger!


P.S. New To Debugging JavaScript? Or, Want To Use A Visual Debugger?

If you’re not familiar with the command line debugger that comes with NodeJS, or if you want to use a visual debugger such as Visual Studio, WebStorm, or even a browser based debugger, check out my series on debugging JavaScript at WatchMeCode. You’ll find an introduction to all of the major JavaScript debuggers that I’ve used in the last 5 years.


Categories: Blogs

September 2014 Real Agility Newsletter Published

Learn more about our Scrum and Agile training sessions on WorldMindware.com

Contents:

Message from Mishkin
Product Owner Training – New Agenda
We’re Hiring!
Upcoming Learning Events
Other Information and Links

Please feel free to subscribe to the Real Agility Newsletter to gain access to archives and receive future issues.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

A Little Tense

lizkeogh.com - Elizabeth Keogh - Tue, 09/02/2014 - 18:29

Following on from my last blog post about deriving Gherkin from conversations, I wanted to share some tips on tenses. This is beginner stuff, but it turns out there are a lot of beginners out there! It also isn’t gospel, so if you’re doing something different, it’s probably OK.

Contexts have happened in the past

When I phrase a context, I often put it in the past tense:

Given Fred bought a microwave

Sometimes the past has set up something which is ongoing in the present, but it’s not an action as much as a continuation. So we’ll either use the present continuous tense (“is X-ing”) or we’ll be describing an ongoing state:

Given Bertha is reading Moby Dick

Given Fluffy is 1 1/2 months old

It doesn’t matter how the context was set up, either, so often we find that contexts use the passive voice for the events which made them occur (often “was X-ed” or “has been X-ed”, for whatever the past tense of “X” is):

Given Pat’s holiday has been approved

Given the last bar of chocolate was sold

Events happen in the present

The event is the thing which causes the outcome:

When I go to the checkout

When Bob adds the gig to his calendar

I sometimes see people phrase events in the passive voice:

When the last book is sold

but for events, I much prefer to change it so that it’s active:

 When we sell the last book

When a customer buys the last book

This helps to differentiate it from the contexts, and makes us think a bit harder about who or what triggers the outcome.

Outcomes should happen

I tend to use the word “should” with outcomes these days. As well as allowing for questioning and uncertainty, it differentiates the outcome from contexts and events, which might otherwise have the same syntax and be hard to automate in some frameworks as a result (JBehave, for instance, didn’t actually care whether you used Given, When or Then at the beginning of a step; the keyword just told it there was a step to run).

Then the book should be listed as out of stock

Then we should be told that Fluffy is too young

I often use the passive voice here as well, since in most cases it’s the system producing the outcome, unless it’s pixies.
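
Putting the three tenses together, a hypothetical end-to-end scenario (the stock level in the Given is invented for the example) might read:

    Given only one copy of "Moby Dick" is in stock
    When a customer buys the last book
    Then the book should be listed as out of stock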

And that’s it!


Categories: Blogs

Continuous Value Delivery the Agile Way

J.D. Meier's Blog - Tue, 09/02/2014 - 17:53

Continuous Value Delivery helps businesses realize the benefits from their technology investments in a continuous fashion.

Businesses these days expect at least quarterly results from their technology investments.  The beauty is, with Continuous Value Delivery they can get it, too.  

Continuous Value Delivery is a practice that makes delivering user value and business value a rapid, reliable, and repeatable process.  It’s a natural evolution from Continuous Integration and Continuous Delivery.  Continuous Value Delivery simply adds a focus on Value Realization, which addresses planning for value, driving adoption, and measuring results.

But let’s take a look at the evolution of software practices that have made it possible to provide Continuous Value Delivery in our Cloud-first, mobile-first world.

Long before there was Continuous Value Delivery, there was Continuous Integration …

Continuous Integration

Continuous Integration is a software development practice where team members integrate their work frequently.  The goal of Continuous Integration is to reduce and prevent integration problems.  In Continuous Integration, each integration is verified against tests.

Then, along came, Continuous Delivery …

Continuous Delivery

Continuous Delivery extended the idea of Continuous Integration to automate and improve the process of software delivery.  With Continuous Delivery,  software checked in on the mainline is always ready for release.  When you combine automated testing, Continuous Integration, and Continuous Delivery, it's possible to push out updates, fixes, and new releases to customers with lower risk and minimal manual overhead.

Continuous Delivery changes the model from a big bang approach, where software is shipped at the end of a long project cycle, to where software can be iteratively and incrementally shipped along the way.

This set the stage for Continuous Value Delivery …

Continuous Value Delivery

Continuous Value Delivery puts a focus on Value Realization as a first-class citizen.  

To be able to ship value on a continuous basis, you need a simple mechanism for units of value.  Scenarios and stories are an effective way to chunk and carve up value into useful increments.  Scenarios and stories also help with driving adoption.

For Continuous Value Delivery, you also need a way to "pull" value, as well as "push" value.   Kanbans provide an easy way to visualize the flow of value, support a “pull” mechanism, and reinforce “the voice of the customer.”   User stories provide an easy way to create a backlog or catalog of potential value that you can “push” based on priorities and user demand.

Businesses that are making the most of their technology investments are linking scenarios, backlogs, and Kanbans to their value chains and their value streams.

Value Planning Enables Continuous Value Delivery

If you want to drive continuous value to the business, then you need to plan for it.  As part of value planning, you need to identify key stakeholders in the business.  With those stakeholders, you need to identify the business benefits they care about, along with the KPIs and value measures for those benefits.

At this stage, you also want to identify who in the business will be responsible for collecting the data and reporting the value.

Adoption is the Key to Value Realization

Adoption is the key component of Continuous Value Delivery.  After all, if you release new features, but nobody uses them, then the users won't get the new benefits.   In order to realize the value, users need to use the new features and actually change their behaviors.

So while deployment was the old bottleneck, adoption is the new bottleneck.

Users and the business can only absorb so much value at once.  In order to flow more value, you need to reduce friction around adoption, and drive consumption of technology.  You can do this through effective adoption planning, user readiness, communication plans, and measurement.

Value Measurement and Reporting

To close the loop, you want the business to acknowledge the delivery of value.   That’s where measurement and reporting come in.

From a measurement standpoint, you can use adoption and usage metrics to better understand what's being used and how much.  But that’s only part of the story.

To connect the dots back to the business impact, you need to measure changes in behavior, such as what people have stopped doing, started doing, and continue doing.   This will be an indicator of benefits being realized.

Ultimately, to show the most value to the business, you need to move the benefits up the stack.  At the lowest level, you can observe the benefits, by simply observing the changes in behavior.  If you can observe the benefits, then you should be able to measure the benefits.  And if you can measure the benefits, then you should be able to quantify the benefits.   And if you can quantify the benefits, then you should be able to associate some sort of financial amount that shows how things are being done better, faster, or cheaper.

The value reporting exercise should help inform and adjust any value planning efforts.  For example, if adoption is proving to be the bottleneck, now you can drill into where exactly the bottleneck is occurring and you can refocus efforts more effectively.

Plan, Do, Check, Act

In essence, your value realization loop is really a cycle of plan, do, check, act, where value is made explicit, and it is regarded as a first-class citizen throughout the process of Continuous Value Delivery.

That’s a way better approach than building solutions and hoping that value will come or that you’ll stumble your way into business impact.

As history shows, too many projects try to luck their way into value, and it’s far better to design for it.

Value Sprints

A Sprint is simply a unit of development in Scrum.   The idea is to provide a working increment of the solution at the end of the Sprint that is potentially shippable.

It’s a “timeboxed” effort.   This helps reduce risk as well as support a loop of continuous learning.  For example, a team might work in 1-week, 2-week or 1-month Sprints.   At the end of the Sprint, you can review the progress, and make any necessary adjustments to improve for the next Sprint.

In the business arena, we can think in terms of Value Sprints, where we don’t want to stop at just shipping a chunk of value.

Just shipping or deploying software and solutions does not lead to adoption.

And that’s how software and IT projects fall down.

With a Value Sprint, we want to add a few specific things to the mix to ensure appropriate Value Realization and true benefits delivery.  Specifically, we want to integrate Value Planning right up front, and as part of each Sprint.   Most importantly, we want to plan and drive adoption as part of the Value Sprint.

If we can accelerate adoption, then we can accelerate time to value.

And, of course, we want to report on the value as part of the Value Sprint.

In practice, our field tells us that Value Sprints of 6-8 weeks tend to work well with the business.    Obviously, the right answer depends on your context, but it helps to know what others have been doing.   The length of the loop depends on the business cadence, as well as how well adoption can be driven in an organization, which varies drastically based on ability to execute and maturity levels.  And, for a lot of businesses, it’s important to show results within a quarterly cycle.

But what’s really important is that you don’t turn value into a long-winded run or a long shot down the line, and that you don’t simply hope that value happens.

Through Value Sprints and Continuous Value Delivery you can create a sustainable approach where the business realizes the value from its technology investments in a more reliable way, for real business results.

And that’s how you win in the game of software today.

You Might Also Like

Blessing Sibanyoni on Value Realization

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Blogs

Sarajevo Special Scrum.org Product Owner Training

Agile Tips - Tue, 09/02/2014 - 16:36




Studies* show that certain memories help us learn and remember more effectively. Combine a course with a team-building exercise and an acclaimed movie, and you have something amazing: a training with lasting impact.
This is exactly what we wanted to achieve with the first Scrum.org Product Owner training organised by Bosnia Agile.
The PSPO (link) training was built around two events: the Sarajevo Film Festival and a rafting team-building exercise. It does not come as a surprise that the event was sold out six weeks in advance.
Between watching a great movie, ‘The Railway Man’, and rafting in this beautiful region, the students were also learning everything important about Scrum, value, lean and agile product ownership. Since I was the trainer, it might sound pretentious of me to say how awesome this training was, but in all modesty, it truly was. It was an outstanding and new experience, and not just for me. I made many friends and shared many great moments with my fellow students. I am sure that Sarajevo played a big part in this. Sarajevo, the capital of Bosnia-Herzegovina, is a very dynamic and friendly city surrounded by beautiful nature, and with good reason it was the host of the ’84 Winter Olympics. I cannot wait to go there again.
25 happy Product Owners and a happy trainer cannot be wrong. We were so pleased that the organisers and I decided to repeat this setup next year.
If you live somewhere in the EU you should consider combining education with worthwhile memories. More of the training content will stick and, even better, you will have more fun learning. Consider this: even after budgeting in the costs of transportation and accommodation, it is probably still cheaper than your nearby training provider.
See you next year at the 21st Sarajevo Film Festival …

*) http://brainbasedteaching.wikispaces.com/file/view/Caine+114.pdf, http://pratclif.com/brain/neurons.htm
Categories: Blogs

Don’t Estimate Stories In Sprint Planning

Leading Agile - Mike Cottmeyer - Tue, 09/02/2014 - 15:46

This is part three in a series on estimating. Part one was “Don’t Estimate Software Defects” and Part two was “Don’t Estimate Spikes”.

I don’t estimate stories in sprint planning. Nor do I re-estimate stories in sprint planning. I estimate stories in a separate estimating meeting, usually at least a couple of sprints in advance, if not more. There are a few reasons why (re)estimating during sprint planning is a dangerous practice. In sprint planning, we are thinking at a lower level of detail, with far greater knowledge about the story, the code base and the system than we had when we estimated the rest of the backlog. You cannot correctly estimate such a story relative to some other story estimated with far less detail. Such a practice leads to velocity inflation and risky release planning.

Different Level of Detail

Once we get to sprint planning, we may have implemented a spike related to the story and we’re digging into the story details. We’ve thought about the design of the code for the story. We’re digging into the tasks. We know a whole lot more about this story than we did when we estimated the rest of the backlog.

So, if I estimate or reestimate the story in sprint planning, I CAN’T estimate it relative to the rest of the backlog. I don’t know that level of detail about the rest of the stories. I’m now thinking at a different level of detail, using a different kind of thinking than when we estimated the other stories.

We know our estimates are wrong. We knew they were wrong when we estimated in release planning, and we know they are still wrong when we get to sprint planning. That’s why we have velocity — an empirical measure that takes all that wrongness into account.

We don’t need the right estimate to keep us from overcommitting. During sprint planning, we break the stories down into tasks, estimate those tasks, and compare the task estimates against our capacity. It’s that, not points, that keeps us from overcommitting in this sprint. No need to change the estimate.

“So what?” you ask. Bear with me and I’ll tell you why you care.

Greater Knowledge and Skill

If you are making release commitments, if you have a release plan, or if you follow our other advice, then your release backlog was likely estimated many weeks ago. All those estimates didn’t have the benefit of what we know now. Many of those estimates are suspect. Likely wrong. If we allow estimate changes once we hit sprint planning, there will be even more estimate changes in sprint planning. This story is not the only wrong estimate. If we are discovering poor estimates too often, we should reconsider how well groomed the backlog is and whether we should reestimate the whole remaining product backlog.

“So what?”, you ask “I just like my estimates to be right anyway.” Consider how reestimating in sprint planning might affect our velocity and therefore release planning…

Velocity Inflation

New estimates are most likely going to be higher, not lower. If an estimate is too big, development is not concerned. They might not mention it. If an estimate is too small, development is very concerned.

Think about how often you change or want to change an estimate in sprint planning. Hold that number in your head for a moment. Maybe it’s one story every other sprint? Maybe more? Perhaps 1 out of 20? Also consider how big of an increase it usually is. Is it from a 3 to a 5? From a 1 to a 2? From a 5 to an 8? Perhaps it’s usually a 1.6x increase?

Think about the rest of the product backlog for a moment. If we tend to increase 5% of the stories 1.6x AT THE LAST MINUTE, then our velocity relative to the remaining product backlog is TOO BIG. The remaining backlog is estimated relative to the original points, not relative to the reestimated points.

Example: Suppose we typically do around 10 stories and 20 points per sprint, and the distribution in size is something like [1, 1, 1, 2, 2, 2, 3, 3, 5]. Suppose it’s a 3 point story that becomes a 5. If we complete all those stories, should we say our velocity is 20 or 22? Say we have 400 points remaining in our release backlog. Can we implement 22 points out of that release backlog every sprint? I would say no, you can only knock out 20 backlog points each sprint.

To look at it another way, you think you have 400 points in your backlog, but since you are going to reestimate them on the fly, you really have 440 points to do. You won’t get done in 400/22 iterations. You’ll get done in 440/22 iterations. Or 400/20.
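
As a back-of-the-envelope check, here is the same math as a small Python sketch (using the figures from the example above):

    backlog = 400            # release backlog, in original points
    velocity = 20            # originally estimated points actually completed per sprint
    inflated_velocity = 22   # what we report after last-minute re-estimates

    print("Plan claims:", backlog / inflated_velocity, "sprints")  # ~18.2
    print("Reality:    ", backlog / velocity, "sprints")           # 20.0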

Our team’s ability to produce, their capacity, is stable or at least not increasing rapidly. That is our cash on hand. The price of stories, however, can increase more rapidly due to inflation. Silly analogy: I want to buy a backlog that is originally priced at $400. I can’t pay for it all at once, so I start putting aside some money each pay period. I can put aside $22 per period toward this backlog, but by the time I’ve saved up enough for the original $400, I find that the store has raised the price to $440. Bummer.

Conclusion

If I’m making a release commitment, I want to be absolutely sure I’m not overcommitting. I want to under commit and over deliver. I’m going to evaluate risks, reduce risk early, put risk mitigation stories in my backlog (create options) and reserve buffer for those risks I’m accepting instead of mitigating. I’m going to anticipate that I don’t know all the stories that we need to do. We haven’t thought of some of the dark matter we’re going to have to deal with, so I reserve a little buffer for that. That’s a reasonable and prudent approach. I absolutely don’t want velocity inflation to consume those little buffers. I need them for what I set them aside for. If I allow velocity inflation to use up my contingency, then I’m running a riskier project than I think I am.

It is for similar reasons that I don’t estimate newly discovered defects or unplanned spikes. If you aren’t careful, you’ll inflate your velocity and under estimate the size of your remaining backlog.

The post Don’t Estimate Stories In Sprint Planning appeared first on LeadingAgile.

Categories: Blogs
