
Feed aggregator

Journée Agile, Liege, Belgium, September 11 2014

Scrum Expert - Mon, 08/25/2014 - 15:39
The Journée Agile is a one-day conference focused on agile software development approaches like Scrum that takes place in Belgium every year. All the presentations and workshops are in French. The keynote of the 2014 edition will be given by Jurgen Appelo. In the agenda you can find topics like “Des outils du monde de la psychologie pour les equipes Scrum et Agile”, “L’attitude de Testing Agile”, “Spécifications Agiles”, “Passer de Scrum à Scrumban, pour quoi faire?” or “Real options – Prises de décisions”. Web site: http://www.journeeagile.be/ Location for the 2014 conference: HEC-ULg, ...
Categories: Communities

Capacity Planning and the Project Portfolio

Johanna Rothman - Mon, 08/25/2014 - 15:17

I was problem-solving with a potential client the other day. They want to manage their project portfolio. They use Jira, so they think they can see everything everyone is doing. (I’m a little skeptical, but, okay.) They want to know how much the teams can do, so they can plan capacity around what the teams can do. (Red flag #1)

The worst part? They don’t have feature teams. They have component teams: front end, middleware, back end. You might, too. (Red flag #2)

Problem #1: They have a very large program, not a series of unrelated projects. They also have projects.

Problem #2: They want to use capacity planning, instead of flowing work through teams.

They are setting themselves up to optimize at the lowest level, instead of optimizing at the highest level of the organization.

If you read Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, you understand this problem. A program is a strategic collection of projects where the business value of all the projects together is greater than that of any one project by itself. Each project has value. Yes. But taken together, the program has much more value. You have to consider the program as a whole.

Don’t Predict the Project Portfolio Based on Capacity

If you are considering capacity planning based on the teams’ estimates or their previous capacity, don’t do it.

First, you can’t possibly know based on previous data. Why? Because the teams are interconnected in interesting ways.

When you have component teams, not feature teams, their interdependencies are significant and unpredictable. Your ability to predict the future based on past velocity? Zero. Nada. Zilch.

This is legacy thinking from waterfall. Well, you can try to do it this way. But you will be wrong in many dimensions:

  • You will make mistakes because of prediction based on estimation. Estimates are guesses. When you have teams using relative estimation, you have problems.
  • Your estimates will be off because of the silent interdependencies that arise from component teams. No one can predict these if you have large stories, even if you do awesome program management. The larger the stories, the more your estimates are off. The longer the planning horizon, the more your estimates are off.
  • You will miss all the great ideas for your project portfolio that arise from innovation that you can’t predict in advance. As the teams complete features, and as the product owners realize what the teams do, the teams and the product owners will have innovative ideas. You, the management team, want to be able to capitalize on this feedback.

It’s not that estimates are bad. It’s that estimates are off. The more teams you have, the less your estimates are normalized between teams. Your t-shirt sizes are not my Fibonacci numbers, are not that team’s swarming or mobbing. (It doesn’t matter if you have component teams or feature teams for this to be true.)

When you have component teams, you have the additional problem of not knowing how the interdependencies affect your estimates. Your estimates will be off, because no one’s estimates take the interdependencies into account.

You don’t want to normalize estimates among teams. You want to normalize story size. Once you make story size really small, it doesn’t matter what the estimates are.

When you make the story size really small, the product owners are in charge of the team’s capacity and release dates. Why? Because they are in charge of the backlogs and the roadmaps.

The more a program stops trying to estimate at the low level, uses small stories, and manages interdependencies at the team level, the more momentum the program has.

The part where you gather all the projects? Do that part. You need to see all the work. Yes, that part works and helps the program see where they are going.

Use Value for the Project Portfolio

Okay, so you try to estimate the value of the features, epics, or themes in the roadmap of the project portfolio. Maybe you even use the cost of delay as Jutta and I suggest in Diving for Hidden Treasures: Finding the Real Value in Your Project Portfolio (yes, this book is still in progress). How will you know if you are correct?

You don’t. You see the demos the teams provide, and you reassess at reasonable intervals. What’s reasonable? Not every week or two. Give the teams a chance to make progress. If people are multitasking, not more often than once every two months, or every quarter. They have to get to each project. Hint: stop the multitasking and you get tons more throughput.

Categories: Blogs

Vert.x with core.async. Handling asynchronous workflows

Xebia Blog - Mon, 08/25/2014 - 13:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and JavaScript promises.

Clojure's answer comes in the form of core.async: An implementation of CSP for both Clojure and Clojurescript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event-handler.

Vert.x is a young, light-weight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow into a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan hardness of the rock
  3. Use the hardness to calibrate and fire the laser. Report back status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            ;; go-loop, alts! and timeout are referred here as well, for the later examples
            [clojure.core.async :refer [go go-loop chan put! <! alts! timeout]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function which returns a channel on which the result of eb/send will be put.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we could use composition, that would not be very performant: we do not need to wait for the reply from service-A in order to call service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data will include the temperature, humidity and wind speed, which are requested from three different independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback has fired? We would need to keep some memory (mutable state) which is updated as each of the callbacks fires, and process the data when the last one returns.

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))
3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result on the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request has finished, there are no more references to it and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels cannot be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.
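To make these semantics concrete, here is a minimal REPL sketch (plain core.async on the JVM, outside Vert.x; the values are only illustrative):

(require '[clojure.core.async :refer [chan put! close! timeout <!!]])

;; a buffered message survives closing the channel; later reads return nil
(def ch (chan 1))
(put! ch :buffered)
(close! ch)
(<!! ch) ;; => :buffered
(<!! ch) ;; => nil

;; a timeout channel closes itself after the given number of milliseconds
(<!! (timeout 100)) ;; blocks ~100 ms, then returns nil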

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives us the ability to wait on a channel up to a maximum amount of time before giving up. Let's adjust example 2 a bit:

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go-loop [n 3 data {}]
        (if (pos? n)
          (if-some [result (first (alts! [ch t-ch]))]
            (recur (dec n) (merge data result))
            (eb/fail 408 "Request timed out"))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish; otherwise we reply with a timeout failure. Notice that we did not put the timeout parameter in the Vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts much more easily than adding timeout parameters and failure callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me, and which was the original motivation for this blog post, is whether core.async can play nicely with Vert.x. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go blocks or state machine callbacks. Since the dispatched go blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.

Categories: Companies

Docker on a raspberry pi

Xebia Blog - Mon, 08/25/2014 - 08:11

This blog describes how easy it is to use Docker in combination with a Raspberry Pi. Thanks to Docker, deploying software to the Raspberry Pi is a piece of cake.

What is a raspberry pi?
The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It is a capable little computer which can be used in electronics projects and for many of the things that your desktop PC does, like spreadsheets, word processing and games. It also plays high-definition video. A Raspberry Pi runs Linux, has a 700 MHz ARM processor and 512 MB of internal memory. Last but not least, it only costs around 35 euros.

A Raspberry Pi, version B

Because of its price, size and performance, the Raspberry Pi is a step toward the 'Internet of Things'. With a Raspberry Pi it is possible to control and connect everything to everything. Take, for instance, my home project: a Raspberry Pi controlling a robot.

Raspberry Pi in action

What is docker?
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere. A dockerized app contains the application, its environment, dependencies and even the OS.

Why combine docker and raspberry pi?
It is nice to work with a Raspberry Pi because it is a great platform to connect devices. Deploying anything to it, however, is kind of a pain. With dockerized apps we can develop and test our application on our own home machine and, when it works, deploy it to the Raspberry Pi. We can do this without any pain or worries about corrupting the underlying operating system and tools. And last but not least, you can easily undo your tryouts.

What is better than I expected
First of all, it was relatively easy to install Docker on the Raspberry Pi. When you use the Arch Linux operating system, Docker is already part of the package manager! I expected to do a lot of cross-compiling of the Docker application, because the Raspberry Pi uses an ARM architecture (instead of the default x86 architecture), but someone had already done this for me!

Second of all, there are a bunch of ready-to-use Docker images especially for the Raspberry Pi. To run dockerized applications on the Raspberry Pi you depend on base images, and these base images must also support the ARM architecture. For each situation there is an image, whether you want to run Node.js, Python, Ruby or just Java.

The thing that worried me most was the performance of running virtualized software on a Raspberry Pi. But it all went well and I did not notice any performance reduction. Docker requires far fewer resources than running virtual machines: a Docker process runs straight on the host, giving native CPU performance, and using Docker adds only a small overhead for memory and network.

What I don't like about docker on a raspberry pi
The Docker slogan 'build, ship and run any app anywhere' is not entirely valid. You cannot develop your Dockerfile on your local machine and deploy the same application directly to your Raspberry Pi. This is because each Dockerfile builds on a base image. For running your application on your local machine, you need an x86-based Docker image; for your Raspberry Pi you need an ARM-based image. That is a pity, because it means you can only build your Docker image for your Raspberry Pi on the Raspberry Pi itself, which is slow.

I tried several things.

  1. I used the emulator QEMU to emulate the Raspberry Pi on a fast MacBook. But because of the inefficiency of the emulation, it is just as slow as building your Dockerfile on a Raspberry Pi.
  2. I tried cross-compiling. This wasn't possible, because the commands in your Dockerfile are replayed on a running image, and the running Raspberry Pi image can only be run on ... a Raspberry Pi.

How to run a simple node.js application with docker on a raspberry pi  

Step 1: Installing Arch Linux
The first step is to install Arch Linux on an SD card for the Raspberry Pi. The preferred OS for the Raspberry Pi is a Debian-based OS, Raspbian, which is nicely configured to work with a Raspberry Pi. But in this case Arch Linux is better, because we use the OS only to run Docker on it, and Arch Linux is a much smaller and more barebones OS. The best way is to follow the steps at http://archlinuxarm.org/platforms/armv6/raspberry-pi. In my case, I use version 3.12.20-4-ARCH. In addition to the tutorial:

  1. After downloading the image, install it on an SD card by running the command:
    sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
  2. When there is no HDMI output at boot, remove the config.txt from the SD card. It will magically work!
  3. Login using root / root.
  4. Arch Linux will use 2 GB by default. If you have an SD card with a higher capacity, you can resize it using the following steps: http://gleenders.blogspot.nl/2014/03/raspberry-pi-resizing-sd-card-root.html

Step 2: Installing a wifi dongle
In my case I wanted to connect a wireless dongle to the Raspberry Pi, by following these simple steps:

  1. Install the wireless tools:
        pacman -Syu
        pacman -S wireless_tools
        
  2. Setup the configuration, by running:
    wifi-menu
  3. Autostart the wifi with:
        netctl list
        netctl enable wlan0-[name]
    

Because the Raspberry Pi is now connected to the network, you are able to SSH into it.

Step 3: Installing docker
The actual install of Docker is relatively easy. There is a Docker version compatible with the ARM processor used in the Raspberry Pi. This Docker is part of the package manager of Arch Linux; the version used is 1.0.0. At the time of writing this blog, the latest Docker release is version 1.1.2. The features missing from 1.0.0 are:

  1. Enhanced security for the LXC driver.
  2. .dockerignore support.
  3. Pause containers during docker commit.
  4. Add --tail to docker logs.

You can install Docker and start it as a service on system boot with the commands:

pacman -S docker
systemctl enable docker
Installing Docker with pacman

Step 4: Run a single nodejs application
After we've installed Docker on the Raspberry Pi, we want to run a simple Node.js application. The application we will deploy is inspired by the Node.js web app in the tutorial on the Docker website: https://github.com/enokd/docker-node-hello/. This Node.js application prints a "hello world" in the web browser. We have to change the Dockerfile to:

# DOCKER-VERSION 1.0.0
FROM resin/rpi-raspbian

# install required packages
RUN apt-get update
RUN apt-get install -y wget dialog

# install nodejs
RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb
RUN dpkg -i node_latest_armhf.deb

COPY . /src
RUN cd /src; npm install

# run application
EXPOSE 8080
CMD ["node", "/src/index.js"]

And it works!

The web page running in Node.js from a Docker image on a Raspberry Pi

Just by running four little steps, you are able to use Docker on your Raspberry Pi! Good luck!

Categories: Companies

Scrum in Medical Technology: How Do I Put Together a Successful Team?

Scrum 4 You - Mon, 08/25/2014 - 07:30

Even in heavily regulated industries such as medical technology it is possible to develop products with Scrum – word of this has by now spread from the IT departments of German service companies through the innovative mid-sized sector all the way to the large corporations*. But where to start once the decision for Scrum has been made? The success of your project stands and falls with the team.

Scrum, practiced properly, gives you the opportunity to bundle all the know-how you need for developing your product within one team. This minimizes extra effort and redundancy at handover points while almost automatically spreading urgently needed knowledge across several heads.


Challenges for Medical Technology

For manufacturers of medical devices, however, this means more than just seating application developers and testers together. On top of the already complex task of combining hardware and software, engineered parts have to be ordered, risk-management checklists worked through and, above all, regulatory requirements met. So besides hardware and software developers and design engineers, your team also needs a buyer, someone from product documentation and a person familiar with the regulatory requirements.

Don't Misuse Pilot Teams as Weather Forecasters!

Many companies decide to try out Scrum in pilot projects first, so as not to overwhelm the organization and to get a feeling for whether it can work or not. The idea of not taking the organization by surprise is understandable, and setting up a pilot group is indeed advisable.

But: every pilot team will sooner or later hit structural limits, especially if the departments supplying the teams are not trained accordingly. Particularly when it comes to requirements "from the field" or from regulatory authorities, integrating the relevant know-how carriers into a Scrum team can save a great deal of time. Does that mean my QM colleague now spends 100% of their time in the Scrum team? Not necessarily.

Creating a Sense of Responsibility

The commitment of colleagues further removed from development to regularly attend meetings such as Sprint Planning, the Daily or the Review will in itself contribute greatly to making your Scrum teams more efficient.

At one of our customers in laboratory automation, for example, there is a project buyer who regularly attends the Dailies of several Scrum teams and is thus reliably available to the teams at predictable times as a contact person, should there be questions about delivery times, for instance. The immediacy also brings many advantages for the project buyer's own work: he quickly gets a feeling for the urgency of individual orders and for how they may be connected.

Likewise, when creating operating or service manuals, a multitude of delays and duplicated work steps can be avoided by involving the relevant colleagues early. Create an awareness that your project can only succeed through the cooperation of all parties and that Scrum provides the necessary framework for this. Once you have staked out and communicated the framework conditions, your teams will organize themselves the way they need to for user-friendly, standards-compliant product development.

*Both the Technical Information Report TIR 45:2012 of the AAMI (Association for the Advancement of Medical Instrumentation) and the process standard IEC 62304 explicitly give manufacturers the freedom to develop their products as they see fit – as long as product safety and quality remain assured.

Related posts:

  1. Wann ist ein ScrumMaster erfolgreich?
  2. Portfolio
  3. Auch wenn’s mal wieder länger dauert: Pull die wichtigsten Themen zuerst

Categories: Blogs

"How Thin is Thin?" An Example of Effective Story Slicing

Practical Agility - Dave Rooney - Sun, 08/24/2014 - 19:00
Graphene is pure carbon in the form of a very thin, nearly transparent sheet, one atom thick. It is remarkably strong for its very low weight and it conducts heat and electricity with great efficiency. – Wikipedia

If you have spent any time at all working in an Agile software development environment, you've heard the mantra to split your Stories as thin as you possibly can while still
Categories: Blogs

Tips on Writing: A Sunday Experience

Scrum 4 You - Sun, 08/24/2014 - 16:30

I am asked again and again (mostly by my colleagues) how I manage to write books and blog posts on top of my trainings and consulting engagements. It's quite simple: I write. Let me tell you what such a day can look like.

Today – Sunday – I accompanied my wife and a friend of hers to a riding tournament. We got up at 06:00, at 07:00 I groomed our horse Rübe, then we drove for an hour, and at some point I got a 20-minute break. I bought myself a coffee, sat down, started my currently favorite writing program, Writer, a Google Chrome plugin, and wrote.

Twenty minutes later my wife came by; her friend needed help with her horse. So I closed the laptop, packed it up, and spent the next two hours watching the two of them do very well. Then I got another 10 minutes: laptop out, sat down under a tree, and continued writing where I had left off. Of course I always have to rewrite the previous paragraph to get back in, but I managed another few hundred words. My wife comes by and asks me to hold the horse. I close the laptop again. Then we were done, brought the horses back to the stable, took care of the two other horses and drove home. Showered, ate something; my wife drove back to the horses once more (an exception today), and I have been sitting here at the kitchen table writing for the past 75 minutes.

Admittedly, such a day is an exception for me too. I am currently captivated by writing again; otherwise I wouldn't write during the "time-outs". In the past 8 weeks, after handing the manuscript of "Selbstorganisation braucht Führung" to Dolores, my editor, I was done with writing for a while. Written out. But the recent blog posts, some of which you could read here over the past few days, show that there is so much to notice that it simply has to be put on paper, or into a file. Normally I write in the morning, shortly after getting up, or in the evening in the hotel, at the airport while waiting for my flight, on the train to somewhere.

"But how does he do it?", I hear people ask. The same thing happens to a good friend of mine with photography. He takes photos. Constantly. Another one paints. I write – I don't ponder while doing it, I just write. Sometimes it turns out good, sometimes very good. By now it is always usable – but that is practice. Is it fun? Endlessly.

Try it yourselves, I can only recommend it. Simply stop thinking about what you want to write, and write.

Related posts:

  1. Führung ist?
  2. Über das Schreiben: Leidenschaft | Passion | Freewriting
  3. Bin ich am Arbeitsplatz zufrieden?

Categories: Blogs

Listen, Test, Code, Design OVER Sprints, Scrums, and ScrumMasters

"Back to basics" is Scrum?I've been noticing people talk about getting "back to the basics" and then proceed to talk about Scrum roles and rituals.

This annoys me for 2 main reasons:
  1.  Scrum was never "basics" for me and I've typically been doing this longer than the person who suggests this
  2. The more important reason is that if we think about this carefully, Scrum cannot be the "basics"
"Back to basics" should be about the essence of what we are doing"Back to basics", "focusing on the fundamentals", etc. is about getting back to the essence of an activity.  I touched upon this when I was exploring the concept of doctrine but let's think about this using the frame of "basics" or "fundamentals".

If we look at the context of developing software for a purpose, as opposed to as a hobby, what is the essence of what needs to happen?
  1. You need a shared understanding of what problem the software is intended to solve.  We have learned that the best way to do this is to engage directly with the relevant situation and people.
  2. You need a shared understanding of what the solution needs to do to solve the problem.  We have learned that the best way to do this is through conversations leading to agreed examples and then iterating.
  3. You need to build the solution.  We have learned that the best way to do this is in a thoughtful, collaborative, disciplined way.
  4. You need to manage the growing complexity of the system to ensure that it continues to be easy to change.  We have learned that the best way to do this is as an ongoing exercise reflecting the best knowledge we have at each point.
A more compact version of this might be: Listen, Test, Code, Design.
If you don't get good at these basics, all your Sprints, Scrums, and ScrumMasters won't matter much.
Categories: Blogs

Measuring Business value in Agile projects

Agile World - Venkatesh Krishnamurthy - Sun, 08/24/2014 - 01:44


Because the first principle of the Agile Manifesto talks about delivering valuable software to the customer, many agile practitioners are constantly mindful of value at each step of the software-development lifecycle.

At the thirty-thousand-foot level, value creation starts with gathering requirements and continues with backlog creation, backlog grooming, writing user stories, and development, finally ending with integration, deployment, and support. Even with knowledge of all these moving parts, it is common to see organizations only measuring value during development and ignoring the rest of the steps.

What’s the fix? During backlog creation, user stories need to be compared and contrasted in order to promote maximum value delivery. The product owner might need to use different techniques, such as T-shirt sizing, in order to better prioritize the project’s stories.

An alternate approach to measuring the business value of user stories is to use a three-dimensional metric that incorporates complexity, business value, and ROI. Creating value can often require a change in perspective from the normal project tasks and functions. Thinking outside the box and identifying business value before writing the user stories is much better than writing them first and then trying to evaluate their value.

Read the complete article about measuring business value on TechWell

Picture courtesy https://flic.kr/p/8E7Dr5

Categories: Blogs

Xebia IT Architects Innovation Day

Xebia Blog - Sat, 08/23/2014 - 18:51

Friday, August 22nd was Xebia’s first Innovation Day. We spent a full day experimenting with technology. I helped organize the day for XITA, Xebia’s IT Architects department (hmm, department doesn’t feel quite right to describe what we are, but anyway). Innovation days are intended to inspire as well as educate. We split up in small teams, each focused on a particular technology. Below is a list of project teams:

• Docker-izing enterprise software
• Run a web application highly available across multiple CoreOS nodes using Kubernetes
• Application architecture (team 1)
• Application architecture (team 2)
• Replace Puppet with Salt
• Scale "infinitely" with Apache Mesos

In the coming weeks we will publish what we learned in separate blogs.

First Xebia Innovation Day

Categories: Companies

New Foundations 3.0 Webinar

Agile Product Owner - Sat, 08/23/2014 - 16:26

Hi,

We’ve just posted an updated introductory webinar – SAFe Foundations: Be Agile. Scale Up. Stay Lean. – at ScaledAgileFramework/foundations. “Foundations” is the free PowerPoint briefing (download from the same page) that you can use in most any context to describe SAFe to your intended audience.

In this 45-minute webinar, I walk through the Foundations PPT and describe:

  • The rationale for Agile and SAFe
  • A bit of SAFe history
  • SAFe core values
  • Business benefits enterprises have achieved with SAFe
  • Lean Thinking overview
  • A brief overview of SAFe Team, Program, and Portfolio levels
  • Introduction to the Principles of Lean Leadership
  • Next Steps and Implementation 1-2-3 Guidance

Thanks to Jennifer Fawcett for hosting the event.

Categories: Blogs

Why Iterative Planning?

Leading Agile - Mike Cottmeyer - Fri, 08/22/2014 - 17:40

First, I would like to credit Eric Ries and his 2010 Web 2.0 speech for giving me the idea for these awesome graphics. If you have never seen the speech, I highly recommend the version found on YouTube. I have always admired people with creative slides who can capture ideas with elegant simplicity. Since my artistic ability peaked in the first grade, the images in this post demonstrate my foray into abstract expressionism and hopefully convey why we in software need iterative planning.

Unknown Problem | Unknown Solution

Most software changes start life in the state of an unknown problem with an unknown solution. Now the product managers reading this may beg to differ, but most of the time a vague idea of having the software do something is not a known problem space. Say for instance I want to allow uninsured people to buy insurance at a government-subsidized rate. Most of us can imagine that this is a huge problem space and truly we would have no idea how to make this happen. In this case both the problem space and the solution space are unknown. In order to plan a software delivery that will solve the want above, I need to clearly understand the problem that needs to be solved. To do this in agile software delivery we create something called a roadmap. The roadmap is a way of breaking this big unknown problem into smaller chunks that we can estimate (“guess wrong”) as to how long it will take to implement them. It is usually at this stage that these chunks of work are funded.

Known Problem | Unknown Solution

Now a software release is ready to be planned with some chunk of the roadmap. In order to do that, the problem should be fairly well known so it can be broken into pieces. These pieces can be estimated (“guessed wrong”) and slotted into delivery iterations. Let’s say we want to allow people to log into a website and sign up for insurance. This is a relatively well-known problem space: there are security concerns, third-party integrations, databases, platforms and deployments. Maybe this will not all fit in one release, but with more elaboration and planning a reasonable release plan with a list of risks will emerge. It is usually at this stage that the guess of the size of the thing in the roadmap is known to be wrong and changes must be made to the roadmap.

Known Problem | Known Solution

Finally we are ready to plan an iteration. Take a chunk of the release plan and break it into pieces; as a team there needs to be some certainty that these pieces of work can be completed in the sprint. If there are still things that don’t have a clear solution, don’t take those into the sprint; take a spike or research item instead. It is now that the wrongness of the guess made during release planning is known, and adjustments can be made both to the release plan and the roadmap.

Planning and elaboration go hand in hand as items move from unknown problem / unknown solution, to known problem / unknown solution, to known problem / known solution.

The post Why Iterative Planning? appeared first on LeadingAgile.

Categories: Blogs

GOAT14 – Call for Speakers

Notes from a Tool User - Mark Levison - Fri, 08/22/2014 - 15:41

This year’s Gatineau Ottawa Agile Tour (#GOAT14) will take place on Monday, November 24th 2014, and Agile Pain Relief Consulting is once again a proud sponsor. Organizers are looking for engaging and inspirational speakers for this year’s conference. If you are interested in participating, please submit a proposal by completing the online form at http://confengine.com/gatineau-ottawa-agile-tour-2014. The organizing committee will select speakers based on the following criteria:

  • Learning potential for and appeal to participants
  • Practicality and usefulness/applicability of content to the workplace
  • Overall program balance
  • Speaker’s experience and reputation
  • Interactive elements (i.e. exercises, simulations, questions…)

Deadline for proposals: Sunday September 15th at 23:59

About the Gatineau – Ottawa Agile Tour
The Gatineau – Ottawa Agile Tour (#GOAT14) is a full day of conferences around the theme of Agility applied to software development, but also to management, marketing, product management and other areas of today’s businesses.

Categories: Blogs

Neo4j: LOAD CSV – Handling empty columns

Mark Needham - Fri, 08/22/2014 - 14:51

A common problem that people encounter when trying to import CSV files into Neo4j using Cypher’s LOAD CSV command is how to handle empty or ‘null’ entries in said files.

For example, let’s try to import the following file, which has 3 columns: 1 populated, 2 empty:

$ cat /tmp/foo.csv
a,b,c
mark,,
We might load it with the following query:

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
SET p.b = row.b, p.c = row.c
RETURN p

When we execute that query we’ll see that our Person node has properties ‘b’ and ‘c’ with no value:

==> +-----------------------------+
==> | p                           |
==> +-----------------------------+
==> | Node[5]{a:"mark",b:"",c:""} |
==> +-----------------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 3
==> Labels added: 1
==> 26 ms

That isn’t what we want – we don’t want those properties to be set unless they have a value.

To achieve this we need to introduce a conditional when setting the ‘b’ and ‘c’ properties. We’ll assume that ‘a’ is always present, as that’s the key for our Person nodes.

The following query will do what we want:

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
FOREACH(ignoreMe IN CASE WHEN trim(row.b) <> "" THEN [1] ELSE [] END | SET p.b = row.b)
FOREACH(ignoreMe IN CASE WHEN trim(row.c) <> "" THEN [1] ELSE [] END | SET p.c = row.c)
RETURN p

Since there are no if or else statements in Cypher, we create our own conditional by using FOREACH. If there’s a value in the CSV column then we’ll loop once and set the property; if not, we won’t loop at all and therefore no property will be set.

==> +-------------------+
==> | p                 |
==> +-------------------+
==> | Node[4]{a:"mark"} |
==> +-------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 1
==> Labels added: 1
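As an alternative sketch (not from the original post): because setting a property to null in Cypher removes it (and never creates it), a CASE expression can express the same conditional without FOREACH. Note that, unlike the FOREACH version, this would also remove a previously set value when the column is empty:

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
SET p.b = (CASE WHEN trim(row.b) <> "" THEN row.b ELSE null END),
    p.c = (CASE WHEN trim(row.c) <> "" THEN row.c ELSE null END)
RETURN p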
Categories: Blogs

You shall not pass – Control your code quality gates with a wizard – Part III

Danube - Fri, 08/22/2014 - 13:25

If you read the previous blog post in this series, you should already have a pretty good understanding of how to design your own quality gates with our wizard. When you finish reading this one, you can call yourself a wizard too. We will design a very powerful policy consisting of quite complex quality gates. All steps are first performed within the graphical quality gate wizard. For those of you who are interested in what is going on under the hood, we will also show the corresponding snippets of the XML document generated by the wizard. You can safely ignore those details if you do not intend to develop your own tooling around our quality gate enforcing backend. If you do play with this thought, though, we will also show you how to deploy quality gates specified in our declarative language without using our graphical wizard.

Your reward – The Power Example

Power example with six quality gates

Before we reveal the last secrets of our wizard and the submit rule evaluation algorithm, you would probably like to know the reward for joining us. The policy we are going to design consists of the following steps:

1. At least one user has to give Code-Review +2; authors cannot approve their own commits (their votes will be ignored)

2. Code-Review -2 blocks submit

3. Verified -1 blocks submit

4. At least two CI users (belonging to Gerrit group CI-Role) have to give Verified +1 before a change can be submitted

5. Only team leads (a list of Gerrit users) can submit

6. If a file called COPYRIGHT is changed within a commit, a Gerrit group called Legal has to approve (Code-Review +2) the Gerrit change

The final policy can be downloaded from here. Please note that it will not work out of the box for you, as your technical group ids for the Legal and CI groups as well as the concrete user names for team leads will differ. We will guide you step by step to a result that fits your specific situation.

Starting with something known – Gerrit’s Default Submit Policy


Looking at steps 1, 2 and 3, you probably realized that they are quite similar to Gerrit’s default submit policy. Because of that, let’s start by loading the template Default Gerrit Submit Policy. Once you see the first tab of the editor that opens, adjust the name and description as shown in the screenshot below.

[Screenshot: adjusting the policy name and description]

If you now switch to the Source tab (the third one), you can see the XML the wizard generated for the default policy:

[Screenshot: XML generated for the default policy]

The XML-based language you see here is enforced by our Gerrit Quality Gate backend. We believe that this language is way easier to learn than writing custom Prolog snippets (the default way of customizing Gerrit’s submit behavior). Furthermore, it exposes some features of Gerrit (like user group info) which are not exposed as Prolog facts. Our Quality Gate backend is implemented as a Gerrit plugin that contributes a custom Prolog predicate, which in turn parses the XML-based language and instructs Gerrit’s Prolog engine accordingly. This amount of detail is probably only relevant to you if you intend to mix your own Prolog snippets with policies generated by our wizard.

The schema describing our language can be found here. Looking at the screenshot above, you can clearly see that the XML top element GerritWorkflow contains all settings of the first tab of our wizard. You have probably spotted the attributes for name, description, enableCodeReview and enableVerification. The latter two store whether to present users with the ability to vote on the Code-Review/Verified categories (given appropriate permissions).

The only child elements accepted by the GerritWorkflow element are SubmitRule elements. You can clearly see the three submit rules of the default policy, which we covered in detail in our second blog post. Let’s examine the first submit rule, named Code-Review+2-And-Verified-To-Submit. If all its voting conditions are satisfied, it will be evaluated to allow, making submit possible if no other rule gets evaluated to block. As this rule has no value specified for its actionIfNotSatisfied attribute, it will evaluate to ignore if not all of its voting conditions are satisfied. Talking about voting conditions, you can see two VotingCondition child elements. The first one is satisfied if somebody gave Code-Review +2, the second one if somebody gave Verified +1. The second SubmitRule element maps directly to step 2 of our power example (Code-Review -2 blocks submit), the third one directly to step 3 (Verified -1 blocks submit).
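For reference, the generated XML should look roughly like the following sketch. It is reconstructed from the description above: the element names, displayName values and action attributes are confirmed by the text, while the attribute names holding the voting category and vote value are assumptions.

<cn:GerritWorkflow name="My-Default-Policy" description="..." enableCodeReview="true" enableVerification="true">
  <cn:SubmitRule actionIfSatisfied="allow" displayName="Code-Review+2-And-Verified-To-Submit">
    <!-- votingCategory/value attribute names are assumed -->
    <cn:VotingCondition votingCategory="Code-Review" value="2"/>
    <cn:VotingCondition votingCategory="Verified" value="1"/>
  </cn:SubmitRule>
  <cn:SubmitRule actionIfSatisfied="block" displayName="Code-Review-Veto-Blocks-Submit">
    <cn:VotingCondition votingCategory="Code-Review" value="-2"/>
  </cn:SubmitRule>
  <cn:SubmitRule actionIfSatisfied="block" displayName="Verified-Veto-Blocks-Submit">
    <cn:VotingCondition votingCategory="Verified" value="-1"/>
  </cn:SubmitRule>
</cn:GerritWorkflow>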

Ignore author votes by introducing a voting filter


Let’s modify the first submit rule so that it matches the first step of our power example policy:

“At least one user has to give Code-Review +2; authors cannot approve their own commits (their votes will be ignored)”

For this, we first switch to the second tab of our wizard (Submit Rules) and double-click on the first submit rule. Right after, we double-click on the first voting condition (Code-Review) and check the Ignore author votes checkbox in the dialog that opens; see the screenshot below.

[Screenshot: the Ignore author votes checkbox in the voting condition dialog]

Once we save this change (press Finish in the two dialogs) and switch back to the Source tab, we can see that the XML of the first submit rule has changed:

[Screenshot: updated XML of the first submit rule]

The first VotingCondition element now has a VoteAuthorFilter child element. This one has its ignoreAuthorVotes attribute set to true, which in turn will make sure that only votes of non-authors are taken into consideration when this voting condition gets evaluated. You will also notice the ignoreNonAuthorVotes attribute. With that one, it would be possible to turn the condition around (if set to true) and ignore all but the author’s votes. If both attributes are set to true, all votes will be ignored. Voting conditions always apply to the latest change set of the Gerrit change in question.
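In XML terms, the modified voting condition should now look roughly like this sketch (the attribute names for category and value are assumed, as before):

<cn:VotingCondition votingCategory="Code-Review" value="2">
  <!-- count only votes from non-authors; setting ignoreNonAuthorVotes="true" instead would invert this -->
  <cn:VoteAuthorFilter ignoreAuthorVotes="true"/>
</cn:VotingCondition>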

Adding a group filter to the verified voting condition


Now that we have realized step 1 of our power example, and steps 2 and 3 can be left unmodified from the default policy, let’s focus on step 4:

“At least two CI users (belonging to Gerrit group CI-Role) have to give Verified +1 before a change can be submitted”.

This can be achieved by modifying the second voting condition (Verified) of the first submit rule. This time we do not ignore Verified votes from authors (we could, by just checking the same box again); instead we add a group filter and a count filter.

[Screenshot: group and count filter settings for the Verified voting condition]

As shown in the screenshot above, enter 2 into the Vote Count Min field and add the Gerrit group of your choice that represents your CI users. The wizard allows you to select TeamForge groups, TeamForge project roles and internal Gerrit groups.

If we finish the dialogs and switch back to the Source tab, we can see that the second voting condition of our first submit rule has changed:

[Screenshot: updated XML of the Verified voting condition]

Two filters appeared: one VoteVoterFilter and one VoteCountingFilter. The first one makes sure that only votes cast by the CI_ROLE (we chose TeamForge project role role1086 here) are recognized when evaluating the surrounding VotingCondition.

The second filter is a counting filter. Counting and summing filters are applied after all other filters within the same VotingCondition have already been applied. In our case, it will be applied after all votes which

a) do not fit into the voting category Verified (votingCategory attribute of the parent element)

b) do not have verdict +1 (value attribute of the parent element)

c) have not been cast by a user who is part of the CI_ROLE (see the paragraph above)

have been filtered out.

After that, our VoteCountingFilter will only match if at least two (minCount attribute) votes are left. If this is not the case, the surrounding VotingCondition will not be satisfied and, as a consequence, its surrounding SubmitRule will not be satisfied either.
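Putting the pieces together, the Verified voting condition with both filters should look roughly like this sketch (minCount is named in the text; the VoteVoterFilter attribute holding the group id is an assumption):

<cn:VotingCondition votingCategory="Verified" value="1">
  <!-- only votes cast by members of the CI role are considered; attribute name assumed -->
  <cn:VoteVoterFilter group="role1086"/>
  <!-- applied after all other filters: at least two matching votes must remain -->
  <cn:VoteCountingFilter minCount="2"/>
</cn:VotingCondition>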

Introducing SubmitRule filters


So far, we have only talked about voting conditions and their child filter elements. Sometimes you do not want an entire submit rule to be evaluated if a certain condition is not fulfilled. Our second blog post already used a submit rule filter for a rule that should only be evaluated if a commit was targeted at the experimental branch.

Step 5 of our power policy is another example: “Only team leads (a list of Gerrit users) can submit”

We will add a filter to our first submit rule that will make sure it only gets evaluated if a team lead looks at the Gerrit change. As we only have three submit rules so far, and the first one is the only one which can potentially be evaluated to allow, it is sufficient to add this filter to the first one. To do that, we switch back to the Submit Rules tab, double-click on the first submit rule and click on the Next button in the dialog that opens. After that, you can see four tabs grouping all available submit rule filters. You probably remember those tabs from the second blog post, where the values for those filters were automatically populated based on the characteristics of an existing Gerrit change (more precisely, its latest change set).

This time, we will manually enter the filter values we need. Let’s switch to the User tab and select the accounts of your team leads. In the screenshot below you can see that we chose the accounts of eszymanski and dsheta as team leads.

[Screenshot: selecting team lead accounts in the User tab]

Once you have selected your own team leads (our wizard makes it possible to interactively select any TeamForge user or internal Gerrit account), let’s click on Back and finally adjust the display name of our submit rule to reflect its new meaning: Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit

If we finish the dialog and switch back to the Source tab, you can see that our first submit rule has not only changed its displayName but also got a new child element:

[Screenshot: submit rule XML with the new UserFilter element]

The UserFilter element makes sure that the surrounding submit rule will only be evaluated if at least one of its CurrentUser child elements matches the user currently looking at the Gerrit change.
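A sketch of that new child element (the element names UserFilter and CurrentUser are from the text; the attribute holding the account name is an assumption):

<cn:UserFilter>
  <!-- the surrounding rule is only evaluated if the current user matches one of these accounts -->
  <cn:CurrentUser username="eszymanski"/>
  <cn:CurrentUser username="dsheta"/>
</cn:UserFilter>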

If there are multiple submit rule filters, all of them have to match for their surrounding submit rule to be evaluated. You may ask what happens if no submit rule can be evaluated because none of them has matching filters. In that case, submit will be blocked and a corresponding message displayed in Gerrit’s Web UI. The same will happen if you have not defined any submit rule at all. As always, you can test your submit rules directly in the wizard against existing changes before deploying.

Providing guidance to your users with display only rules


Before we design a submit rule for the final step (6), let’s recall the submit rule evaluation algorithm and consider what will happen if a non-team-lead looks at a Gerrit change under our current policy. Quoting from blog post two:


a) For every submit rule that can be evaluated, figure out whether its voting conditions are satisfied (if a submit rule does not have a voting condition, it is automatically satisfied)

b) If all voting conditions are satisfied for a submit rule, the rule gets evaluated to the action specified in the actionIfSatisfied field (ignore if no value is set); otherwise the rule gets evaluated to the action specified in the actionIfNotSatisfied field

c) If any of the evaluated submit rules got evaluated to block, submit will be disabled and the display name of all blocking rules displayed in Gerrit’s UI as reason for this decision

d) If no evaluated submit rule got evaluated to block but at least one to allow, submit will be enabled

e) If all evaluated rules got evaluated to ignore, submit will be disabled and the display names of all potential submit rule candidates displayed

As our first submit rule (Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit) has a submit rule filter which will not match if you are not a team lead, this rule will not be evaluated. This leaves us with submit rules two (Code-Review-Veto-Blocks-Submit) and three (Verified-Veto-Blocks-Submit). Neither of those submit rules has a submit rule filter, so they will always be evaluated. Both rules have one voting condition, checking whether there is any Code-Review -2 or Verified -1 vote. If the corresponding voting condition is satisfied, the surrounding submit rule will be evaluated to block, blocking submit and showing its display name as the reason within Gerrit’s Web UI.

Let’s pretend nobody has vetoed our Gerrit change so far. In that case, all evaluated rules will be evaluated to ignore and the final step (e) of our algorithm will kick in. Submit will be disabled and the display names of all potential submit rule candidates, in other words all evaluated submit rules which can potentially be evaluated to allow, will be shown. In our case there are no potential submit rule candidates though, as the only submit rule which can potentially evaluate to allow is submit rule one. That rule was not evaluated, because its submit rule filter did not match (no team lead was looking at the change). As a result, Gerrit can only show a very generic message why submit is not possible, leaving non-team leads confused about what to wait for.

How to give guidance under those circumstances? Should we just modify our algorithm and also display the display names of submit rules that did not get evaluated? Probably not. Imagine you have a secret quality gate for a group called Apophenia who can bypass other quality gates if they commit to the enigma branch and the number of lines added to the commit is 23 (for anybody who does not know what I am talking about, I can really recommend this movie).

The corresponding submit rule would have submit rule filters making sure that the rule only gets evaluated for that particular branch, commit stats and user group. As long as those filters are not matched, the display name of the surrounding submit rule must not be revealed under any circumstances. We are sure you can imagine a more business-like scenario with similar characteristics.

Fortunately, there is a way to guide users under those circumstances: display-only rules.

Display-only rules are submit rules without any voting conditions and without submit rule filters. Consequently, they are always evaluated and always satisfied. They do not have any value (not even ignore) set for their actionIfSatisfied attribute though. Hence, they will never influence whether submit is enabled or not (that’s why they are called display only, after all). Their actionIfNotSatisfied attribute is set to allow. This makes them potential submit rule candidates. In other words, their display names will always be shown whenever no other submit rule allows or blocks submit, providing perfect guidance.

In our particular example, we will create a display-only rule with display name Team-Lead-To-Submit, which will give all non-team leads guidance on why they cannot submit although nobody vetoed the change.

At this point, we would like to demonstrate another cool feature of the Source tab: it is bidirectional, so you can also modify the XML and your changes will be reflected in the first and second tabs of our wizard. Let’s paste our display-only rule as a child element of the GerritWorkflow element:

<cn:SubmitRule actionIfNotSatisfied="allow" displayName="Team-Lead-To-Submit"/>

If you switch back to the Submit Rules tab, it should look like this:

[Screenshot: Submit Rules tab including the display-only rule]

You probably noticed that this is the first time we used the Not Satisfied Action field, admittedly for a quite exotic use case, namely display-only rules. The final step in our power policy will hopefully demonstrate a more common use for this field.

Not Satisfied Action for Exception Driven Rules


Step 6 of our power policy is an example of what we call an exception-driven rule:

“If a file called COPYRIGHT is changed within a commit, a Gerrit group called Legal has to approve (Code-Review +2) the Gerrit change”

Why exception driven? Well, having somebody from Legal approve a change is not sufficient by itself to enable submit, so having a separate submit rule with actionIfSatisfied set to allow is not the answer. Should we then just add legal approval as a voting condition to all submit rules which can potentially enable submit? That is probably not a good idea either: not every commit has to be approved by Legal, only the ones changing the COPYRIGHT file.

Hence the best idea is to keep the existing submit rules unmodified and add a new submit rule which will

I) if evaluated, check whether Legal has approved the change and, if not, block submit (exception driven)

II) only be evaluated if Legal actually has to approve the change (i.e. if the COPYRIGHT file changed)

Let’s tackle I) first by creating a new submit rule (push the Adding Rule Manually button) with display name Legal-To-Approve-Changes-In-Copyright-File and setting its Not Satisfied Action to block.

(Screenshot: the new submit rule with Not Satisfied Action set to block)

If we kept our new submit rule like this, it would not block a single change, as it does not have any voting condition (and hence would always be satisfied). So let’s add a voting condition that requires a Gerrit group called Legal to give Code-Review +2. The screenshot below shows what this condition should look like. In our case, Legal is a TeamForge user group (group1008).

(Screenshot: voting condition requiring Code-Review +2 from the Legal group)

In its current state, the rule would block every change that does not satisfy our new voting condition.

Implementing II) will make sure we only evaluate this submit rule (and its voting condition) if the corresponding commit changed the COPYRIGHT file. To do that, we have to click Next and switch to the Commit Detail tab, which contains all submit rule filters matching characteristics of the commit associated with the evaluated change. The only field to fill in is the Commit delta file pattern. Its value has to be set to ^COPYRIGHT, as shown in the screenshot below.

(Screenshot: Commit delta file pattern set to ^COPYRIGHT)

Why ^COPYRIGHT and not just COPYRIGHT? If a filter name does not end with Pattern, it only matches exact values. If a filter name does end with Pattern, the matching behavior depends on the field value.

If the field value starts with ^, it is treated as a regular expression: ^COPYRIGHT will match any file change list that contains COPYRIGHT somewhere. If the field value does not start with ^, it is treated as an exact value: had we entered just COPYRIGHT, it would only have matched commits where the COPYRIGHT file (and no other file) got changed. Keep this logic in mind whenever you deal with pattern filters. Branch filters and commit message filters are other prominent examples where using a regular expression is probably better than an exact value.
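
As a sketch of the difference (the CommitDetailFilter element is named later in this post; the attribute name is our guess, derived from the Commit delta file pattern field):

<!-- Exact value: matches only commits whose file change list is exactly COPYRIGHT. -->
<cn:CommitDetailFilter commitDeltaFilePattern="COPYRIGHT"/>
<!-- Regular expression (leading ^): matches any file change list containing COPYRIGHT. -->
<cn:CommitDetailFilter commitDeltaFilePattern="^COPYRIGHT"/>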

Once we finish the dialogs and switch to the Source tab, we can see the XML for our new submit rule:

(Screenshot: generated XML for the new submit rule)

The actionIfNotSatisfied attribute is set to block, and we have one submit rule filter (CommitDetailFilter) and one voting condition with a filter (VoteVoterFilter).
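
For readers who cannot see the screenshot, here is a rough sketch of what the generated XML might look like. CommitDetailFilter and VoteVoterFilter are named in this post; the remaining element and attribute names are our guesses:

<!-- Sketch only: VotingCondition and all attributes except the display name are guesses. -->
<cn:SubmitRule actionIfNotSatisfied="block" displayName="Legal-To-Approve-Changes-In-Copyright-File">
  <!-- Submit rule filter: evaluate this rule only if the COPYRIGHT file changed. -->
  <cn:CommitDetailFilter commitDeltaFilePattern="^COPYRIGHT"/>
  <!-- Voting condition: satisfied once somebody from Legal (group1008) gave Code-Review +2. -->
  <cn:VotingCondition category="Code-Review" verdict="+2">
    <cn:VoteVoterFilter group="group1008"/>
  </cn:VotingCondition>
</cn:SubmitRule>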

Congratulations, you have successfully designed the power policy and can now test and deploy it!

(Screenshot: power example with six quality gates)

Learning more about the XML based quality gate language

Although you have seen quite a bit of our XML based language so far, we fully realize that we have not shown you every single feature. We do not believe this is necessary though, as our graphical wizard supports all features of the language. If you are unsure how a certain filter works, just create an example with the wizard, switch to the Source tab and see how it is done properly. Our schema is another great resource: it is fully documented and will make sure that you do not produce an invalid XML document. Last but not least, our wizard ships with many predefined templates, and we tried to cover every single feature of the language within those templates.

For those of you who are familiar with Gerrit’s Prolog cookbook: we translated all of its Prolog examples into our declarative language and were able to cover the entire functionality they demonstrate. The results can be found here.

As always, if you have any questions regarding the language, also feel free to drop a comment on this blog.

How to deploy quality gates without the graphical wizard

As explained before, our Quality Gate enforcing plugin ties into Gerrit’s Prolog based mechanism to customize its submit behavior. Gerrit expects the current submit rules in a Prolog file called rules.pl in a special ref called refs/meta/config. The deployment process for rules.pl is explained here.

Whenever our wizard generates a rules.pl file, it makes use of a custom Prolog predicate called cn:workflow/2 which is provided by our Quality Gate enforcing plugin. This predicate has two arguments: the first one takes the XML content as is, the second one will be bound to the body of Gerrit’s submit_rule/1 predicate. In a nutshell, the generated rules.pl looks like this:

submit_rule(Z) :- cn:workflow('<XML document describing your quality gate policy>', Z).

Our wizard does not use any other Prolog predicates. You can use our predicate as part of your own Prolog programs if you decide to come up with your own tooling and generate rules.pl yourself. While passing the XML content, make sure it does not contain any character which would break Prolog quoting: no ' characters and no newlines; XML-encode them instead. Our graphical wizard takes care of this step.
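
As a minimal sketch, assuming the root element is cn:GerritWorkflow (the cn: prefix on the root is our guess, and we omit any namespace declaration the real format may require), a hand-written rules.pl embedding the display only rule from above as a one-line payload could look like this:

submit_rule(Z) :- cn:workflow('<cn:GerritWorkflow><cn:SubmitRule actionIfNotSatisfied="allow" displayName="Team-Lead-To-Submit"/></cn:GerritWorkflow>', Z).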

Final words and Call for Participation

If you made it through the entire third blog post, you can proudly call yourself a wizard, too.

Designing quality gates from scratch can be a complex matter. Fortunately, our wizard comes with many predefined templates you can just deploy. In addition, we turned every example from the Prolog cookbook into our format. If you are unsure how to match a certain state of a Gerrit change, just use the built-in functionality of our wizard to turn it into a submit rule and adapt it to your needs. Before you deploy, you can always simulate your quality gates within the wizard. It will follow the submit rule evaluation algorithm step by step and show the evaluation result for every rule. If you do not like our wizard and do not like Prolog either, feel free to use our XML based language independently. This blog post has demonstrated how to do that.

Talking about the XML based language: its specification is open source. We encourage you to build your own wizard or other frontends, and we will happily assist if you have any questions regarding its functionality. Gerrit’s functionality to customize submit behavior is unmatched in the industry. We hope that with our contributions we have made it a little easier to tap into it.

Coming up with the wizard, the language and our backend was a team effort: about half a dozen people worked for two months to get to the current state. We would like to know from you whether it is worth investing further in this area. Do you want more examples? Better documentation? A tutorial video? A Web UI based wizard? Is performance not right? Can you not express the rules you would like to express? Do you want to use the feature with vanilla Gerrit?

Please, spread the word about this new feature and give us feedback!

The post You shall not pass – Control your code quality gates with a wizard – Part III appeared first on blogs.collab.net.

Categories: Companies

R: Rook – Hello world example – ‘Cannot find a suitable app in file’

Mark Needham - Fri, 08/22/2014 - 13:05

I’ve been playing around with the Rook library and struggled a bit getting a basic Hello World application up and running so I thought I should document it for future me.

I wanted to spin up a web server using Rook and serve a page with the text ‘Hello World’. I started with the following code:

library(Rook)
s <- Rhttpd$new()
 
s$add(name='MyApp',app='helloworld.R')
s$start()
s$browse("MyApp")

where helloworld.R contained the following code:

function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Unfortunately that failed on the ‘s$add’ line with the following error message:

> s$add(name='MyApp',app='helloworld.R')
Error in .Object$initialize(...) : 
  Cannot find a suitable app in file helloworld.R

I hadn’t realised that you actually need to assign that function to a variable ‘app’ in order for it to be picked up:

app <- function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Once I fixed that, everything seemed to work as expected:

> s
Server started on 127.0.0.1:27120
[1] MyApp http://127.0.0.1:27120/custom/MyApp
 
Call browse() with an index number or name to run an application.
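
For completeness: once you are done experimenting, the Rhttpd object can also tidy up after itself. Assuming the remove() and stop() methods behave as described in the Rook documentation, that looks like this:

> s$remove("MyApp")
> s$stop()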
Categories: Blogs

The Question of Why

Scrum 4 You - Fri, 08/22/2014 - 07:30

As a ScrumMaster and Agile Consultant, I open every Scrum meeting by asking the participants the same open question: “Why is this meeting part of the Scrum flow?” Mind you, I do not ask this only when I have just taken over a team or when we are redesigning a meeting. For me, this question has a positive aspect that bears repeating again and again: that of continuous learning and of asking about purpose.

One thing Scrum has taught me: everything has a purpose. And if something really has no purpose, then it is waste and should promptly be changed or abolished! Since we at Boris Gloger Consulting are called more and more often by managers who want to implement Scrum in their companies, the term “Scrum” has become synonymous with “latest trend” for many employees. Long-serving employees in particular sometimes smile at me and say: “You have no idea how many processes we have seen come and go here. This Scrum will be the next one.” I refuse to accept that. Yes, working agile is currently trendy (see the “Agile Status Quo” study). But there is also a good reason for it!

For Scrum to be perceived and lived as more than a trend, it is important that the people who are supposed to work with it recognize the purpose behind it. And that is why I ask the question of “why” at the beginning of every (Scrum) meeting. Most recently, again at the start of the Sprint Retrospective of a cross-functional team that has been doing Scrum in hardware for a year. A small tip on the side: listen closely. The first answer usually answers the what. This time, too, the answer was: “We look at what went well in the last sprint and what we want to do differently in the next sprint.” Yes, that is correct. But does it answer my question of why? No. So ask again: “Why are we sitting in this meeting right now?”

A nice side effect of this open question is that it allows for cynicism and funny comments. That way you can start a meeting with a laugh. Or clear up concerns. Straighten out false interpretations. Get an insight into the mood of the team. And, even as an Agile Consultant, keep learning something new.

Try it yourself! I look forward to hearing about your experiences.

Related posts:

  1. The Retrospective Makes the Team, the Team Makes the Retrospective
  2. Scrum – Against Fixation on Methods
  3. Classic | Sprint Planning

Categories: Blogs

The Agile Reader – Weekend Edition: 08/22/2014

Scrumology.com - Kane Mar - Fri, 08/22/2014 - 05:43

You can get the Weekend Edition delivered directly to you via email by signing up here.

The Weekend Edition is a list of some interesting links found on the web to catch up with over the weekend. It is generated automatically, so I can’t vouch for any particular link but I’ve found the results are generally interesting and useful.

  • What’s going on? Agile and Scrum Certification Online Free Webinar On How are Agile… http://t.co/kZzOXtsgdn
  • RT @magenic: How do you define success in an #agile environment? #scrum
  • “@bfavellato: Tutorials, Practices & Demos: IBM Rational Solution for #Agile ALM with Scrum @JazzDotNet @JazzHub”
  • RT @pisarose: Great ideas! A scrum approach to #content creation – @shellykramer #marketing #agile
  • 8 Ways to Avoid Making an #Agile Mistake #agiledevelopment #scrum
  • Interesting reading for the next week #agile #scrum #gamedev
  • RT @apuntoprieto: Interesting reading for the next week #agile #scrum #gamedev
  • RT : Interesting reading for the next week #agile #scrum #gamedev (http://clintonkeith.com/agd.html)
  • Keep it Simple: What is Agile SCRUM: #scrum #agile
  • How to help a team that is not performing so well – Part I – #scrum #agile #learning #improvement
  • RT @AgileBelgium: #Agile Tour Brussels 2014: program published, registration open. #atbru #program #published #scrum…
  • RT @yochum: Scrum Expert: Patterns: a New Standard for Scrum #agile #scrum
  • Read this #Kindle #8399

    The Scrum Checklist, For the Agile Scrum Master, Product Owner,… http://t.co/yUjUOdOomR

  • Read this #Kindle #8399

    How to Become a Scrum Master in 7 Simple Steps (Agile Project M… http://t.co/oao1BBUY4w

  • Read this #Kindle #8399

    Scrum, (Mega Pack), For the Agile Scrum Master, Product Owner, … http://t.co/Fu8olhfO5O

  • Interested in this job? Eliassen Group Agile Coaching & Senior Scrum Master Training in Bethesda, MD #agilecoach
  • Agile Transformation Program Manager – Scrum Master – 8188 #job
  • Agile Scrum isn’t a silver bullet solution for s/w development, but it can be a big help. #AppsTrans #HP
  • Using personas to drive epic & user story development: by @romanpichler #prodmgmt #agile #scrum
  • RT @lgoncalves1979: How to help a team that is not performing so well – Part I – #scrum #agile #learning #improvement
  • Are you thinking about getting a Scrum Master Certification? #agile #scrum
  • Here is my half day w/shop presentation on Leading Agile Virtual Teams, delivered at #LEADit #scrum #virtualteams
  • “Scrum is a means of becoming agile…but you can, and should outgrow it if you do it right.” @geoffcwatts #scrumish
  • Here is our half day w/shop presentation on Leading Agile Virtual Teams, delivered at #LEADit #scrum #virtualteams
  • CHECK THIS #BOOK #71084 #Kindle

    The Scrum Checklist, For the Agile Scrum Master, Produc… http://t.co/sHx8Y1K7Y2

  • CHECK THIS #BOOK #71084 #Kindle

    How to Become a Scrum Master in 7 Simple Steps (Agile P… http://t.co/lhocB5NJ2e

  • CHECK THIS #BOOK #71084 #Kindle

    Scrum, (Mega Pack), For the Agile Scrum Master, Product… http://t.co/iGZxmFryVa

  • Check this out: The FREE SCRUM EBOOK as sold on Amazon. #Scrum #Agile inspired by #Ken Schwaber
  • Software – Introduction to scrum training and agile training at #approach #scrum #tortillis #organizations
  • RT @ScrumDan: Everyone has been asking me to sell my user Story Cards. #agile #scrum
  • Read Book : #Kindle #5142 #9: Succeeding with Agile: Software Development Using Scrum

    S… http://t.co/Tz3pnSodf5

  • Read Book : #Kindle #5142 #1: Scrum Shortcuts without Cutting Corners: Agile Tactics, To… http://t.co/exXFblQpVS
  • RT @rhundhausen: One blog to inform them all: High quality #Scrum and #Agile posts by high quality practitioners #pr…
  • Here’s how we built the New – Iterative Agile Development Lessons #scrum #agile #iterative
  • RT @BDCEng: Here’s how we built the New – Iterative Agile Development Lessons #scrum #agile #…
  • Read Now #7153 #Kindle

    The Scrum Checklist, For the Agile Scrum Master, Product Owner, … http://t.co/2u5OQmFWVo

  • Read Now #7153 #Kindle

    How to Become a Scrum Master in 7 Simple Steps (Agile Project Ma… http://t.co/k707gBuGhN

  • Read Now #7153 #Kindle

    Scrum, (Mega Pack), For the Agile Scrum Master, Product Owner, S… http://t.co/UXjx7ILtrh

  • #Kindle #3: Scrum Shortcuts without Cutting Corners: Agile Tactics, Tools, & Tips (Addis… http://t.co/pHtlthstma
  • Books & Deals >> #32033 #Kindle

    Scrum Shortcuts without Cutting Corners: Agile Tactics,… http://t.co/fvfDxQmkBw

  • Paperback Scrum: Need a scrum guide: #scrum #agile
  • Lean, Agile & Scrum Conference – Dave Snowden @beyondreqs #cynefin… http://t.co/qPvMd3KRau
  • RT @techXOcafe: Lean, Agile & Scrum Conference – Dave Snowden @beyondreqs #cynefin… http://t.co/qPvMd3KRau
  • RT @gabrielagill53: Making employees happy = #successful company!! @Happy_Melly #scrum #agile #awesome @WIKISPEED
  • Pega Application Developer to join our #EmployeroftheWeek #securityclearance #Agile #Scrum
  • Agile by McKnight, Scrum by Day is out! Stories via @StratacticalCo @trompouet
  • RT @ClearedJobsDC: Pega Application Developer to join our #EmployeroftheWeek #securityclearance #Agile #Scrum
  • CHECK THIS #BOOK #71084 #Kindle

    Scrum Shortcuts without Cutting Corners: Agile Tactics,… http://t.co/mzo4x9rVBj

  • The Best Books : #9516 #Kindle

    Scrum Shortcuts without Cutting Corners: Agile Tactics, … http://t.co/OGeJkB0zkU

  • What does the Scrum Master actually do in agile projects? – http://t.co/LvrgDthKV7
  • [QUESTION] The Product Owner Says #NoEstimates From the Team. Now what? #Agile #Scrum #PM #pmot
Categories: Blogs

Does the actual experience at your organisation reflect a supporting culture?

One of my favourite books that I've read recently is True Professionalism by David H. Maister. In it, he references another of his books, Managing The Professional Service Firm, about asking junior professionals about their experience on work assignments.

I think this list of questions is generally applicable even outside of professional service firms to assess whether the body language of an organisation actually indicates a supporting culture, independent of what might be claimed:

Is it usually true that...

1. When work is assigned, you understand thoroughly what is expected of you
2. You understand how your tasks fit into the overall objectives of the project, engagement, organisation
3. You are kept informed about the things that you need to know in order to do your job properly
4. You receive good coaching to help improve performance
5. You receive prompt feedback on your work, good or bad
6. You feel that you are a member of a well-functioning team

If you flip this around, you can use it as a checklist for when you take on work:

1. What is expected of me for this work?
2. How do my tasks fit into overall objectives?
3. Do I need to know anything else in order to do this job properly?
4. Who will support me to help improve my performance?
5. How will I get feedback on whether the work is good or bad?
6. Who will be part of my team and how will we interact?

Categories: Blogs

New Sprintly Feature: Change Item Type

sprint.ly - scrum software - Thu, 08/21/2014 - 20:49

One of our top requested Sprintly features is the ability to change an item’s type. Ever file a defect in Sprintly and realize that it should have been a task? Today we’ve shipped this ever useful feature!

Place an item in edit mode via the gear icon, select the new item type and hit Update. In this example, I changed a defect into a task:

(Screenshot: changing a defect into a task via edit mode)

You won’t be able to change a Story into another item type at this point. Stories are unique in that they can have sub-items; tasks, defects and tests cannot.

We hope you enjoy this Sprintly product update and, as always, let us know how we can be of help.

Categories: Companies

Knowledge Sharing

SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP and Kanban.