Feed aggregator

The Hard Thing About Hard Things – Ben Horowitz: Book Review

Mark Needham - Tue, 10/14/2014 - 01:59

I came across ‘The Hard Thing About Hard Things‘ while reading an article about Ben Horowitz’s venture capital firm and it was intriguing enough that I bought it and then read through it over a couple of days.

Although the blurb suggests that it’s a book about building and running a startup, I think a lot of the lessons are applicable to any business.

These were some of the main points that stood out for me:

  • The Positivity Delusion – CEOs should tell it like it is.

    My single biggest improvement as CEO occurred on the day when I stopped being too positive.

    Horowitz suggests that he used to be too positive and would shield his employees from bad news, as he thought he’d make the problem worse by transferring the burden onto them.

    He came to the realisation that this was counterproductive since he often wasn’t the best-placed person to fix a problem, e.g. if it was a problem with the product then the engineering team needed to know so they could write the code to fix it.

    He goes on to suggest that…

    A healthy company culture encourages people to share bad news. A company that discusses its problems freely and openly can quickly solve them. A company that covers up its problems frustrates everyone involved.

    I’ve certainly worked on projects in the past where the view projected by the most senior person is overly positive and ignores problems that are obvious to everyone else. This eventually leaves people unsure whether to take them seriously, which isn’t a great situation to be in.

  • Lead Bullets – fix the problem, don’t run away from it.

    Horowitz describes a couple of situations where his products have been inferior to their competitors and it’s been tempting to take the easy way out by not fixing the product.

    There comes a time in every company’s life where it must fight for its life. If you find yourself running when you should be fighting, you need to ask yourself, “If our company isn’t good enough to win, then do we need to exist at all?”.

    I can’t think of any examples around this from my experience but I really like the advice – I’m sure it’ll come in handy in future.

  • Give ground grudgingly – dealing with the company increasing in size.

    Horowitz suggests that the following things become more difficult as a company grows in size:

    • Communication
    • Common Knowledge
    • Decision Making

    but…

    If the company doesn’t expand it will never be much…so the challenge is to grow but degrade as slowly as possible.

    He uses the metaphor of an offensive lineman in American football who has to stop onrushing defensive linemen by giving ground to them slowly, backing up a little at a time.

    I’ve worked in a few different companies now and noticed things become more structured (and in my eyes worse!) as the company grew over time, but I hadn’t really thought about why that was happening. The chapter on scaling a company does a decent job of explaining it.

  • The Law of Crappy People – people baseline against the worst person at a grade level.

    For any title level in a large organisation, the talent on that level will eventually converge to the crappiest person with that title.

    This is something that he’s also written about on his blog and certainly seems very recognisable.

    His suggestion for mitigating the problem is to have a “properly constructed and highly disciplined promotion process” in place. He describes this like so:

    When a manager wishes to promote an employee, she will submit that employee for review with an explanation of why she believes her employee satisfies the skill criteria required for the level.

    The committee should compare the employee to both the level’s skill description and the skills of the other employees at that level to determine whether or not to approve the promotion.

  • Hire people with the right kind of ambition

    The wrong kind of ambition is ambition for the executive’s personal success regardless of the company’s outcome.

    This suggestion comes from the chapter in which Horowitz discusses how to minimise politics in an organisation.

    I really like this idea but it seems like a difficult thing to judge/achieve. In my experience people often have their own goals which aren’t necessarily completely aligned with the company’s. Perhaps complete alignment isn’t as important unless you’re right at the top of the company?

    He also has quite a neat definition of politics:

    What do I mean by politics? I mean people advancing their careers or agendas by means other than merit and contribution.

    He goes on to describe a few stories of how political behaviour can subtly creep into a company without the CEO meaning for it to happen. This chapter was definitely eye opening for me.

There are some other interesting chapters on the best types of CEOs for different companies, when to hire senior external people, product management, and much more.

I realise that the things I’ve picked out are mostly a case of confirmation bias so I’m sure everyone will have different things that stand out for them.

Definitely worth a read.

Categories: Blogs

Agile Beyond Software

TV Agile - Mon, 10/13/2014 - 21:09
Agile is most closely associated with software development, Agile software development to be precise. That’s enough to put people off right there and then. But those who listen long enough invariably ask the big question: “Does Agile work outside of software?” This is the question Allan Kelly will attempt to answer in this presentation. […]
Categories: Blogs

Fast and Easy integration testing with Docker and Overcast

Xebia Blog - Mon, 10/13/2014 - 19:40
Challenges with integration testing

Suppose that you are writing a MongoDB driver for Java. To verify that all the implemented functionality works correctly, you ideally want to test it against a REAL MongoDB server. This brings a couple of challenges:

  • Mongo is not written in Java, so we cannot easily embed it in our Java application.
  • We need to install and configure MongoDB somewhere, and maintain the installation, or write scripts to set it up as part of our test run.
  • Every test we run against the Mongo server will change its state, and tests might influence each other. We want to isolate our tests as much as possible.
  • We want to test our driver against multiple versions of MongoDB.
  • We want to run the tests as fast as possible. If we want to run tests in parallel, we need multiple servers. How do we manage them?

Let's try to address these challenges.

First of all, we do not really want to implement our own MongoDB driver. Many implementations already exist, so we will be reusing the mongo Java driver and focus on how one would write the integration test code.

Overcast and Docker

We are going to use Docker and Overcast. You probably already know Docker: it's a technology to run applications inside software containers. Overcast is the library we will use to manage Docker for us. Overcast is an open source Java library
developed by XebiaLabs to help you write tests that connect to cloud hosts. Overcast has support for various cloud platforms, including EC2, VirtualBox, Vagrant and Libvirt (KVM). I recently added support for Docker in Overcast version 2.4.0.

Overcast helps you decouple your test code from the cloud host setup. You can define a cloud host with all its configuration separately from your tests; in your test code you only refer to a specific Overcast configuration. Overcast takes care of creating, starting, and provisioning that host, and when the tests are finished it also tears the host down. In your tests you use Overcast to get the hostname and ports of the cloud host so you can connect to them, because these are usually determined dynamically.

We will use Overcast to create Docker containers running a MongoDB server, and Overcast will help us retrieve the port dynamically exposed by the Docker host. The host in our case is always the Docker host, which runs on an external Linux machine; Overcast uses a TCP connection to communicate with Docker. We map the internal ports to ports on the Docker host to make them externally available. MongoDB internally runs on port 27017, but Docker maps this port to a local port in the range 49153 to 65535 (defined by Docker).

Setting up our tests

Let's get started. First, we need a Docker image with MongoDB installed. Thanks to the Docker community, this is as easy as reusing one of the already existing images from the Docker Hub. All the hard work of creating such an image has been done for us, and thanks to containers we can run it on any host capable of running Docker containers. How do we configure Overcast to run the MongoDB container? This is the minimal configuration we put in a file called overcast.conf:

mongodb {
    dockerHost="http://localhost:2375"
    dockerImage="mongo:2.7"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

That's all! The dockerHost is configured to be localhost with the default port; this is the default value and you can omit it. The Docker image called mongo, version 2.7, will be automatically pulled from the central Docker registry. We set exposeAllPorts to true to tell Docker to dynamically map all ports exposed by the image. We set remove to true to make sure the container is automatically removed when stopped. Notice that we override the default container startup command by passing in an extra parameter, "--smallfiles", to improve testing performance. For our setup this is all we need, but Overcast also has support for defining static port mappings, setting environment variables, etc. Have a look at the Overcast documentation for more details.

How do we use this overcast host in our test code? Let's have a look at the test code that sets up the Overcast host and instantiates the mongodb client that is used by every test. The code uses the TestNG @BeforeMethod and @AfterMethod annotations.

private CloudHost itestHost;
private Mongo mongoClient;

@BeforeMethod
public void before() throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost("mongodb");
    itestHost.setup();

    String host = itestHost.getHostName();
    int port = itestHost.getPort(27017);

    MongoClientOptions options = MongoClientOptions.builder()
        .connectTimeout(300 * 1000)
        .build();

    mongoClient = new MongoClient(new ServerAddress(host, port), options);
    logger.info("Mongo connection: " + mongoClient.toString());
}

@AfterMethod
public void after(){
    mongoClient.close();
    itestHost.teardown();
}

It is important to understand that the mongoClient is the object under test. As mentioned before, we borrowed this library to demonstrate how one would integration test such a library. The itestHost is the Overcast CloudHost. In before(), we instantiate the cloud host using the CloudHostFactory. The setup() call will pull the required images from the Docker registry, create a Docker container, and start it. We get the host and port from the itestHost and use them to build our Mongo client. Notice that we put a high connection timeout on the connection options to make sure the MongoDB server is started in time; the first run especially can take some time to pull images. You can of course always pull the images beforehand. In the @AfterMethod, we simply close the connection with MongoDB and tear down the Docker container.

Writing a test

The before and after methods are executed for every test, so we get a completely clean MongoDB server for every test, running on a different port. This completely isolates our test cases so that no test can influence another. You are free to choose your own testing strategy; sharing a cloud host between multiple tests is also possible. Let's have a look at one of the tests we wrote for the Mongo client:

@Test
public void shouldCountDocuments() throws DockerException, InterruptedException, UnknownHostException {

    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");
    BasicDBObject doc = new BasicDBObject("name", "MongoDB");

    for (int i=0; i < 100; i++) {
        WriteResult writeResult = coll.insert(new BasicDBObject("i", i));
        logger.info("writing document " + writeResult);
    }

    int count = (int) coll.getCount();
    assertThat(count, equalTo(100));
}

Even without knowledge of MongoDB this test should not be hard to understand. It creates a database and a new collection, and inserts 100 documents into it. Finally the test asserts that the getCount method returns the correct number of documents in the collection. Many more aspects of the MongoDB client can be tested in additional tests in this way; in our example setup we have implemented two more tests to demonstrate this, so the example project contains 3 tests. When you run the 3 example tests sequentially (assuming the mongo Docker image has been pulled), you will see that it takes only a few seconds to run them all. This is extremely fast.
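
To give a feel for what such an additional test might look like, here is a small sketch that lives in the same test class and reuses the mongoClient field set up in before(). The method name and document fields are made up for this illustration (they are not taken from the example project), and besides the imports already used above it only needs com.mongodb.DBObject and Hamcrest's notNullValue matcher:

@Test
public void shouldFindInsertedDocument() {
    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");

    // insert a single document and read it back by its "name" field
    coll.insert(new BasicDBObject("name", "MongoDB").append("type", "database"));
    DBObject found = coll.findOne(new BasicDBObject("name", "MongoDB"));

    // the document should exist and carry the field we stored
    assertThat(found, notNullValue());
    assertThat((String) found.get("type"), equalTo("database"));
}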

Testing against multiple MongoDB versions

We also want to run all our integration tests against different versions of the MongoDB server to ensure there are no regressions. Overcast allows you to define multiple configurations. Let's add configurations for two more versions of MongoDB:

defaultConfig {
    dockerHost="http://localhost:2375"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

mongodb27=${defaultConfig}
mongodb27.dockerImage="mongo:2.7"

mongodb26=${defaultConfig}
mongodb26.dockerImage="mongo:2.6"

mongodb24=${defaultConfig}
mongodb24.dockerImage="mongo:2.4"

The default configuration contains the settings we have already seen. The other three configurations extend defaultConfig and each define a specific MongoDB image version. Let's also change our test code a little bit so that the Overcast configuration used in the test setup depends on a parameter:

@Parameters("overcastConfig")
@BeforeMethod
public void before(String overcastConfig) throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost(overcastConfig);
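    // ... the rest of before() is unchanged from the full setup method shown earlier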

Here we use the parameterized tests feature of TestNG. We can now define a TestNG suite that defines our test cases and passes in the different Overcast configurations. Let's have a look at our TestNG suite definition:

<suite name="MongoSuite" verbose="1">
    <test name="MongoDB27tests">
        <parameter name="overcastConfig" value="mongodb27"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB26tests">
        <parameter name="overcastConfig" value="mongodb26"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB24tests">
        <parameter name="overcastConfig" value="mongodb24"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
</suite>

With this test suite definition we define 3 test cases that each pass a different Overcast configuration to the tests. The Overcast configuration plus the TestNG configuration enables us to externally configure which MongoDB versions we want to run our test cases against.
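
As a side note, the suite does not have to be started from an IDE: TestNG can also be invoked programmatically. The small runner below is our own sketch rather than part of the example project, and it assumes the suite definition above has been saved as testng.xml in the working directory:

import org.testng.TestNG;

import java.util.ArrayList;
import java.util.List;

public class MongoSuiteRunner {
    public static void main(String[] args) {
        // Point TestNG at the suite file; the overcastConfig parameters (and, in the
        // next section, the parallel settings) are picked up from the suite itself.
        List<String> suites = new ArrayList<String>();
        suites.add("testng.xml");

        TestNG testng = new TestNG();
        testng.setTestSuites(suites);
        testng.run();
    }
}

The article itself runs the suite from IntelliJ (and the example project uses Gradle), so this runner is purely optional.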

Parallel test execution

Until this point, all tests have been executed sequentially. Due to the dynamic nature of cloud hosts and Docker, nothing prevents us from running multiple containers at once. Let's change the TestNG configuration a little bit to enable parallel testing:

<suite name="MongoSuite" verbose="1" parallel="tests" thread-count="3">

This configuration causes all 3 test cases from our test suite definition to run in parallel (in other words, our 3 Overcast configurations with different MongoDB versions). Let's run the tests now from IntelliJ and see if they all pass:

[Screenshot: test run results in IntelliJ]
We see 9 executed tests because we have 3 tests and 3 configurations. All 9 tests passed. The total execution time turned out to be under 9 seconds. That's pretty impressive!

During test execution we can see Docker starting up multiple containers (see next screenshot). As expected, it shows 3 containers with different image versions running simultaneously. It also shows the dynamic port mappings in the "PORTS" column:

[Screenshot: docker ps output showing the three running containers and their port mappings]

That's it!

Summary

To summarise, the advantages of using Docker with Overcast for integration testing are:

  1. Minimal setup. Only a Docker-capable host is required to run the tests.
  2. Save time. Thanks to the Docker community, only a minimal amount of configuration and infrastructure setup is required to run the integration tests.
  3. Isolation. Every test runs in its own isolated environment, so tests cannot affect each other.
  4. Flexibility. Use multiple Overcast configurations and parameterized tests to test against multiple versions.
  5. Speed. Docker containers start up very quickly, and Overcast and TestNG even let you parallelize the testing by running multiple containers at once.

The example code for our integration test project is available here. You can use Boot2Docker to set up a Docker host on Mac or Windows.

Happy testing!

Paul van der Ende 

Note: due to a bug in the Gradle parallel test runner you might run into this random failure when you run the example test code yourself. The workaround is to disable parallelism or to use a different test runner such as IntelliJ or Maven.

 

Categories: Companies

How Digital is Changing Physical Experiences

J.D. Meier's Blog - Mon, 10/13/2014 - 18:12

The business economy is going through massive change, as the old world meets the new world.

The convergence of mobility, analytics, social media, cloud computing, and embedded devices is driving the next wave of digital business transformation, where the physical world meets new online possibilities.

And it’s not limited to high-tech and media companies.

Businesses that master the digital landscape are able to gain strategic, competitive advantage.   They are able to create new customer experiences, they are able to gain better insights into customers, and they are able to respond to new opportunities and changing demands in a seamless and agile way.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee share some of the ways that businesses are meshing the physical experience with the digital experience to generate new business value.

Provide Customers with an Integrated Experience

Businesses that win find new ways to blend the physical world with the digital world.  To serve customers better, businesses are integrating the experience across physical, phone, mail, social, and mobile channels for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Companies with multiple channels to customers--physical, phone, mail, social, mobile, and so on--are experiencing pressure to provide an integrated experience.  Delivering these omni-channel experiences requires envisioning and implementing change across both front-end and operational processes.  Innovation does not come from opposing the old and the new.  But as Burberry has shown,  innovation comes from creatively meshing the digital and the physical to reinvent new and compelling customer experiences and to foster continuous innovation.”

Bridge In-Store Experiences with New Online Possibilities

Starbucks is a simple example of blending digital experiences with their physical store.   To serve customers better, they deliver premium content to their in-store customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Similarly, the unique Starbucks experience is rooted in connecting with customers in engaging ways.  But Starbucks does not stop with the physical store.  It has digitally enriched the customer experience by bridging its local, in-store experience with attractive new online possibilities.  Delivered via a free Wi-Fi connection, the Starbucks Digital Network offers in-store customers premium digital content, such as the New York Times or The Economist, to enjoy alongside their coffee.  The network also offers access to local content, from free local restaurant reviews from Zagat to check-in via Foursquare.”

An Example of Museums Blending Technology + Art

Museums can create new possibilities by turning walls into digital displays.  With a digital display, the museum can showcase all of their collections and provide rich information, as well as create new backdrops, or tailor information and tours for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Combining physical and digital to enhance customer experiences is not limited to just commercial enterprises.  Public services are getting on the act.  The Cleveland Museum of Art is using technology to enhance the experience and the management of visitors.  'EVERY museum is searching for this holy grail, this blending of technology and art,' said David Franklin, the director of the museum.

 

Forty-foot-wide touch screens display greeting-card-sized images of all three thousand objects, and offer information like the location of the actual piece.  By touching an icon on the image, visitors can transfer it from the wall to an iPad (their own, or rented from the museum for $5 a day), creating a personal list of favorites.  From this list, visitors can design a personalized tour, which they can share with others.

 

'There is only so much information you can put on a wall, and no one walks around with catalogs anymore,' Franklin said.  The app can produce a photo of the artwork's original setting--seeing a tapestry in a room filled with tapestries, rather than in a white-walled gallery, is more interesting.  Another feature lets you take the elements of a large tapestry and rearrange them in either comic-book or movie-trailer format.  The experience becomes fun, educational, and engaging.  This reinvention has lured new technology-savvy visitors, but has also made seasoned museum-goers come more often.”

As you figure out the future capability vision for your business, and re-imagine what’s possible, consider how the Nexus of Forces (Cloud, Mobile, Social, and Big Data), along with the mega-trend (Internet-of-Things), can help you shape your digital business transformation.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Blogs

Manage Agile, Berlin, Germany, October 27-30 2014

Scrum Expert - Mon, 10/13/2014 - 17:32
Manage Agile is a four-day conference focused on Agile management topics – no purely technological aspects. The Manage Agile conference is a networking platform where specialists and managers discuss agile topics not only in software engineering but across the whole company, up to the management level. The conference is divided into two workshop days and two conference days. Most of the presentations are in German, but you will also find English content. In the agenda of Manage Agile you can find topics like “The roots of Agile and Lean in ...
Categories: Communities

Xebia KnowledgeCast Episode 5: Madhur Kathuria and Scrum Day Europe 2014

Xebia Blog - Mon, 10/13/2014 - 11:48

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this 5th episode, we share key insights of Madhur Kathuria, Xebia India’s Director of Agile Consulting and Transformation, as well as some impressions of our Knowledge Exchange and Scrum Day Europe 2014. And of course, Serge Beaumont will have Fun With Stickies!

First, Madhur Kathuria shares his vision on Agile and we interview Guido Schoonheim at Scrum Day Europe 2014.

In this episode's Fun With Stickies Serge Beaumont talks about wide versus deep retrospectives.

Then, we interview Martin Olesen and Patricia Kong at Scrum Day Europe 2014.

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!

Credits

Categories: Companies

Design Thinking for Product Owners – Part 5: Where Do Good Ideas Actually Come From?

Scrum 4 You - Mon, 10/13/2014 - 07:30

Our brain is capable of astonishing things. What if we used more of this potential for problem solving? You have surely heard the saying: "Creative people work hardest when it looks like they are doing nothing at all." And there really is something to it!

Everyday business is dominated by the analytical mode of thinking. When it comes to finding new solutions, however, it is advisable to venture into other modes of thinking as well. In our Design Thinking trainings for Product Owners and ScrumMasters, for example, the idea-generation phase is interrupted by lunch: the participants have spent 30 minutes in intensive brainstorming, their analytical thinking has already produced most of the ideas and possible solutions, and these are now waiting on colourful sticky notes on the wall. The stream of ideas is slowly drying up. – Break –

We deliberately switch off the analytical mode of thinking, but you can be sure that the thoughts keep circling in the back of the mind. We go to lunch, together or individually, and the only condition is that everyone has a pad of sticky notes and a pen in their pocket. And now something astonishing happens: the participants get the opportunity to let their attention drift into the areas of reflection, inspiration or letting go. After the lunch break we start with a team resync, i.e. we take 15 minutes to collect new ideas and insights. Experience shows that at least half of the participants bring very valuable new impulses back to their team: "I saw something out there and it reminded me that…", or "when I thought about it again in peace and quiet, I realised how important this aspect is to me", or "I hadn't thought of it at all, but suddenly I had the idea…".

Analysis, Inspiration, Reflection & Letting Go

Let us now take a closer look at these four modes of thinking and consider the two-by-two diagram of attention. The horizontal axis runs from narrow (left) to wide (right), the vertical axis from inward (bottom) to outward (top). This gives us four quadrants.

[Image: Design Thinking & Change Management flipcharts]

Attention narrow and directed outward: Analysis

This is where we usually operate in a business environment. This way of thinking is exhausting; we can keep it up for at most 1.5 hours before our performance drops off rapidly. Analysis is well suited to getting to grips with a question initially and building a basic understanding. If you search for a solution in this mode alone, however, you will rarely find an innovation.

Attention narrow and directed inward: Reflection

We move in this area when we think about our priorities and personal goals, tapping into our knowledge and experience. This area is valuable, for example, for evaluating a pile of generated ideas on your own and sharing those results with others later.

Attention wide and directed outward: Inspiration

We venture here more rarely, because now our imagination and the experiences of other people come into play. The business world is afraid of this area because it cannot be calculated or planned. Many creativity techniques try to push into this space of the conceivable. Sometimes these forays have to happen very gently, because many people are stepping onto terrain that feels unsafe to them. A visit to the user, for example, takes software developers or engineers into the area of inspiration: "Just for a moment… we'll be right back… just briefly out to where the user is… just briefly not thinking about feasibility… and then hop, hop, quickly back." It is astonishing, time and again, how many insights and ideas are brought back from such excursions.
Another, even more comfortable way of reaching the space of the conceivable is to move the question to other places or times: "How would Superman solve this? What would this look like on the Enterprise, or in the Middle Ages?"

Attention wide and directed inward: Letting go

This is a defocused state; it is where metaphors and jokes are understood. It is an area that is usually ignored completely. Whoever discovers it, however, can watch solutions materialise in their head all by themselves – usually just when they are least expected. Some people can bring this state about quickly after years of meditation practice. I sometimes manage it in the shower after sport, when my body stands there exhausted, water flows over my head and my thoughts drift along with the water. Occasionally I "wake up" from this state with the urgent feeling that I need to write down a kind of "revelation".

Integration into the company

In companies it is usually not easy to leave the analytical area. Exploring other ways of thinking needs time and space! In addition, the corporate culture homogenises the way employees think, and tradition and habit prevent people from thinking outside the box. That is exactly why "creatives" are often most productive precisely when it does not look like it: they have found ways to give themselves over to the other modes of thinking in peace.

For innovation work, the rule is therefore:
Out of the routine -> create a special situation -> bring it back into everyday work

This process needs professional facilitation. External coaches are therefore a quick and inexpensive solution, especially since innovation always needs a channel from the outside to the inside. But the interfaces in this chain are critical as well: transferring the knowledge requires an internal constant, and in the best case that constant is the Product Owner.

Tip: You can experiment with the different kinds of attention in our training "Produktfindung mit Design Thinking" (product discovery with Design Thinking). All information and dates are available here.

Design Thinking für Product Owner – Teil 1: Was ist eigentlich Design Thinking?
Design Thinking für Product Owner – Teil 2: Das Design-Thinking-Team
Design Thinking für Product Owner – Teil 3: Des Design-Thinking-Raum
Design Thinking für Product Owner – Teil 4: Der Design-Thinking-Prozess

Related posts:

  1. Produktfindung mit Design Thinking
  2. Design Thinking für Product Owner – Teil 1: Was ist eigentlich Design Thinking?
  3. Design Thinking für Product Owner – Teil 2: Das Design-Thinking-Team

Categories: Blogs

Hungarian Notation for Teams

Agile Tools - Mon, 10/13/2014 - 07:04


Back in the day when I was writing Windows programs there was this thing called Hungarian notation. It was a form of shorthand that allowed you to encode the type of a variable in its name. It led to variable names like “lpszUserName”, which stood for “long pointer to a zero-terminated string named UserName.” It made for some pretty awkward variable names, but the idea was that you could always tell the type of a variable, even if you couldn’t see the declaration. It was kind of handy, at least until somebody changed the variable’s type and forgot to change the name. In hindsight, it was always doomed to fail for almost any kind of legacy code: name the variable wrong and you introduce subtle bugs that will haunt you for years.

So there we were, looking at a list of teams the other day. They had a lot of interesting things in common. They all had some specialization in a given domain, and often they had different geographic locations. We were wondering if perhaps some sort of naming convention should be applied to their names. That’s when I perked up and said, “Hungarian notation for teams!” If the team is located in Bellevue, then we use ‘bv’. If the team is in the mobile domain, we use ‘mb’. So for a team named “Viper” located in Bellevue doing mobile development we would have “bvmbViper”! Maybe you have a team located in San Francisco (‘sf’) that works on web apps (‘wa’) called “Cheetah”: that gives us “sfwaCheetah.” Now you can simply look at the name of a team and know instantly where they work and what they work on.

Genius! Maybe we should do this for people too? I’m an Agile manager ‘am’ who writes a blog ‘bl’. You can call me “amblTomPerry”


Filed under: Agile, Humor Tagged: hungarian notiation, name, notation, Teams
Categories: Blogs

Building the Agile Culture you want

When some organizations think of going Agile, they tend to gravitate toward applying a set of Agile practices.  While this provides insight into the mechanical elements of agile, these types of implementations tend to overlook the cultural elements.  A move to Agile implies that you make the cultural transformation to embrace the Agile values and principles and put them into action. 
Adapting an organization's culture is effectively an effort in change management, and changing a culture is hard. People underestimate the difficulty of a culture change within their organization because it involves the cooperation of everyone, which is why some organizations avoid it. But the business benefits can be tremendous. I have seen Agile efforts get started with poorly stated objectives and motivations, a lack of employee ownership or engagement, and a lack of thinking through the effort. Agile journeys also benefit significantly from education in both change management and Agile techniques to achieve a meaningful cultural change. I have seen companies assign a member of senior management as the change agent even though they have neither education nor experience in change management; a better approach may be to hire an Agile Coach with both change management and Agile experience.
Creating or adapting a culture is not done by accident. It must be treated as a change initiative and thought through. As part of preparing to deploy Agile, start the process of adapting to an Agile mindset and the culture you are looking for. What are some activities that will help you move to an agile culture?  Some include:
  • Recognizing that moving to Agile is a cultural change (it’s a journey)
  • Sharing and embracing the Agile values and principles (seriously folks!)
  • Moving to an end-to-end view of delivering value (don’t stop at just the build portion)
  • Adapting your governance to focus on value (enough with the cost, schedule, and scope!)
  • Evaluating employee willingness (employees are your brainpower!)
  • Gaining continuous feedback from customers (adapt toward customer value)
  • Adapting the reward system to align with the new culture (toward team and value)
  • Assessing executive support (build engagement along the way)

What other activities would benefit you in getting to an Agile culture?  Ultimately you want to start living the values and principles that help you develop the culture you are looking for.  As you have approached Agile in the past, how much of it was focused on the mechanics and how much was focused on adapting to an Agile culture? 

PS - to read more about really making the shift toward an Agile culture, consider reading the Agile book entitled Being Agile.  
Categories: Blogs

correction: Agile HARDWARE ....

Agile Scotland - Sun, 10/12/2014 - 20:08
I forgot the word HARDWARE in the last email.  I'm losing it ...

Great news! Agile Engineering expert, Nancy Van Schooenderwoert is over in Scotland again and is giving a seminar on the evening of the 30th of October on Agile Engineering. It's at Glasgow Caledonian University.
http://www.eventbrite.co.uk/e/agilescotland-agile-hardware-development-tickets-13660303335?utm_campaign=new_eventv2&utm_medium=email&utm_source=eb_email&utm_term=eventname_text

Clarke
Categories: Communities

[AgileScotland] Agile Engineering seminar - Glasgow Caledonian University - 30th October, start 7pm

Agile Scotland - Sun, 10/12/2014 - 20:00
Great news! Agile Engineering expert, Nancy Van Schooenderwoert is over in Scotland again and is giving a seminar on the evening of the 30th of October on Agile Engineering. It's at Glasgow Caledonian University.
http://www.eventbrite.co.uk/e/agilescotland-agile-hardware-development-tickets-13660303335?utm_campaign=new_eventv2&utm_medium=email&utm_source=eb_email&utm_term=eventname_text

Clarke
Categories: Communities

Building Glass Houses: Creating the Transparent Organization

Agile Tools - Sun, 10/12/2014 - 07:54


Visual management occurs at many levels. There is personal transparency: the ability for people to see what you are working on within the team. Then there is team transparency: the ability for stakeholders and other teams to see what the team is working on. Finally, there is organizational transparency: the ability for people within and outside the organization to see what the organization is working on. Ideally, we have all three levels of transparency fully developed in an Agile organization.

Individual transparency consists of the ways in which we communicate the state of our work to the team. We can use both active and passive mechanisms to achieve this. Active mechanisms include one-way broadcasts like Yammer, or just shouting out when you need help, achieve a victory, or otherwise want to share with the team, as well as two-way communication like the status in the daily standup, one-on-one conversations, and working meetings like the planning and the demo. Passive mechanisms include updating things like task boards, wiki pages, and status reports. All of this information is primarily directed at the team.

At the team level there are active and passive mechanisms for communication. There are burn down charts, task boards, calendars, which are all passive. Then there is the active communication that takes place at the scrum of scrums and other larger forums where multiple teams and stakeholders meet. I’ve often seen teams struggle to get information out at this level. They tend to do really well at the individual level, but at the team level it is not uncommon to find that teams aren’t getting enough information out beyond their own boundaries.

Finally at the organizational level there are active and passive mechanisms for communication as well. There are passive communication mechanisms like annual reports, company web pages, intranets, and billboards in the coffee room. There is also active communication at company meetings, and…often not much else. This is an area where as Agilists we need the most improvement. It seems as though the communication demands get more challenging the higher up the organization that you go.


Filed under: Agile, Coaching, Lean, Process Tagged: information radiators, transparency
Categories: Blogs

Selling Agile Contracts Slides from Agile Philly 1/2 Day Event 10/6/2014

Agile Philly - Sun, 10/12/2014 - 04:30

Hi All.

Noticed that the Dropbox link to my preso in the mail John circulated was broken so...

I renamed the preso to remove spaces, posted it on SlideShare, and created a tiny URL so you can get to it easily!

Enjoy: http://tinyurl.com/agile-contracts

I really enjoyed sharing this with the group. Please do not hesitate to contact me if you have questions or just want to discuss. I am definitely…

Categories: Communities

BDD Using Cucumber JVM and Groovy (video)

TestDriven.com - Sat, 10/11/2014 - 21:47
BDD Using Cucumber JVM and Groovy
Categories: Communities

Uh Oh... We Discovered More Stories!

Practical Agility - Dave Rooney - Sat, 10/11/2014 - 18:00
As I've said before, I'm a huge fan of Jeff Patton's Story Mapping technique. While Story Mapping goes a long way towards identifying the work that needs to be completed to deliver a viable system, you will inevitably miss some stories. This is a natural outcome of the discovery process that is inherent to software development. When you discover that some functionality is missing or
Categories: Blogs

Codea Calculator II

xProgramming.com - Sat, 10/11/2014 - 15:39

Ignatz and Jmv38 on the Codea forums commented on the previous article. I had hoped to do more anyway so here’s the next one.

Note that this article is different from what it might have been, because I’ve had conversations with other programmers looking at (working with) the code. This isn’t bad, it’s good. It’s like pair programming without all the travel. Be aware, though, that many of the “discoveries” in this article have already been discovered. There’s a long note on the forum replying to their comments. This article will follow that note’s line of reasoning, but I plan to rewrite it rather than copy and edit.

Because that’s what I do.

GUI

Ignatz and Jmv38 talked about why I hadn’t built the GUI and whether I should. I plan to do so, but this isn’t that article.

It’s worth mentioning here that I find GUIs hard to test, so I like to write really solid “model” objects, test the heck out of them, and then write thin GUIs where “nothing can go wrong”. Or at least very little.

Even so, some testing is probably needed, and I think we’ll discover that in the GUI article when I get to it.

Duplication

There is significant duplication in the calculator as it stands. Take a look:

    function Calculator:press(key)
        if key == "*" then
            self:doPending(function(d,t) return d*t end)
        elseif key == "/" then
            self:doPending(function(d,t) return t/d end)
        elseif key == "-" then
            self:doPending(function(d,t) return t-d end)
        elseif key == "+" then
            self:doPending(function(d,t) return t+d end)
        elseif key == "=" then
            self:doPending(nil)
        else
            if self.first then
                self.temp = self.display
                self.display = ""
                self.first = false
            end
            self.display = self.display..key
        end
    end

There are four or five if statements, each checking a key and then doing a doPending. And inside each doPending there is a function definition that looks exactly the same except for the operation done inside. (I note that they are all in t-then-d order except the first one. That’s because I did a non-commutative operator second and t has to be first. I should change the first one to match, and I will. (I just did, and the tests all run.))

We’d like to remove this duplication. Perhaps I should stop and say why.

Why we remove duplication

I’ve mentioned some of the concerns already. If there is duplication in the code, that tells us that there is some idea, either in our mind or emerging in the code, that is not well-represented in the code. Here, the idea is “hey, all these operators are alike”.

If there is duplication and changes are needed, we have to make the changes in many places. This is tedious and error-prone.

Here’s a thing I haven’t mentioned in this series yet. Long ago, Kent Beck (creator of Extreme Programming) listed the elements of “simple code”. His list has taken a few forms but the one I prefer is this one:

Simple code:

  1. Runs all the tests.
  2. Contains no duplication.
  3. Expresses all the programmer’s ideas.
  4. Minimizes the number of programming entities: lines, classes, files, etc.

These items are in priority order.

Now if you think about these, you might think “But wait, expressing ideas might require duplication. Numbers 2 and 3 should be reversed.” Well, you might be right. If there were ever a conflict. But I prefer them in this order, because more often than not, duplication to “express an idea” really means that the code contains an idea that I have not as yet recognized. So I like to feel the conflict.

The list does come in the other order and in other forms. Find one you like and use it. I like this one.

Anyway, we’ve got this code: what shall we do about it?

Issues

I see two issues with those ifs. First of all, they are clearly very redundant. The duplication should be removed. But they are also a bit unclear: those embedded functions inside the doPending inside the then make the code tricky to read, especially since we don’t expect to see functions written in line like that. At least I don’t.

Both the following ideas come from Jmv38. We could break the functions out like this:

function Calculator.mult(d,t) return t*d end
function Calculator.divide(d,t) return t/d end
function Calculator.minus(d,t) return t-d end
function Calculator.add(d,t) return d+t end

Then we’d say, for example:

if key == "*" then
    self:doPending(self.mult)

and so on. That might be a bit more clear but still leaves us with duplication.

Maybe we would look at this and realize that there is a one-to-one relationship between the character pressed and the function to be executed. (And the function has that character in the middle of it, but I can’t figure a way in Codea to just do that. Unless I compile on the fly. No. Just no.)

So Jmv38 suggests that we build a local table, indexed by the key pressed:

local operations = {}
operations["*"] = function(d,t) return t*d end
operations["+"] = function(d,t) return t+d end
operations["/"] = function(d,t) return t/d end
operations["-"] = function(d,t) return t-d end 

Now as it happens, I didn’t realize that one could define a local table that way. It’s “obvious” when you think about how Codea works but it wasn’t obvious to me. This is why pair programming is so powerful. The other person knows stuff you don’t know and thinks of stuff you don’t think of.

We’re left with the question of how to access the table. I think I prefer this:

function Calculator:press(key)
    if key=="*" or key=="+" or key=="/" or key=="-" then
        self:doPending(operations[key])
    elseif key == "=" then
        self:doPending(nil)
    else
        if self.first then
            self.temp = self.display
            self.display = ""
            self.first = false
        end
        self.display = self.display..key
    end
end

Are you starting to wish these things were in order plus minus times divide? I am. Something in my brain wants them to be that way. I’ll fix that because what good is having a touch of OCD if you don’t give in to it once in a while?

Done, and the tests still run. I feel better now.

Now Jmv38 suggests going a step further with

local f = operations[key]
if f then
    doPending(f)
elseif key == "=" then
    ...

Nothing wrong with that, probably. But it does require us to look back and say “When could f be nil? Oh, if key isn’t one of those operators. Hmm …”

I don’t like things that make me go “Hmmm”. So I prefer the or. However, there are a lot of binary operators in the world, and that if could get pretty long. Jmv38’s approach extends to new operators smoothly. So it could be better. I prefer it more explicit. YMMV.

Summing up …

First of all many props to Ignatz and Jmv38 for their ideas and questions. We wouldn’t be here without them.

I’ll show the relevant code one more time below. First, the discussion. We saw more duplication, and we thought of ways to remove it. We also saw some code that didn’t express itself well. We considered ways of fixing it but (with help) we saw an approach that seemed to promise help on both dimensions. We tried it, we liked it.

But those tests!! Without those tests, I’d have been scared as hell about making a major change like a table of functions and removing all my if statements. It would be too easy to screw it up.

The tests give us confidence that when we screw up, we’ll find out before anyone else does. That’s a very good thing.

Next time, the GUI. Might be soon, might not. I’ve got a gig next week.

Current Code
--# Calculator
Calculator = class()

function Calculator:init()
    self.display = ""
    self.op = nil
    self.first = false
end

local operations = {}
operations["+"] = function(d,t) return t+d end
operations["-"] = function(d,t) return t-d end
operations["*"] = function(d,t) return t*d end
operations["/"] = function(d,t) return t/d end

function Calculator:press(key)
    if key=="+" or key=="-" or key=="*" or key=="/" then
        self:doPending(operations[key])
    elseif key == "=" then
        self:doPending(nil)
    else
        if self.first then
            self.temp = self.display
            self.display = ""
            self.first = false
        end
        self.display = self.display..key
    end
end

function Calculator:check(key, result)
    self:press(key)
    self:displayIs(result)
end

function Calculator:doPending(func)
    if self.op ~= nil then
        self.display = "" .. (self.op(self.display, self.temp))
    end
    self.first = true
    self.op = func
end

function Calculator:displayIs(string)
    if self.display ~= string then
        local diag = "expected /"..string.."/ but got /"..self.display.."/!"
        print(diag)
    end
end

function Calculator:draw()
end

function Calculator:touched(touch)
end

--# Main
-- Article Calc

-- Use this function to perform your initial setup
function setup()
    print("tests begin")
    local c = Calculator()
    c:check("1","1")
    c:check("2","12")
    c:check("*","12")
    c:check("3","3")
    c:check("=","36")

    c = Calculator()
    c:check("4","4")
    c:check("5","45")
    c:check("0","450")
    c:check("/","450")
    c:check("1","1")
    c:check("5","15")
    c:check("-","30")
    c:check("1","1")
    c:check("9","19")
    c:check("=","11")
    c:check("+","11")
    c:check("5","5")
    c:check("=","16")
    print("tests end")
end

-- This function gets called once every frame
function draw()
    background(40, 40, 50)
end
Categories: Blogs

VersionOne Announces SAFe 3.0 Alignment

Agile Product Owner - Sat, 10/11/2014 - 14:11

VersionOne just announced their latest update in support of SAFe 3.0. See here for details.

Categories: Blogs

Lessons from running Neo4j based ‘hackathons’

Mark Needham - Sat, 10/11/2014 - 12:52

Over the last 6 months my colleagues and I have been running hands on Neo4j based sessions every few weeks and I was recently asked if I could write up the lessons we’ve learned.

So in no particular order here are some of the things that we’ve learnt:

Have a plan but don’t stick to it rigidly

Something we learnt early on is that it’s helpful to have a rough plan of how you’re going to spend the session otherwise it can feel quite chaotic for attendees.

We show people that plan at the beginning of the session so that they know what to expect and can plan their time accordingly if the second part doesn’t interest them as much.

Having said that, we’ve often gone off on a tangent and since people have been very interested in that we’ve just gone with it.

This sometimes means that you don’t cover everything you had in mind but the main thing is that people enjoy themselves so it’s nothing to worry about.

Prepare for people to be unprepared

We try to set expectations in advance of the sessions with respect to what people should prepare or have installed on their machines, but despite that you’ll have people at varying levels of readiness.

Having noticed this trend over a few months we now allot time in the schedule for getting people up and running and if we’re really struggling then we’ll ask people to pair with each other.

There will also be experience level differences so we always factor in some time to go over the basics for those who are new. We also encourage experienced people to help the others out – after all you only really know if you know something when you try to teach someone else!

Don’t try to do too much

Our first ‘hackathon’-esque event involved an attempt to build a Java application based on a British Library dataset.

I thought we’d be able to model the data set, import it and then wire up some queries to an application in a few hours.

This proved to be ever so slightly ambitious!

It took much longer than anticipated to do those first two steps and we didn’t get to build any of the application – teaching people how to model in a graph is probably a session in its own right.

Show the end state

Feedback we got from attendees to the first few versions was that they’d like to see what the end state should have looked like if they’d completed everything.

In our Clojure Hackathon Rohit got the furthest so we shared his code with everyone afterwards.

An even better approach is to have the final solution ready in advance and have it checked in on a different branch that you can point people at afterwards.

Show the intermediate states

Another thing we noticed was that if people got behind in the first part of the session then they’d never be able to catch up.

Nigel therefore came up with the idea of snapshotting intermediate states so that people could reset themselves after each part of the session. This is something that the Polymer tutorial does as well.

We worked out that we have two solid one hour slots before people start getting distracted by their journey home so we came up with two distinct types of tasks for people to do and then created a branch with the solution at the end of those tasks.

No doubt there will be more lessons to come as we run more sessions, but this is where we are at the moment. If you fancy joining in, our next session is Java-based and runs in a couple of weeks’ time.

Finally, if you want to see a really slick hands-on meetup then you’ll want to head over to the London Clojure Dojo. Bruce Durling has even written up some tips on how to run one yourself.

Categories: Blogs

Ripping the Planning Out of Agile

Agile Tools - Sat, 10/11/2014 - 08:22


Recently I was following some twitter feed about #NoEstimates. I’m no expert, but it seems to be a conversation about the fundamental value, or lack of value, that planning provides to teams. What they seem to be arguing is that planning represents a lot of wasted effort that would be better spent elsewhere.

Fundamentally I would have to agree. I’ve wasted a tremendous amount of time arguing about story points, burning down hours, and calculating person days – all for what seems like very little benefit.

What I would rather do is spend more time talking about the problem we are trying to solve. I really value a deep understanding of the system and the changes that we intend to make to it. If I have that much, then I’m well situated to deliver fast enough that nobody’s going to give me much grief about not having estimates. That’s my theory anyway. The sooner you can deliver working software, the sooner people will shut up about estimates.

But often we never do talk about the problem at anything other than a very superficial level. We spend most of our time trying to size the effort according to some artificial schema that has nothing to do with the work or any real empirical evidence at all.

So what if there were no plan? What if we took Scrum and did everything but the planning? You show up Monday morning and you have no idea what you are going to work on. The team sits down with the customer and talks about their most pressing need. They work out what they need to build, make important design decisions, and coordinate among themselves. At no point are there any hours, or points, or days. What would happen to the cadence of the sprint if we removed the planning? Basically, we would have our daily standup, and then we would review our accomplishments at the end of the sprint and look for ways to improve.

That sounds pretty good actually. Like anything else, I’m sure it has pros and cons:

Pros: Save time and energy otherwise wasted on estimation, and use that time instead for important problem solving work.

Cons: Stakeholders really like estimates. It’s like crack. They start to shake and twitch if you take their estimates away. Not many will even let you talk about it.

It might be worth a try sometime. It would certainly make an interesting experiment for a sprint or two. What if the sprint were focused entirely on the improvement cycle instead?


Filed under: Agile, Scrum Tagged: #noestimates, Agile, estimates, Planning, sizing
Categories: Blogs

I am an ironing board

Business Craftsmanship - Tobias Mayer - Fri, 10/10/2014 - 17:46

Session Notes for Open Space session at Agile Open, California, Thursday 9th October 2014. Around 25-30 people in attendance.


Session title: Metaphor—Why?

Introduction: (System) metaphor is one of the original XP principles, but has been neglected over time in favor of other, more actionable principles such as test-driven development and continuous integration. A system metaphor creates a shared understanding, and its absence often creates misunderstanding and misalignment. In this session I’m interested in exploring metaphor, not specifically for code, but for human systems, and also individuals. Metaphor can create new understanding, new ways of looking at familiar situations. What might it mean, for example, to say to someone, “you are an ironing board”.

Part #1

For want of a plan of any kind, I started the session with this question, and asked participants to pair up and explore what it meant to them. Some pairs explored it from a personal perspective, and some from the perspective of their work role, e.g. developer or Agile coach. In sharing we focused on “I” statements, and paraphrasing…

"When I am folded away in a closet I see the world differently to when I am out in the open, being of service."

"I seek something hot and steamy to be pressed against me."

"I am the body of workers that supports the change agent (the iron) in removing the wrinkles in my organization (the clothing). Without me there is no foundation for improvement"

A nice corollary for this last one, from the iron’s perspective, was “If I stay too long I will burn a hole, and ruin what I am trying to improve.”

The neat thing about facilitating an open space session, is the sense of release. The session goes where it goes, and all I needed to do was embrace the ideas that emerged, turn them into offers, and suggest some containment for the dialog.

Part #2

The “I am” statement led a participant to be reminded of an improv exercise called Invocation, where an object is identified and spoken of in four different ways, each one taking the speaker from objectivity to subjectivity: “It is…”, “You are…”, “Thou art…” and “I am…” We practiced this framework in the second part, people finding a new partner and offering an object of their own choice, e.g. bird, flipchart marker, redwood tree, highway… The use of this framework led people to greater empathy. Was that what we were seeking with metaphor? Perhaps. The following discussion looked at the difference between the incremental understanding and the immediate jump to “I am”. The latter is more of an analogy: I am like this thing because… opening up new ways of seeing self, while the former takes the speaker deeper into a place of understanding of the other (thing).

Note: when my partner struggled with the “Thou art” part, I asked him to recite a love sonnet to the flipchart marker he was holding. “Thou art an extension of my imagination…”, he began. I don’t recall the rest but it was very eloquent, and oddly moving. When he tried to do the same in front of the group he wasn’t able to, and afterwards commented how wrong it felt to express intimacy in front of a large group.

Part #3

We discussed the idea of using another person as the “object”. A traffic cop. 1) It is an authority figure, uniformed, unwelcome. 2) You are a man with a job to do. You need to earn a living like the rest of us. 3) Thou art a keeper of the law, a man with a mission. Thou art possibly a family man, seeking to improve the life of your family in the best way you can. Thou enforces the law because thou believeth in justice.” 4) I am a man with a mission, a believer in truth and justice. I care for my community and want to help others do the same. This had now moved beyond metaphor into a pure empathy exercise. Time for a retrospective. Are we getting what we want from this session? Where do we want to go now?

Retrospective

I asked for the five most vocal people to form small teams. Their job was to facilitate a dialog and not speak themselves. One of them asked: what, not speak at all? I had rather meant that they did not offer input but made sure all voices were heard, but this was an idea worth exploring. Yes, facilitate in complete silence. Feedback from the facilitators was interesting. In general it improved the ability to listen. The retrospective gave us three topics to explore further, so we created a short open-space-within-open-space.

Part #4

  1. The use of metaphor in retrospectives—how would it help to ask participants to come up with a metaphor for the last sprint? What new avenues of exploration/understanding might it encourage?
  2. Exploring common metaphors, e.g. herding cats, war room, firefighting. Many of our metaphors are negative, or derogatory, creating a (perhaps) undesirable mindset. We looked for new metaphors, and the one that shone for me was a new way of looking at “cat herder” (a typical description of a project manager). I am a builder of playgrounds. This idea completely changes how one might approach their job: don’t try to manage the chaos, embrace it and support it with fresh imagination.
  3. Empathy via Invocation. Some wanted to explore this further. I have no notes for this short session.

Time expired, so the small groups dissolved. I asked those remaining at the end to practice with each other during the event, by simply offering a “you are an [object]” challenge to one another at random moments, and seeing what happened. As I left the reception at the end of the day, I offered a colleague “You are a stuffed mushroom”. I look forward to hearing what he had to say to the others in the group when we next meet.

Categories: Blogs
