
Guest Blog: Cognition … what’s it all about?

Ben Linders - Fri, 06/03/2016 - 09:37

In this guest blog post, Andrew Mawson from Advanced Workplace Associates talks about their ongoing research on cognition. The aim of that research is to provide guidelines that help knowledge workers do the right things to maximise their cognitive performance.

Cognition is just a scientific term for the functioning of the brain. Until recently, measuring the effectiveness of the brain was a difficult challenge requiring laboratory conditions and very expensive equipment. However, over the last few years, researchers have designed software that reliably measures the performance of different parts of the brain.


Retrospectives Exercises Toolbox - Design your own valuable Retrospectives

The brain has many different functions, known as ‘domains’. Five of these domains seem to matter most, and they can be measured using software.

Five Key Cognitive Domains

The primary domains are:

  • Attention: the ability to focus one’s perception on target visual or auditory stimuli and filter out unwanted distractions.
  • Executive functioning: the ability to strategically plan one’s actions, abstraction, and cognitive flexibility – the ability to change strategy as needed.
  • Psychomotor speed and accuracy: related functions (reaction time, processing speed) that deal with how quickly a person can react to stimuli and process information.
  • Episodic Memory: the ability to encode, store, and recall information. In most studies memory is further divided into recognition, recall, verbal, visual, episodic, and working memory. Each type of memory has specific tasks associated with that memory function.
  • Working Memory: the system responsible for the transient holding and processing of new and already-stored information; an important process for reasoning, comprehension, learning and memory updating.

You can probably see that cognition and these domains matter hugely if you are involved in knowledge work. If, for example, your ‘Attention’ domain is not as effective as it could be, your ability to concentrate in meetings and whilst reading could be lower than that of somebody with a better performing ‘Attention’ domain. Imagine you were in a meeting with 4 other people and you missed a vital piece of the discussion. Regardless of how good your memory might be, if the information isn’t getting as far as your memory, you won’t be able to recall it. So at some future date, when you are dealing with one of those four colleagues, they may well assume that you absorbed the same information in the meeting as they did… but in fact you didn’t. You can probably see how this can lead to confusion: ‘Was that guy in the same meeting as you and I?’

Also, the environment in which you work may matter more to you if your ‘Attention’ domain is weaker than someone else’s. You may, for instance, have a greater need for a distraction-free environment than someone whose concentration allows them to block everything else out.

You can probably see that, depending on your role, different domains have a greater significance to your effectiveness. If you are in a role where accuracy and speed are vital, perhaps as an airline pilot or an accountant, then the ‘Psychomotor speed and accuracy’ domain may be critical to you. If you are a senior leader involved in determining strategy or planning, then ‘Executive functioning’ will be more critical.

You can see pretty quickly that, in a world where the brains of your people are your key tools for generating value, the effectiveness of these domains matters enormously.

What we wanted to do in our research was to examine the academic studies on cognition from around the world, establish what makes the most difference to the performance of different parts of the brain, and then come up with guidelines that help people do the right things to maximise their cognitive performance.

So over the next 6 months we’re going to reveal the results from our study at our Cognitive Fitness webpage, providing some guidelines that you can use to improve your own cognitive performance and those of your people.

Andrew Mawson is the owner and founder of Advanced Workplace Associates. He eats, sleeps and breathes all things workplace; you can connect with Andrew at LinkedIn.

Categories: Blogs

Hmmm… What does that mean?

George Dinwiddie’s blog - Thu, 06/02/2016 - 21:46

On numerous occasions I’ve observed long-time members of the Agile community complain about misinterpretations of what Agile means, and how it is performed. Frequently this is precipitated by yet another blog post about how terrible Agile is, and how it damaged the life of the blogger. Sometimes it’s triggered by a new pronouncement of THE way to practice Agile software development, often in ways that are hardly recognizable as Agile. Or THE way to practice software development is declared post-Agile, as if Agile is now obsolete and ready to be tossed in the trash bin.

The responses are both predictable and understandable. “If they’d only listen to what we actually said.” “Of course we don’t mean that.” “That’s not the Agile I know.” “Let’s take back Agile from those who misrepresent it!” It’s frustrating when people take terms you use and mangle the ideas that they represent to you into something unrecognizable and undesirable. I empathize with people making that response.

Recently I observed a similar situation where the shoe was on the other foot. I observed a long-time member of the Agile community describe a concept from another community where I’ve spent a lot of time and effort. The description was so far off the mark that I never would have guessed the concept that was being described.

As nearly as I can tell, they had formed a working definition from context, and from that definition had rejected the concept out of hand. They rejected it so forcefully that it seemed to taint their opinion of everyone connected with that concept. Indeed, “taint” may be insufficient, as they expressed their opinion not as “this person believes…” or “this person behaves…,” but as “this person is….”

I found this profoundly sad from several angles.

The concept is one that I’ve studied and used for years, and have developed layers of understanding that grow deeper over time. The person being dismissed is one I consider a friend and a mentor. It’s someone from whom I’ve learned a great deal over time, and that learning has significantly enriched my life.

The person making these comments is also someone I consider a friend. It distresses me greatly to watch one friend disparage another. I will generally defend the friend who is not there, as I did in this case and have done with regard to the other friend in other situations. This, of course, makes me a surrogate target representing the friend who is not there, and that increases the emotional magnitude of my distress.

The behavior of the friend who was present was strikingly similar to the behavior I’ve observed that same friend rail against. In the situations where they were defending the concepts of Agile software development from what I considered unwarranted attack, I had felt a close affinity with what they were saying.

Now, seeing the same person taking the opposite role in a different context, I was having a hard time reconciling the difference in their behavior in the two situations. “They should know better than to dismiss a concept they don’t understand!”

Flashback to the year 2000. I was taking a coffee-break with a couple of colleagues at work. We were making jokes, being witty as software developers are wont to do. At one point in the conversation I used the phrase “Extreme Programming” as the punchline to a joke.

One colleague asked, “Extreme Programming? What’s that?”

“I don’t really know.” I had been researching Design Patterns on the Portland Pattern Repository and had seen the term being heatedly discussed, but had considered it noise in the way of my study of Design Patterns. The fact that there were obvious arguments about it, and that the term seemed silly on its face, had led me to dismiss it out of hand. “I guess I should find out.”

This was the start of my study of Agile software development. I don’t know why my reaction, when confronted with my ignorance, was to enquire more deeply rather than defend my ignorance. I doubt that I react in that manner all the time. That particular reaction, though, has been hugely valuable for me. In many ways, it changed the direction of my life.

It’s not the term used, the name of the concept, that counts. It’s learning the nuances of the concept, starting with “Why would someone advocate this concept?” Assuming the answer to that question is that they’re an idiot leads nowhere productive. Investigating with curiosity often does.

What could I do about people who dismiss valuable concepts out of ignorance? I don’t have a good answer for that. Perhaps ignoring the situation is the easiest non-negative response I can take. Arguing never seems to help, in my experience.

But when the shoe is on the other foot and someone suggests something that seems ridiculous at first glance, asking “Hmmm… What does that mean?” has served me better than rejection. At worst it goes nowhere and I’m left with “I don’t know.” Sometimes, however, it has opened my eyes to possibilities that I’d not yet imagined.

Categories: Blogs

Workshop outputs from “How Architects nurture Technical Excellence” - Thu, 06/02/2016 - 15:45
Workshop background

Earlier this week, I ran a workshop at the first ever Agile Europe conference organised by the Agile Alliance in Gdansk, Poland. As described in the abstract:

Architects and architecture are often considered dirty words in the agile world, yet the Architect role and architectural thinking are essential amplifiers for technical excellence, which enable software agility.

In this workshop, we will explore different ways that teams achieve Technical Excellence and explore different tools and approaches that Architects use to successfully influence Technical Excellence.

During the workshop, the participants explored:

  • What are some examples of Technical Excellence?
  • How does one define Technical Excellence?
  • The role of the Architect in agile environments
  • The broader responsibilities of an Architect working in agile environments
  • Specific behaviours and responsibilities of an Architect that help or hinder Technical Excellence

What follows are the results of the collective experiences of the workshop participants during Agile Europe 2016.

How Architects nurture Technical Excellence (slides) from Patrick Kua

Examples of Technical Excellence

  • A set of coding conventions & standards that are shared, discussed, and abided by the team
  • Introducing more formal code reviews worked wonders, code quality enabled by code reviews, user testing and coding standards, Peer code review process
  • Software modeling with UML
  • First time we’ve used an in-memory search index to solve severe RDBMS performance problems
  • If scrum is used, a good technical Definition of Done (DoD) is visible and applied
  • Shared APIs for internal and external consumers
  • Introducing ‘no estimates’ approach and delivering software/features well enough to be allowed to continue with it
  • Microservice architecture with docker
  • Team spirit
  • Listening to others (not! my idea is the best)
  • Keeping a project/software alive and used in prod through excellent customer support (almost exclusively)
  • “The art must not suffer” as attitude in the team
  • Thinking wide!
  • Dev engineering into requirements
  • Problems clearly and explicitly reported (e.g. Toyota)
  • Using most recent libraries and ability to upgrade
  • Right tools for the job
  • Frequent availability of “something” working (like a daily build that may be incomplete functionality, but in principle works)
  • Specification by example
  • Setting up technical environment for new software, new team members quickly introduced to the project (clean, straightforward set up)
  • Conscious pursuit of Technical Excellence by the team through this being discussed in retros and elsewhere
  • Driver for a device executed on the device
  • Continuous learning (discover new tech), methodologies
  • Automatic deployment, DevOps tools use CI, CD, UT with TDD methodology, First implementation of CD in 2011 in the project I worked on, Multi-layered CI grid, CI env for all services, Continuous Integration and Delivery (daily use tools to support them), Continuous Integration, great CI
  • Measure quality (static analysis, test coverage), static code analysis integrated into IDE
  • Fail fast approach, feedback loop
  • Shader stats (statistical approach to compiler efficiency)
  • Lock-less multithreaded scheduling algorithm
  • Heuristic algorithm for multi-threaded attribute deduction
  • It is easy to extend the product without modifying everything, modularity of codebase
  • Learn how to use something complex (in depth)
  • Reuse over reinvention/reengineering
  • Ability to predict how a given solution will work/consequences
  • Good work with small effort (efficiency)
  • Simple design over all in one, it’s simple to understand what that technology really does, architecture of the product fits on whiteboard
Categories: Blogs

CQRS and REST: the perfect match

Jimmy Bogard - Wed, 06/01/2016 - 22:02

In many of my applications, the UI and API gravitate towards task-oriented UIs. Instead of “editing an invoice”, I “approve an invoice”, with specialized models, behaviors and screens just for accomplishing that task. But what happens when we move from a server-side application to one more distributed, to be accessed via an API?

In a previous post, I talked about the difference between entities, resources, and representations. It turns out that by removing the constraint around entities and resources, it opens the door to REST APIs that more closely match how we’d build the UI if it were a completely server-side application.

With a server side application, taking the example of invoices, I’d likely have a page to view invoices:

GET /invoices

This page would return the table of invoices, with links to view invoice details (or perhaps buttons to approve them). If I viewed invoice details, I’d click a link to view a page of invoice details:

GET /invoices/684

Because I prefer task-based UIs, this page would include links to specific activities you could request to perform. You might have an Approve link, a Deny link, comments, modifications etc. All of these are different actions one could take with an invoice. To approve an invoice, I’d click the link to see a page or modal:

GET /invoices/684/approve

The URLs aren’t important here, I could be on some crazy CMS that makes my URLs “GET /fizzbuzzcms/action.aspx?actionName=approve&entityId=684”, the important thing is it’s a distinct URL, therefore a distinct resource and a specific representation.

To actually approve the invoice, I fill in some information (perhaps some comments or something) and click “Approve” to submit the form:

POST /invoices/684/approve

The server will examine my form post, validate it, authorize the action, and if successful, will return a 3xx response:

HTTP/1.1 303 See Other
Location: /invoices/684

The POST, instead of creating a new resource, returned back with a response of “yeah I got it, see this other resource over here”. This is called the “Post-Redirect-Get” pattern. And it’s REST.


Not surprisingly, we can model our REST API exactly as we did our HTML-based web app. Though technically, our web app was already RESTful, it just served HTML as its representation.

Back to our API, let’s design a CQRS-centric set of resources. First, the collection resource:

GET /invoices

HTTP/1.1 200 OK

[
  {
    "id": 684,
    "invoiceNumber": "38042-L-275-684",
    "customerName": "Jon Smith",
    "orderTotal": 58.85,
    "href": "/invoices/684"
  },
  {
    "id": 688,
    "invoiceNumber": "33453-L-275-688",
    "customerName": "Maggie Smith",
    "orderTotal": 863.88,
    "href": "/invoices/688"
  }
]

I’m intentionally not using any established media type, just to illustrate the basics. No HAL or Siren or JSON-API etc.

Just like the HTML page, my collection resource could join in 20 tables to build out this representation, since we’ve already established there’s no connection between entities/tables and resources.

In my client, I can then follow the link to see more details about the invoice (or, alternatively, included links directly to actions). Following the details link:

GET /invoices/684

HTTP/1.1 200 OK

{
  "id": 684,
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "shippingAddress": "123 Anywhere",
  "lineItems": [ ],
  "href": "/invoices/684",
  "links": [
    { "rel": "approve", "prompt": "Approve", "href": "/invoices/684/approve" },
    { "rel": "reject", "prompt": "Reject", "href": "/invoices/684/reject" }
  ]
}

I now include links to additional resources, which in the CQRS world are commands. And just like our HTML version of things, these resources can return hypermedia controls; or, in the case of a modal dialog, I could have embedded the hypermedia controls inside the original response. Let’s go with the non-modal example:

GET /invoices/684/approve

HTTP/1.1 200 OK

{
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "href": "/invoices/684/approve",
  "fields": [
    { "type": "textarea", "optional": true, "name": "comments" }
  ],
  "prompt": "Approve"
}

In my command resource, I include enough information to instruct clients how to build a response (given they have SOME knowledge of our protocol). I even include some display information, as I would have in my HTML version. I have an array of fields, only one in my case, with enough information to instruct something to render it if necessary. I could then POST information up, perhaps with my JSON structure or form encoded if I liked, then get a response:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 303 See Other
Location: /invoices/684
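
The fields array is enough for a generic client to build that POST body. As a sketch (standard-library Python only; the representation and names are just the examples above, not part of any established media type):

```python
import json
from urllib.parse import urlencode

# The hypothetical command resource returned by GET /invoices/684/approve
command = json.loads("""
{
  "href": "/invoices/684/approve",
  "fields": [
    { "type": "textarea", "optional": true, "name": "comments" }
  ],
  "prompt": "Approve"
}
""")

# A real client would render each declared field and collect input;
# here the user's input is simply hard-coded
user_input = {"comments": "I love lamp"}

# Build the form-encoded body from the declared fields only
body = urlencode({f["name"]: user_input.get(f["name"], "") for f in command["fields"]})
print(body)  # comments=I+love+lamp
```

The client never needs to know the shape of an “approval” up front; it only needs to understand the protocol conventions (fields, name, type), which is the point of the hypermedia controls.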

Or, I could have my command return an immediate response and have its own data, because maybe approving an invoice kicks off its own workflow:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve/3506

{
  "id": 3506,
  "href": "/invoices/684/approve/3506",
  "status": "pending"
}

In that example I could follow the location or the body to the approve resource. Or maybe this is an asynchronous command, and approval acceptance doesn’t happen immediately and I want to model that explicitly:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 202 Accepted
Location: /invoices/684/approve/3506
Retry-After: 120

I’ve received your approval request, and I’ve accepted it, but it’s not created yet so try this URL after 2 minutes. Or maybe approval is its own dedicated resource under an invoice, therefore I can only have one approval at a time, and my operation is idempotent. Then I can use PUT:

PUT /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve

If I do this, my resource is stored in that URL so I can then do a GET on that URL to see the status of the approval, and an invoice only gets one approval. Remember, PUT is idempotent and I’m operating under the resource identified by the URL. So PUT is only reserved for when the client can apply the request to that resource, not to some other one.
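To make the idempotency concrete (a sketch; a server could equally reply with 204 No Content), replaying the exact same request leaves the resource unchanged, so only the first PUT reports a creation:

```http
PUT /invoices/684/approve
comments=I love lamp

HTTP/1.1 200 OK
```

A client that isn’t sure whether its first PUT got through can therefore simply send it again.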

In a nutshell, because I can create a CQRS application with plain HTML, it’s trivial to create a CQRS-based REST API. All I need to do is follow the same design guidelines on responses, pay attention to the HTTP protocol semantics, and I’ve created an API that’s both RESTful and CQRSful.
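Tying it together, here is a client-side sketch (standard-library Python; the representation is the details example above) of picking an action link by its rel rather than hard-coding URLs:

```python
import json

# Abridged details representation from GET /invoices/684 above
details = json.loads("""
{
  "href": "/invoices/684",
  "links": [
    { "rel": "approve", "prompt": "Approve", "href": "/invoices/684/approve" },
    { "rel": "reject", "prompt": "Reject", "href": "/invoices/684/reject" }
  ]
}
""")

def link_for(representation, rel):
    # First link with a matching rel; None means the server isn't offering
    # that action right now, which is how the client learns what is allowed
    return next((l["href"] for l in representation.get("links", [])
                 if l["rel"] == rel), None)

print(link_for(details, "approve"))  # /invoices/684/approve
print(link_for(details, "pay"))      # None
```

Because the commands are themselves resources, the client’s job reduces to following links, exactly as a user follows links in the HTML version.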

Categories: Blogs

New SAFe 4.0 Introduction Whitepaper

Agile Product Owner - Wed, 06/01/2016 - 20:56

Greetings everyone!

We told you it’s coming, and many of you have been asking about it, so I’m happy to announce that the first whitepaper on SAFe 4.0 has been published, and is available for you to download at

The official title is SAFe 4.0 Introduction: Overview of the Scaled Agile Framework for Lean Software and Systems Engineering.

The white paper distills the essence of SAFe from hundreds of website pages to just twenty-five, and provides a high level overview of the Framework, including its values, principles, practices, and strategy for implementation.

The white paper should be helpful in several ways:

  • Educate leaders, managers, and executives about how to apply SAFe for enterprise-class Lean-Agile software and systems development
  • Prepare students in the basics of SAFe prior to taking a course (e.g. Implementing SAFe 4.0 with SPC4 certification, Leading SAFe 4.0, etc.)
  • Help people understand SAFe 4.0 in greater detail, especially those who haven’t taken a 4.0 SAFe class

We look forward to your comments about the white paper and how the community might leverage it to support their SAFe transformations.

Stay tuned for updates about our SAFe Distilled book, which I’m co-authoring with Dean Leffingwell and which will be published by Pearson. I’m working with our publisher to see if we can share draft chapters for community feedback.

Always be SAFe,
–Richard Knaster, SAFe Fellow

Categories: Blogs

The Agile Chip

Leading Agile - Mike Cottmeyer - Wed, 06/01/2016 - 15:04

I’m a Detroit guy and I like cars, especially fast cars, and nowadays they have computer chips you can install that make your car go faster. Imagine that, just plug in a chip and press on the accelerator and vroooom!

I’m also an agile coach and I see patterns. With most every agile transformation I see a pattern when we start to talk about velocity and sustainable pace. Some folks really zero in on just the velocity part. It feels like they just found a new chip for their car and, with visions of caffeine-infused programmers, they are thinking about hammering their right foot on the accelerator.

Indy Cars

Going fast can be a beautiful thing. With these new action cams we can watch an Indy Car Champion take a car through its paces and it’s like art in motion. It is also a beautiful thing when an agile team starts getting great gains in velocity. There is art to that as well, and the teams that are achieving great gains in velocity travelled a long hard road to get there.

Teams that are test driving and producing both Clean and SOLID code from the outset as well as having fully automated test suites are rare. These teams have well-groomed backlogs and they manage their commitments to the organization based on both previous velocity and sustainable pace. They are craftsmanship focused and continuously improving both their practices and their craft. But it doesn’t happen overnight and it certainly isn’t as easy as installing “An Agile Chip”.

Teams need an opportunity to build up their skills and reach a stable velocity before they start looking at going faster. And they need help too: well-groomed backlogs, dependency-free Epics, a clear understanding of what “done” is, and a clear roadmap for wherever they are going. All three of these (backlogs, working tested software, and the strong teams that produce them) take some time to develop.

The backlog is like the roadmap for the journey, and all of the epics and stories are like gas in the tank. Working tested software, is very much like the car we are driving, and of course the team is the driver. Yes, the team is the driver and the team presses the accelerator.

So where does that leave a manager?  Especially a hands-on manager who is used to driving the team or committing to a specific velocity?

The team still needs you to be vested and committed to their success, but the role changes from being hands on, in the car, to more of an owner in the pits. Teams need someone to clear the way for them to go fast. They need someone to help alleviate organizational impediments that can slow them down. They need the commitment, but to a new cause.

Race tracks are designed for speed. There are very few rules around how you drive; you just need to concentrate on going fast. The roadway is clear of impediments and it’s kept clear. Pit crews help, spotters, telemetry, everything around the team basically, the whole environment, is set up to go fast.

A lot of organizations are set up like a busy City Street. There may be some architecture that can be simplified or modularized.  Process overhead can often be streamlined if not eliminated.  Build and Release Management may be ripe for automation.  Testing automation can be another opportunity, and the list goes on.

A vested owner, helping clear the way for the team, can produce incredible opportunity for the team to go faster because otherwise, the roadways are basically full of impediments and organizational stop lights that slow everything to a crawl. If the environment is not set up for speed you can have the fastest of Indy Race Cars and it’s not going to get anywhere. The team is just going to sit there revving its engine and wasting energy.

Today’s agile teams are brimming with brilliant knowledge workers who don’t need traditional management.  They have pride in what they do and they love seeing what they create getting used out there in the world.  They care about their customers, their teams and their craft, and they can be trusted with the race car.  Give them the keys, clear the roadway ahead and you’ll be pleasantly surprised at where they take you.

The post The Agile Chip appeared first on LeadingAgile.

Categories: Blogs

Getting Started with Agile Retrospectives

Ben Linders - Wed, 06/01/2016 - 11:01

Agile teams do Agile Retrospectives to learn and to adapt their way of working so they can improve continuously. The article "learning and continuous improvement in agile" (leren en continue verbeteren in agile) gives an introduction to retrospectives and describes how you can create the safety that lets people be open and honest. This article describes why you do retrospectives and how you can get started with them.

Why do retrospectives?

'Insanity is doing the same things and expecting different results.' If you want to solve the problems you run into, and deliver more value to your customers, you have to change the way you do your work. That is why Agile recommends the use of retrospectives: to help teams solve problems themselves and improve!


Retrospectives Exercises Toolbox - Design your own valuable Retrospectives

What makes retrospectives different, and what benefits do they deliver? One benefit is that retrospectives hand power to the team: they empower it. Since the team members do the retrospective themselves and decide together which improvement actions they will take, there is little resistance to the changes that are needed.

Another benefit is that the actions agreed in a retrospective are carried out by the members of the team themselves: there is no hand-over! The team analyses what happened, defines the actions, and the team members do them. This is much more effective, and also faster and cheaper :-).

These benefits make Agile retrospectives a better way to implement improvement. Retrospectives are one of the success factors for using Scrum effectively. You can use different retrospective exercises to add value for the business. And retrospectives are also a great tool for creating and maintaining stable teams, helping them to work flexibly and effectively and thereby become truly Agile and Lean.

How do you get started with retrospectives?

There are several ways to introduce retrospectives in organisations. You can train Scrum masters and other people who will facilitate retrospectives (for example with a Valuable Agile Retrospectives workshop) to teach them how to do retrospectives well. In the training they learn how to run retrospectives in Agile teams using the various exercises.

I myself started doing Agile retrospectives in 'stealth mode' in my projects. I didn't call them retrospectives; I used the term 'evaluations'. Instead of waiting until the end of the project, I proposed to the teams that we do them every iteration (the project followed RUP, with iterations of four to six weeks). The actions that came out of an evaluation were picked up immediately in the next iteration. This felt very natural to the teams.

Whichever way you choose, make sure that you keep doing retrospectives frequently and that the actions that come out of them actually get done. Even when everything seems to be going well, there are always ways to improve!

Get started!

Do you want to know more about retrospectives and get going with them in your teams? Luis Gonçalves and I wrote the book Getting Value out of Agile Retrospectives. A team of volunteers translated this book into Dutch: Waardevolle Agile Retrospectives. The book helps you get benefits out of doing retrospectives. There is also an online Retrospective Exercises Toolbox that you can use to design your own valuable agile retrospectives.

On June 21 I will give the Valuable Agile Retrospectives workshop in Utrecht. In this successful workshop you will learn the why, what and how of retrospectives, and practise various ways of doing retrospectives in teams.

Note: this article is based on an article published earlier at Leren en verbeteren in Agile.

Categories: Blogs

Unix parallel: Populating all the USB sticks

Mark Needham - Wed, 06/01/2016 - 07:53

The day before Graph Connect Europe 2016 we needed to create a bunch of USB sticks containing Neo4j and the training materials. We eventually iterated our way to a half-decent approach which made use of the GNU parallel command, which I’ve always wanted to use!

But first I needed to get a USB hub so I could do lots of them at the same time. I bought the EasyAcc USB 3.0 but there are lots of other ones that do the same job.

Next I mounted all the USB sticks and renamed the volumes to NEO4J1 -> NEO4J7:

for i in 1 2 3 4 5 6 7; do diskutil renameVolume "USB DISK" NEO4J${i}; done

I then created a bash function called ‘duplicate’ to do the copying work:

function duplicate() {
  echo ${1}
  time rsync -avP --size-only --delete --exclude '.*' --omit-dir-times /Users/markneedham/Downloads/graph-connect-europe-2016/ /Volumes/NEO4J${1}/
}
# parallel runs each job in a sub-shell, so the function must be exported to be visible there
export -f duplicate
We can now call this function in parallel like so:

seq 1 7 | parallel duplicate

And that’s it. We didn’t get a 7x improvement in throughput from doing 7 in parallel, but completing all 7 took ~9 minutes compared to ~5 minutes for each one on its own. Presumably there’s still some part of the copying that is sequential further down – Amdahl’s law #ftw.
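A rough back-of-the-envelope on those numbers (my arithmetic, not from the original post): 7 sticks at ~5 minutes each is ~35 minutes serially, so 9 minutes wall-clock is about a 3.9x speedup, and Amdahl’s law then suggests only around 87% of the copying was parallelizable:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n); solve for the parallel fraction p
serial_total = 7 * 5       # ~35 minutes copying one stick at a time
parallel_total = 9         # observed wall-clock with 7 sticks at once
n = 7                      # number of parallel jobs

speedup = serial_total / parallel_total          # ~3.9x, well short of 7x
p = (1 - 1 / speedup) / (1 - 1 / n)              # parallelizable fraction
print(f"speedup {speedup:.2f}x, parallel fraction {p:.0%}")
```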

I want to go and find other things that I can pipe into parallel now!

Categories: Blogs

On Learning and Information - Elizabeth Keogh - Tue, 05/31/2016 - 17:32

This has been an interesting year for me. At the end of March I came out of one of the largest Agile transformations ever attempted (still going, surprisingly well), and learned way more than I ever thought possible about how adoption works at scale (or doesn’t… making it safe-to-fail turns out to be important).

The learning keeps going. I’ve just done Sharon L. Bowman’s amazing “Training from the Back of the Room” course, and following the Enterprise Services Planning Executive Summit, I’ve signed up for the five-day course for that, too.

That last one’s exciting for me. I’ve been doing Agile for long enough now that I’m finding it hard to spot new learning opportunities within the Agile space. Sure, there’s still plenty for me to learn about psychology, we’re still getting that BDD message out and learning more all the time, and there are occasional gems like Paul Goddard’s “Improving Agile Teams” that go to places I hadn’t thought of.

It’s been a fair few years since I experienced something of a paradigm shift in thinking, though. The ESP Summit gave that to me and more.

Starting from Where You Are Now

Getting 50+ managers of MD level and up in a room together, with relatively few coaches, changes the dynamic of the conversations. It becomes far less about how our particular toolboxes can help, and more about what problems are still outstanding that we haven’t solved yet.

Of course, they’re all human problems. The thing is that it isn’t necessarily the current culture that’s the problem; it’s often self-supporting structures and systems that have been in place for a long time. Removing one can often lead to a lack of support for another, which cascades. Someone once referred to an Agile transformation at a client as “the worst implementation of Agile I’ve ever seen”, and they were right; except it wasn’t an implementation, but an adoption. Of course it’s hard to do Agile when you can’t get a server, you’ve got regulatory requirements to consider, you’ve got five main stakeholders for every project, nobody understands the new roles they’ve been asked to play and you’re still running a yearly budgeting cycle – just some of the common problems that I’ve come across in a number of large clients.

Unless you’ve got a sense of urgency so powerful that you’re willing to risk throwing the baby out with the bathwater, incremental change is the way to go, but where do you start, and what do you change first?

The thing I like most about Kanban, and about ESP, is that “start from where you are now” mentality. Sure, it would be fantastic if we could start creating cross-functional teams immediately. But even if we do that, in a large organization it still takes weeks or months to put together any group that can execute on the proposed ideas and get them live, and it’s hard to see the benefits without doing that.

There’s been a bit of a shift in the Agile space away from the notion that cross-functional teams are necessarily where we start, which means we’re shifting away from some of the core concepts of Agile itself.

Dan North and Chris Matts, my long-time friends and mentors, have been busy creating a thing called Business Mapping, in which they help organizations match their investments and budgets to the capacity they actually have to deliver, while slowly growing “staff liquidity” that allows for more flexible delivery.

Enterprise Services Planning achieves much the same result, with a focus on disciplined, data-driven change that I found challenging but exciting: firstly because I realise I haven’t done enough data collection in the past, and secondly because it directs leaders to trust maths, rather than instincts. This is still Kanban, but on steroids: not just people working together in a team, but teams working together; not just leadership at every level, but people using the information at their disposal to drive change and experiment.

The Advent of Adhocracy

Professor Julian Birkenshaw’s keynote was the biggest paradigm shift I’ve experienced since Dave Snowden introduced me to Cynefin, and those of you who know how much I love that little framework understand that I’m not using the phrase lightly.

Julian talks about three different ages:

The Industrial Age: Capital and labour are scarce resources. Creates a bureaucracy in which position is privileged, co-ordination achieved by rules, decisions made through hierarchy, and people motivated by extrinsic rewards.

The Information Age: Capital and labour are no longer scarce, but knowledge and information are. Creates a meritocracy in which knowledge is privileged, co-ordination achieved by mutual adjustment, decisions made through logical argument and people motivated by personal mastery.

The Post-Information Age: Knowledge and information are no longer scarce, but action and conviction are. Creates an adhocracy in which action is privileged, co-ordination is achieved around opportunity, decisions are made through experimentation and people are motivated by achievement.

As Julian talked about this, I found myself thinking about the difference between the start-ups I’ve worked with and the large, global organizations.

I wondered – could making the right kind of information more freely available, and helping people within those organizations achieve personal mastery, give an organization the ability to move into that “adhocracy”? There are still plenty of places which worry about cost per head, when the value is actually in the relationships between people – the value stream – and not the people as individuals. If we had better measurements of that value, would it help us improve those relationships? Would we, as coaches and consultants, develop more of an adhocracy ourselves, and be able to seize opportunities for change as and when they become available?

I keep hearing people within those large organizations make comments about “start-up mindset” and ability to react to the market, but without having Dan and Chris’s “staff liquidity”, knowledge still becomes the constraint, and without having quick information about what’s working and what isn’t, small adjustments based on long-term plans rather than routine experimentation around opportunity becomes the norm.

So I’m going off to get myself more tools, so that I can help organizations to get that information, make sense of it, and create that flexibility; not just in their products and services, but in their changes and adoptions and transformations too.

And I’ll be thinking about this new pattern all the time. It feels like it fits into a bunch of other stuff, but I don’t know how yet.

Julian Birkenshaw says he has a book out next year. I can’t wait.

Categories: Blogs

Empathy and the Sponsored User

Empathy is an important attribute in Design Thinking. In order to solve our customer's problems, we really need to understand them. We need to walk in their shoes. But there's a limit to how far we can take this. We can spend hours talking to an astronaut, but we will never truly understand what it's like to walk in space.

Agile has this idea of the Product Owner, but you don't see much written in agile about empathy. One approach that can help you get past the empathy hurdle mentioned above is to find a key user (or users) to be part of your team. Some organizations call this a Sponsored User: the organization leading the project sponsors this person's participation in order to get their direct input into the product.

The sponsored user becomes one part of the multi-disciplinary team. Their input is important, but it isn't the only input. While they may understand the customer perspective, you need to balance all the project constraints, especially the time and cost it may take to implement some of the sponsored user's ideas. Don't lose sight of your minimum viable product (MVP) in trying to make the sponsored user happy.

You may also have more than one sponsored user, depending on the breadth of the solution you are trying to provide. If your current release has two or three major themes or epics, you could have a different sponsored user for each epic. Contrast this to the idea of having a single product owner responsible for the overall solution.

So when empathy isn't enough, make the user part of the team in order to keep the direction of your product moving the right way.
Categories: Blogs

Using RabbitMQ To Share Resources Between Resource-Intensive Requests

Derick Bailey - new ThoughtStream - Mon, 05/30/2016 - 13:30

A question was asked on StackOverflow about managing long-running, resource intensive processes in a way that does not hog up all resources for a given request. That is, in a scenario where a lot of work must be done and it will take a long time, how can you have multiple users served and handled at the same time, without one of them having to wait for their work to begin?


This can be a difficult question to answer, at times. Sometimes a set of requests must be run serially, and there’s nothing you can do about it. In the case of the StackOverflow question, however, there was a specific scenario listed that can be managed in a “fair” way, with limited resources for handling the requests.

The Scenario: Emptying Trashcans

The original question and scenario are as follows:

I’m looking to solve a problem that I have with the FIFO nature of messaging severs and queues. In some cases, I’d like to distribute the messages in a queue to the pool of consumers on a criteria other than the message order it was delivered in. Ideally, this would prevent users from hogging shared resources in the system. Take this overly simplified scenario:

  • There is a feature within an application where a user can empty their trash can
  • This event dispatches a DELETE message for each item in trash can
  • The consumers for this queue invoke a web service that has a rate limited API

Given that each user can have very large volumes of messages in their trash can, what options do we have to allow concurrent processing of each trash can without regard to the enqueue time? It seems to me that there are a few obvious solutions:

  • Create a separate queue and pool of consumers for each user
  • Randomize the message delivery from a single queue to a single pool of consumers

In our case, creating a separate queue and managing the consumers for each user really isn’t practical. It can be done but I think I really prefer the second option if it’s reasonable. We’re using RabbitMQ but not necessarily tied to it if there is a technology more suited to this task.

Message Priorities And Timeouts?

As a first suggestion or idea to consider, the person asking the question talks about using message priorities and TTL (time-to-live) settings:

I’m entertaining the idea of using Rabbit’s message priorities to help randomize delivery. By randomly assigning a message a priority between 1 and 10, this should help distribute the messages. The problem with this method is that the messages with the lowest priority may be stuck in the queue forever if the queue is never completely emptied. I thought I could use a TTL on the message and then re-queue the message with an escalated priority but I noticed this in the docs:

Messages which should expire will still only expire from the head of the queue. This means that unlike with normal queues, even per-queue TTL can lead to expired lower-priority messages getting stuck behind non-expired higher priority ones. These messages will never be delivered, but they will appear in queue statistics.

This should generally be avoided, for the reasons mentioned in the docs. There’s too much potential for problems with messages never being delivered.

A TTL (timeout) is meant to tell RabbitMQ that a message no longer needs to be processed – or that it should be routed to a dead letter queue so that it can be processed by some other code.

Priorities may solve part of the problem, but would introduce a scenario where files never get processed. If you have a priority 1 message sitting back in the queue somewhere, and you keep putting priority 2, 3, 5, 10, etc. into the queue, the 1 might not be processed.

For my money, I would suggest a different approach: sending delete requests serially, one file at a time.

Single Delete-File Requests

In this scenario, I would suggest taking a multi-step approach to the problem using the idea of a “saga” (aka a long-running workflow object).

When a user wants to delete their trashcan, you send a single message through RabbitMQ to a service that can handle the delete process. That service would create an instance of the DeleteTrashcan saga for that user’s trashcan.

The DeleteTrashcan saga would gather a list of all files in the trashcan that need to be deleted. Then it would start doing the real work by sending a single request to delete the first file in the list.

With each request to delete a single file, the saga waits for a response to say the file was deleted.

When the saga receives the response message to say the previous file has been deleted, it sends out the next request to delete the next file.

Once all the files are deleted, the saga updates itself (and any other part of the system, as needed) to say the trash can is empty.
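To make the flow concrete, here is a minimal sketch of the saga as a state machine. It is deliberately broker-agnostic and hedged: `DeleteTrashcanSaga`, `send_delete_request` and the method names are all invented for illustration, with the actual RabbitMQ publishing and consuming abstracted behind a callback:

```python
class DeleteTrashcanSaga:
    """One instance per empty-trashcan request. It keeps only one
    delete-file message in flight and only advances when the
    confirmation for the previous file arrives."""

    def __init__(self, user_id, files, send_delete_request):
        self.user_id = user_id
        self.pending = list(files)        # files still to be deleted
        self.send = send_delete_request   # publishes a delete-file message
        self.complete = False

    def start(self):
        self._send_next()

    def on_file_deleted(self, filename):
        # Called when the confirmation for `filename` comes back.
        if self.pending and self.pending[0] == filename:
            self.pending.pop(0)
        self._send_next()

    def _send_next(self):
        if self.pending:
            self.send(self.user_id, self.pending[0])
        else:
            self.complete = True  # trashcan is now empty
```

Because each saga has at most one delete-file request in the queue at a time, requests from different users naturally interleave rather than one user's backlog blocking everyone else.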

How This Handles Multiple Users

When you have a single user requesting a delete, things will happen fairly quickly for them. They will get their trash emptied soon, as all of the delete-file requests in the queue will belong to that user:

u1 = User 1 Trashcan Delete Request

When there are multiple users requesting a delete, the process of sending one delete-file request at a time means each user will have an equal chance of having their request handled next.

For example, two users could have their requests intermingled, like this:

u1 = User 1 Trashcan Delete Request

u2 = User 2 Trashcan Delete Request

Note that the requests for the 2nd user don’t start until some time after the requests from the first user. As soon as the second user makes the request to delete the trashcan, though, the individual delete-file requests for that user start to show up in the queue. This happens because u1 is only sending 1 delete-file request at a time, allowing u2 to have requests show up as needed.

Over-all, it will take a little longer for each person’s trashcan to be emptied, but they will see progress sooner rather than later. That’s an important aspect of people thinking the system is fast / responsive to their request.

But, this setup isn’t without potential problems.

Optimizing: Small File Set vs Large File Set

In a scenario where you have a small number of users with a small number of files, the above solution may prove to be slower than if you deleted all the files at once.

After all, there will be more messages sent across RabbitMQ – at least 2 for every file that needs to be deleted (one delete request, one delete confirmation response).

To optimize this, you could do a couple of things:

  • Have a minimum trashcan size before you split up the work like this; below that minimum, you just delete it all at once
  • Chunk the work into groups of files, instead of one at a time (maybe 10 or 100 files would be a better group size, than 1 file at a time)

Either (or both) of these solutions would help to improve the over-all performance of the process by batching the work and reducing the number of messages being sent. You would need to do some testing in your actual scenario to see which of these (or maybe both) would help and at what settings, though.
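The batching idea can be sketched in a few lines. This is only an illustration: `plan_deletes`, the 100-file threshold and the chunk size of 10 are made-up values that would need tuning through the testing mentioned above:

```python
def plan_deletes(files, delete_all_threshold=100, chunk_size=10):
    """Return the batches of files to delete, one batch per request.

    Small trashcans are deleted in one go; larger ones are split into
    fixed-size chunks so no single user monopolises the queue."""
    if len(files) <= delete_all_threshold:
        return [files]
    return [files[i:i + chunk_size] for i in range(0, len(files), chunk_size)]
```

The saga would then send one delete request per batch instead of one per file, cutting the message count roughly by the chunk size.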

Beyond the batching optimization, there is at least one additional problem you may face – too many users.

The Too-Many-Users Problem

If you have 2 or 3 users requesting deletes, it won’t be a big deal. Each will have their files deleted in a fairly short amount of time.

But if you have 100 or 1000 users requesting deletes, it could take a very long time for an individual to get their trashcan emptied – or even started! 

In this situation, you may need to introduce a higher-level controlling process through which all requests to empty trashcans would be managed.

This would be yet another Saga to rate-limit the number of active DeleteTrashcan sagas.

For example, if you have 100 active requests for deleting trashcans, the rate-limiting saga may only start 10 of them. With 10 running, it would wait for one to finish before starting the next one.
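That rate-limiting saga can be sketched the same way. Again a hedged illustration with invented names; the limit of 10 is the example figure from the paragraph above:

```python
from collections import deque

class TrashcanRateLimiter:
    """Keeps at most `limit` DeleteTrashcan sagas active; further
    requests wait in a queue until an active saga finishes."""

    def __init__(self, limit=10, start_saga=None):
        self.limit = limit
        self.start_saga = start_saga  # callback that starts a saga for a user
        self.active = set()
        self.waiting = deque()

    def request(self, user_id):
        if len(self.active) < self.limit:
            self.active.add(user_id)
            self.start_saga(user_id)
        else:
            self.waiting.append(user_id)

    def on_saga_finished(self, user_id):
        self.active.discard(user_id)
        if self.waiting:
            self.request(self.waiting.popleft())
```

Waiting users are started strictly in arrival order, so nobody is starved even when the system is saturated.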

This would introduce some delay to processing the requests. But it has the potential to reduce the over-all time it takes to delete an individual trashcan.

Again, you would need to test your actual scenario to see if this is needed and see what the limits should be, for performance reasons.

Scaling Up and Out

One additional consideration in the performance of these resource-intensive requests is that of scaling up and out.

Scaling up is the idea of buying “larger” (more resources) servers to handle the requests. Instead of having a server with 64GB of memory and 8 CPU cores, perhaps a box with 256GB of memory and 32 CPU cores would perform better and allow you to increase the number of concurrent requests.

While scaling up can increase the effectiveness of an individual process instance, it becomes expensive and has limitations in the amount of memory, CPU, etc.

Scaling out may be a preferable method in some cases, then.

This is the idea that you buy many smaller boxes and load balance between all of them, instead of relying on one box to do all the work. 

For example, instead of buying one box with 256GB memory and 32 cores, you could buy 10 boxes with 32GB of memory and 4 cores each. This would create a total processing capacity that is slightly larger than the one box, while providing the ability to add more processing capacity by adding more boxes. 

Scaling your system up and/or out may help to alleviate the resource sharing problem, but likely won’t eliminate it. If you’re dealing with a situation like this, a good setup to divide and batch the processing between users will likely reap rewards in the long-run. 

Many Considerations, Many Possible Solutions

There are many things that need to be considered in resource intensive requests.

  • The number of users, the number of steps that need to be taken
  • The resources used by each of those steps
  • Whether or not individual steps can be done in parallel or must be done serially
  • … and so much more

Ultimately, the solution I describe here was not suitable for the person that asked, due to some additional constraints. They did, however, find a good solution using networking “fair queueing” patterns. 

You can read more about fair queueing with this post and this video.

Because each scenario is going to have different needs and different challenges, you will likely end up with a different solution, again. That’s ok.

Design patterns, such as Sagas and fair queueing, are not meant to be all-in-one solutions. They are meant to be options that should be considered.

Having tools in your toolbox, like Sagas and batch processing, however, will give you options to consider when looking at these situations. 

Categories: Blogs



Classical Agile Estimating

Agile Estimator - Sun, 05/29/2016 - 17:08

Release Burn Down Chart

Is agile too new to talk about “classical” agile estimating? Not really. The agile manifesto was copyrighted in 2001. I became aware of agile development in 2003. In either case, agile has been around for more than 10 years. This means that there has been time for people to develop true expertise in agile software development. It also means that it is well past time to consider some refinements to the parts of agile development that do not work as well as they could. The focus of this blog is obviously the estimating portions of the agile development process. I will summarize them here. I will also explain why some of the techniques do not work as well as we thought they would. I will also talk a little bit about some fundamental principles, like user stories. This is done to ensure that we are all picturing the same things when we talk about estimating them. Most of what is written will apply to any agile approach, but some of the terminology is based on Scrum. Face it, Scrum is to agile methodologies as Microsoft Office is to productivity software. Like it or not, it is the clear winner.

Scrum, and every other agile development methodology, is driven by user stories. They usually have the format: as a type of user, I will be able to perform some action. For example, as a salesman, I can input customer information. Some people elaborate on a user story by adding a reason. For example, as a salesman, I can input customer information so that I can track my interactions until the sale closes. In 2003, we stressed that agile developers should be co-located with their users. The user story cards were simply used to spur conversations between the developers and the users. They were written on cards because they were not intended to be permanent documents. Today, most agile development is distributed. User stories are kept in Microsoft Office documents or in specialized agile planning tools. However, the principles are still the same. The user stories are not intended to be requirements documents. They are really more important as planning documents. User stories are explained in two of Mike Cohn’s books: User Stories Applied and Agile Estimating and Planning. The second book talks about how they are used for project estimating.

In Agile Estimating and Planning, Mike Cohn gives an interesting case study where a software company undertakes to develop a new game. They establish an initial set of user stories. However, their estimate is more driven by their experience developing a different game. For the most part, they are using the same development team and development technologies. The rules of the game are different, but they can extrapolate. The case study uses games that are not completely explained. But for illustration purposes, suppose the team had just programmed a checkers-playing game and now it was going to attempt a chess game. Checkers has two different piece types: a regular piece and a piece that has been crowned a king. Chess has six different types of pieces. Some have different types of moves depending upon board configurations, like the castling of a king. In any case, there would be some ways in which you could make analogies between the two games in order to estimate how long the new game might take to develop. This is estimating by analogy, but it only works if you have similar projects, developed by similar teams in a similar fashion. It does not help a product owner sitting in his office with a few dozen user stories before the team has been established or the initial technical decisions have been made.

In the case study just mentioned, the team is in place and they have experience with a similar project. Developing initial user stories is not too difficult. This is also true when planning an enhancement project. The underlying application is available. It is somewhat straightforward to write user stories to describe the enhancements. But what about a new application? What about the time before the team is hired or assigned? At this point, the user stories must be written by the users. They can be led through the process by the product owner. In any case, developing the initial user stories is hard, and estimating them is harder. My thesis, An approach to Early Lifecycle Estimating for Agile Projects, describes a collaborative technique called Elf Poker which develops user stories and an estimate. Note that my thesis committee would not let me call it a game. However, most teams in this situation use some type of workshop technique to develop the user stories and then use planning poker to assign story points to each of the user stories.

User stories are usually estimated in terms of story points. Story points represent the amount of effort that will be necessary to implement a user story. However, it is not measured in hours or any other unit of time. It is merely a relative measure. For example, if one story is assigned 8 story points, then a story that could be implemented in one-half of the time would be 3 or 4 or 5 story points. Why the imprecision associated with story points? It is by design. Story points can only be 0 (for extremely small), 1, 2, 3, 5, 8, 13, 20, 40, or 100. Some practitioners use a value of 4 instead of both 3 and 5. In either case, the objective is to have relative sizes, but not to bog down trying to assign values like 25 vs. 30. This is usually more precision than the team can realistically assign anyway. Agile teams usually assign story points by playing a game like planning poker. Each player gets cards with the possible story point values from above. Each round begins by having someone read a story card, the team asks questions and discusses their assumptions, and then each person places the card corresponding to their choice for story points in front of them. When everyone has picked a value, the cards are turned over. In the rare cases where everyone agrees, the value is assigned. Other times, the people with the lowest and highest values explain what they thought and then everyone puts out their possibly new values. This may happen two or three times in an effort to reach consensus. If no consensus is reached, the dealer chooses a value by averaging the players’ choices or by taking the most popular answer.
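When the dealer falls back to averaging, the result still has to land on one of the allowed values. A small sketch of that last step (the scale is the one listed above; the tie-breaking rule of rounding up is my own assumption, not part of any standard):

```python
STORY_POINTS = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def resolve_estimate(votes):
    """Snap the average of the players' votes to the nearest allowed
    story-point value; on a tie, take the larger value (err on caution)."""
    average = sum(votes) / len(votes)
    return min(STORY_POINTS, key=lambda p: (abs(p - average), -p))
```

So votes of 3, 5, 5 and 8 average to 5.25 and resolve to 5, while a split 3/5 table resolves upward to 5.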

In agile development, an application is developed in a series of releases. I have lost track of the number of releases that Microsoft Excel has been through. No one knows how many more to expect. A release requires a certain set of user stories to be implemented. Each story has an estimate in story points. Thus the release requires a certain number of story points to be implemented. Each release is developed in a series of iterations or sprints. The sprints are fixed in terms of calendar time, usually one or two weeks. If sprints in your organization are two weeks long, then when the two weeks are done, the sprint is done. One facet of agile estimating is to decide how many sprints will be necessary to implement the next release. If you are planning to complete 10 story points worth of user stories in a sprint, the product owner chooses the stories to work on. If all the stories are not completed, then the unfinished ones go back into the product backlog. If they are done early, the product owner selects additional stories to be worked on. In any case, if you knew the number of story points that would be implemented in each sprint, you could develop a burn down chart like the one at the top of this post. You would begin with the number of story points that had been implemented in the completed iterations. Then you would forecast the number that would be delivered in future sprints. As Joe Bergin said in our Pace University agile courses years ago, “the best forecast of tomorrow’s weather is today’s.” Use the velocity from your most recent sprint to figure out how many more sprints will be required.
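That “today’s weather” rule reduces to dividing what is left by the most recent velocity. A minimal sketch (the numbers in the example are invented for illustration):

```python
import math

def sprints_remaining(points_left, last_sprint_velocity):
    """Forecast the remaining sprints from the most recent sprint's
    velocity, per the 'tomorrow's weather is today's' rule."""
    return math.ceil(points_left / last_sprint_velocity)

# e.g. 55 story points left in the release, 10 points completed
# last sprint -> forecast 6 more sprints
print(sprints_remaining(55, 10))  # 6
```

The ceiling matters: a partially filled final sprint is still a sprint on the calendar.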

Each sprint must also be planned. Developers are typically not tasked with implementing user stories. The user stories are decomposed into technical tasks that must be accomplished. Each technical task is assigned to a developer or two. There is an ideal time associated with each task. Kent Beck defined ideal time as the amount of time the task would “take without distractions or disasters.” No one can do 80 hours of task work in a 2 week sprint. The team must figure out what its velocity, or number of ideal hours accomplished, has been in previous sprints and apply that number to this sprint. Some teams develop a burn down chart similar to the above, with cumulative task hours remaining on the vertical axis and number of days in the sprint on the horizontal one.

Sometime during the sprint, there is a point where the team looks at their backlog of user stories. This is usually a meeting called story time or story grooming. The development teams will straighten out the user stories. Sometimes they break a story into smaller stories because they understand the situation better. They will re-estimate the stories. This step is necessary because the team must validate that the sprint they are involved with is consistent with the needs of the release as a whole. Development teams usually hate this step. At the very least, it distracts them from the sprint that they are focused on. Even worse, it calls on them to redo their estimates. The developers usually do not like doing estimates. They are usually not good at it. There are two reasons that they are not usually good at it. First, they are trained developers, not trained estimators. Second, because they are on the team, they can fall victim to a phenomenon called groupthink. They cannot be depended on to deliver an impartial estimate.

Groupthink can best be illustrated with a personal story. Years ago, I was asked to look at a large development project and decide if and when it might be completed. As part of my process for doing this, I would meet with the team leads and try to get a feel for the functionality that they were working on. Often, I did not know the team leads despite the fact that they worked for the same company that I did.  They often worked for other offices. However, I knew one of the leads. He was intelligent and I felt I could depend on him to be truthful. As I interviewed him, I realized that he did not understand the functionality he was attempting to develop. This is not unheard of. He realized the same thing. At that point, he said that it was too bad that I was not going to be in town the following week. I asked if he was having some meeting to establish the functionality. No, he said, the system would be in production and he could show me how it worked. He was a victim of groupthink. The team bought into this utterly impossible schedule. No one could picture anything else. I figured they were a year away from production. The client killed the project in five more months.

Retrospectives are part of the agile process. We must look at the way that we develop systems and decide what is working and what is not. It is time to look at the estimating portions of agile development. Do they need to be repaired? If they are not broken, then there is no need to fix them. Or, are they one of the distractions that prevent us from getting 80 hours of task work done during a two week sprint? Is estimating always necessary? When it is necessary, who should do it? How? Please let me know what your teams are doing that is different from the approach described here.

Categories: Blogs

What Is Success, In Software?

Derick Bailey - new ThoughtStream - Fri, 05/27/2016 - 18:23

When we look at a project and attempt to determine whether or not it is successful, there are many aspects and considerations that have to be judged. It can be difficult to judge, as well – and often depends on your perspective in relation to the software.

Are you the customer? Or the developer? The marketer, or the support staff?

Ultimately, the perspective that we have will shape our focus, and having the right focus can make or break the success of our software.

So, what is success in software?

Find out in this episode of Thoughts On Code.

Categories: Blogs

A Dream Deferred

This is a hard blog post to write. I’ve actually pondered for some time whether I should write this down and publish it or just bottle it up and deal with it like we did with a similar situation two and a half years ago. I’m not one to shame companies, stir the pot and make a scene. But I still feel some seething anger about what I perceived as discrimination then, and I feel just as angry about this new situation I’m writing about today. After some reflection I’ve decided to simply tell this story, and my hope is that, while venting, I’ll perhaps find some positive resolution. You may say I’m twisting events or that I’m over-reacting. You may put my wife’s character and capabilities into question. It is what it is, and sometimes I think it’s worth it to just share a story. To put today’s story into context I think it is best to start from the beginning, talk about the experience I bottled up and then describe today’s scenario. I’ll draw my own conclusion while you may draw your own.

My wife is a very ambitious woman. Born in a refugee camp to two parents who met after losing their loved ones under the Khmer Rouge regime, she was born a fighter. She survived a bout of lung cancer when she was only two years old and to this day has only one lung. She graduated college with a Bachelor’s in Mathematics and in Computer Science. When we worked together I was easily attracted to her ambition, intelligence and genuine concern for others. She always gives a twenty dollar bill to the homeless on the street. When she encounters a veteran or military personnel in uniform in public she always thanks them for their service. But I always loved how great she was to work with… we had so much fun attending conferences together and even gave a joint presentation on Behavior Driven Development with Scala at Strange Loop back in 2010.

Where I’m going with this is that my wife has very specific career goals. Sure, it can be fun to be an engineer, but at one point or another one has a natural desire to move up. The employer we both worked at in the past unfortunately had a very flat structure with little upward mobility. So she began hunting and landed an engineering team lead position at another local company. She spent her days mentoring junior developers on practices like test driven development, object oriented design, patterns, etc. She also provided career guidance and even sent one of her developers to management training and put him in charge of some projects to give him experience. As a side story, he went on to become a team lead himself who I hear has been doing a pretty good job. Suffice it to say, she really enjoyed this position and often said she was “living the dream.”

Conflict and Discord

Looking back, I think the source of the conflict between Than and her manager was sometimes the fact that Than’s ambition is very aggressive and she’s not above doing what it takes to get things done. An amusing story: when her team had five different product owners each say “this is a top priority from our CEO” in different meetings, she got back to her desk, picked up the phone, called the company CEO directly and chatted with him about how all five priorities couldn’t be the top one and needed to be prioritized. He really appreciated her bluntness and worked with her to re-prioritize a bit. Her manager later commented that next time she should go through her.

It probably also didn’t help that her team was very strong willed and unafraid to go against Cargo Cult practices and shake up the status quo. But I always heard good things independently from people on her team. They really looked up to her and respected how much she stood up for them. She never backed down and was always willing to fight for her team when she had to. She never got a bad review and members of the executive team really liked her. Her manager once confided in her that sometimes she felt somewhat intimidated by Than because of her knowledge, experience and aggressiveness.

Something Strange Comes This Way

Than’s team kept growing and at one point was the largest team. I think they had a good ten or twelve people, and I know at least two or three had specifically requested a transfer to be on her team. However, as other teams began to work on new projects, members started being “borrowed” off her team. It began downsizing. Six. Four. Two. This started becoming a little stressful to her. What began as an innocuous “Brian has great knowledge of the LMD subsystem so we’ll loan him out to this project to rewrite LMD” had trickled down to no team projects… just support tickets. My wife sensed something was up… perhaps a downsize? She told her manager in her one on one that if there was no reason for her team to exist and they were keeping it in place just to let her keep her position, they should just disband it. And her manager did, with a sigh of relief.

I keep looking back on this incident. Out of six or so teams, the only team to be downsized and eventually disbanded was the one led by a strong, female, dark skinned Asian. None of the other Caucasian male leads (who, I might add, had a lot less experience than my wife does) had this situation closing in on them. However, I think I could have accepted it if they had used this opportunity to promote my wife to a position better suited for her. Perhaps the Agile Coach position, to transform the department the same way she had transformed her team? They were looking to add that position, and even the CTO mentioned that he thought she’d be a good fit.

In what will go down as perhaps one of the most puzzling “promotions” I’ve ever witnessed, she was moved to a Business Analyst position. While this position was dressed up as a promotion (after all, her salary remained the same, right?) the truth is that she’d be on the same level as her previous direct report: a business analyst she had successfully trained. I still find this highly puzzling; perhaps her manager thought she’d be a good fit because she did so well training her business analyst? Her days were pretty much filled with nothing but business analyst tasks and continuing to work the support queue her team used to own (but all by herself).

It should come as no surprise that this was a miserable state of affairs, and after about two or three months she turned in her resignation letter. Like I said, Than is very ambitious and is not content to sit in a dead end. And we could both see that this recent “promotion” was nothing more than a dead end. We moved on. She found an agile coaching position two hours east in St. Louis and gladly took it as an opportunity to move forward in her career.

Over that year I kept feeling anger but remained cool. “Cooler heads always prevail” is the motto I’ve always lived by. But I cannot help but keep looking back and wondering. Was this what discrimination looks like? She never had a bad review. Everyone on her team thought she was fantastic. Yet at the end of the day she… a dark skinned Asian female who had clawed her way up from poverty, the daughter of elderly refugees… was the only team lead to face a demotion. I was so mad. I wanted to breathe smoke. I wanted to grab someone and yell “She deserves better than that. For all her hard work, this just is not how you should treat someone!”

But every time I reflect on the situation, it is a permanent gray area. Maybe it was discrimination. Maybe it was just dumb luck. If we cut it another way, perhaps a team had to be cut and it just had to be hers. Why should another team lead have to suffer just because they were born white and male? There’s also the nagging feeling that of all the team leads, Than was the one who came into conflict the most with her manager. Anyhow, this whole situation was topped off with Than’s mother passing away and, three weeks after the funeral, her manager’s response to her resignation letter was “I knew this was coming sooner or later.”

Oh well. As Frank Underwood would say, I don’t cry over what happened in kindergarten.

Cooler Heads Prevail

Than loved her new job. She felt like she was making a difference again. As an engineering coach she put together classes to teach TDD, patterns and all that good stuff. She worked with various teams across multiple data centers to organize their work into backlogs and remove road blocks. She got lots of praise from upper management. Her career was moving again. She even showed me a letter, written by someone who had attended her classes, stating that he was able to take what he learned in her class and land a different job with his salary doubled. It was really touching to see how much of a difference she was making in others’ lives.

We did have the difficulty of her living in St. Louis during the week and being home only on the weekend, but we worked through it. Richard lived with her in St. Louis while Caitlynn and I spent time together working on her speech issues. We cherished our time together as much as possible on the weekends and, when the school schedule allowed it, Caitlynn and I would pack up and drive down to St. Louis and I’d work from T-Rex or a random coffeeshop and meet up with her afterwards. Sure it was hard, but we made it work.

Unfortunately the contract took the turn that some contracts just do. The consulting company was bought out by a larger company. The contract’s future was put into doubt. If she remained she might be looking at having to consult on site somewhere very far away. Just to give an idea of where she might wind up, the other coach she worked with was sent to Minnesota. After some clever accounting (more on that one day) we made it so that she could simply take the summer off to rebalance and figure out what she wanted to do next. Before we did this though she did take an interview at another company. As she walked out to her car the recruiter called and told her that they were very impressed. The client would love to build a team around her. When could she start? We discussed it at length and in the end she decided it’d be best to take the summer off so she could refocus. A small sabbatical of sorts.

Summer Revelations

That summer Than came to an interesting revelation that really wasn’t quite a revelation given her entire life to this point: she was just way too ambitious to be a stay at home mom. Now this isn’t meant in any shape to be an attack on those who do choose to stay at home. But she just couldn’t tolerate not bringing income into the household. I kept trying to convince her to perhaps work on a startup idea or something, but she just couldn’t settle into a completely work-free situation for the first time in her life. Her parents were too elderly to work, and when she turned 18 she became the sole provider of her household. She moved them out of Section Eight housing and into a trailer while working at Target and attending college. And after she landed her first job out of college, she moved them, at the age of 23, into their first new construction home. Being without a job and not being the breadwinner of the family was a completely alien experience for her.

At the end of the summer she decided to re-enter the field. As luck would have it, around this time the same recruiter who had tried to hire her before the summer started reached out. “Are you still available?” They still hadn’t filled that position and still felt that Than was the perfect fit. I drove down to St. Louis with her and worked on my laptop in a Starbucks while she met with the recruiter and Director of Engineering. They spoke for about an hour and were all smiles and excitement. As we left Than offered to buy me a nice bottle of scotch to celebrate, she had accepted their offer and was going to head up a Quality Engineering team at this new client. They’d build a team around her and after a trial period convert her to a full time manager.

Life Goes On

The remainder of the year was somewhat uneventful. She loved her new job and her new co-workers. With the salary she was now commanding she decided to simply drive back and forth (a total commute of four hours) every day, because moving her career forward was worth it. We did look at houses in St. Louis, but in the end we just really loved our home and our daughter loved her school and friends. So we stayed put and she continued commuting. The engineering team had some growing pains here and there but they worked through them. Her director became VP of engineering. We attended dinners and bourbon tastings with the product owners she worked with. Life was good.

After some time, her contract reached the point where she was able to convert over to full time. HR pushed really hard to get her to convert, but she kept holding out. She wasn’t sure if she wanted to put roots down or not. We discussed it, and in the end she decided that, well, it had been a nice place to work. It paid well. And they did have that sweet $10,000 signing bonus (repayable if you leave earlier than two years) that could definitely help with our deck restoration project. Plus it was a true management position that she could grow in. After much deliberation she signed the employment offer and returned it. This is where things took a distinct turn for the worse.

A New Director

Only a week after she converted to a full employee, she received the news. A certain rockstar programmer that her employer had been trying to recruit for some time had finally accepted an offer. He’d take on the VP of Engineering’s previous director role. She was a bit uneasy about this, because her VP had told another employee that if he kept up the good work he might be transitioned into this role, and this seemed like a broken promise. She was also a little displeased that the team was 100% uninvolved with his hiring process, but after meeting with him for lunch she walked away feeling that this guy was pretty alright.

By the new director’s second day he told Than that he didn’t really see a need for her team and would be disbanding it. She’d convert to an engineer, and he told her, straight up, that he thought she’d do well in engineering. When she told him this would be quite a step backwards, since for the past 4 years she’d been grooming herself for management, he told her “Well, if you do good and prove yourself there might be a path forward for some kind of promotion in the future.” When she pulled up that evening I met her in the driveway and saw she still had tears in her eyes. She felt cheated. Lied to. To take a management position and have the rug pulled out from under her only three weeks later. Eleven years of engineering experience, four years leading teams and coaching, and to be told in the end that she needs to prove herself. What more does one have to prove?

Again I have a gnawing feeling. I want to push it out of my head, I want to make the excuses. “Flat organization structures are all the rage these days,” I think. Maybe he’s just doing this because that’s his philosophy. Then more circumstantial evidence builds up. None of the other managers are being demoted and my wife feels like she’s being picked on. She’s texted multiple teammates asking if she’s being petty by pushing back against her demotion. They’ve all told her not at all and even added that they’re surprised she’s not fighting back harder.

Are we coming face to face with discrimination again? Am I imagining things? I want to believe that her director is just oblivious to what he’s doing to her. It’s easy to meet someone, so far removed from their engineering origins, and conclude that they’re not that smart. That they have something to prove. He doesn’t know us. He doesn’t know my wife’s past. He doesn’t know that she co-founded a technical user group with me 6 years ago that is still going strong. He doesn’t know she was a judge at a Startup Weekend whose first place winner went on to land $5.5 million in VC funding. He doesn’t know that she’s very respected amongst so many people in our field.

Earlier today I read a very long blog post about discrimination another woman faced at a company, and I began seeing some familiar narratives. Both her director and VP are projecting the perception that she’s the one causing problems. That somehow complaining about being demoted only three weeks after signing on is being “petty”. She’s being singled out as the one who is stirring the pot. Today they made the announcement that she’s transitioning to engineering in two weeks, and she went to the bathroom and just cried. She feels so defeated and even asked me if she’s really that bad. Should she just exit this field altogether?

In a lot of ways I’m reminded of a poem by Langston Hughes that I first heard during my freshman year of college that really put the hook in me.

What happens to a dream deferred?

Does it dry up
like a raisin in the sun?
Or fester like a sore—
And then run?
Does it stink like rotten meat?
Or crust and sugar over—
like a syrupy sweet?

Maybe it just sags
like a heavy load.

Or does it explode?

The Road Forward

Sometimes our dreams and career goals have their ups and downs. Sometimes we take two steps forward and then have to take three back. Like I told my wife while calming her, only 4 years ago I found myself exiting a one-on-one with my manager and taking a long walk down the road wondering if maybe I should pursue a different career. Maybe I’m somehow just not cut out for this field. Thankfully I lucked out, because after that moment a friend’s startup really started to take off and they reached out asking me if I’d like to be their first engineering hire. If I hadn’t been at such a low point in my career I likely would have said no and bypassed what has been the best 4 years of my career.

Back to my wife’s situation: to say my anger has been simmering is an understatement. Yeah, I lodge a few angry tweets here and there, as I’m wont to do. But being negative really doesn’t solve much. Twitter is full of people with a long list of complaints about injustice, and that’s just all they focus on day in and day out. Sure, simmering in anger might feel good, but it’s just not constructive in the long run.

I advised her that the situation is definitely toxic and it’s time to move on. But I just wish there was a way to help her get past the feeling of worthlessness. It sticks around for a while. It stuck two years ago and it’s sticking now. Everywhere I look in our industry we talk about diversity. “How can we attract more women to this field?” we ask ourselves curiously.

Like I said, perhaps this is all circumstantial. Perhaps you’ll say my wife just isn’t good at what she does. Maybe you’ll say like her director told her that she needs to prove herself. Maybe you’ll say she’s being petty. I’ve come to my conclusion and my conclusion is sexism. Discrimination. Call me a liar and social justice warrior of sorts if it makes you sleep better at night. I’ve simply chosen to call a spade a spade and will call it out.

We won’t take this lying down though and we won’t go silently into the night. We’re survivors. We’re a team and we don’t stop fighting. I’ll be dedicating the next several days to finding her something even better than what she thought she had with this lousy position. You might think less of me for sitting down and airing all of this but so be it. I stand up for those I love. I also can’t help but think that perhaps just distilling all my feelings about this situation into words will be more constructive than silently bottling it up like last time. I don’t want my daughter to ever have to face situations where it really is debatable if one is facing a demotion for anything other than performance. I’m not terribly worried though… my wife and I are both fighters. I am one hundred percent confident that she’ll come out on top.

These situations have left me with a final thought. In a field that has so many problems attracting and retaining diverse talent, it is hard to imagine, when one is faced with so many setbacks, why leaving the industry would be anything but a logical conclusion. I don’t mean to be yet another voice beating a dead horse, but it’s really time we started doing better.


The Diagnosis and Treatment of Bimodal IT

Leading Answers - Mike Griffiths - Thu, 05/26/2016 - 05:35
Is it just me, or does Bimodal IT sound like a mental health condition? Unfortunate name aside, it has been adopted by companies reluctant to embrace agile but looking for a halfway-house / best-of-both-worlds solution. In my last post I... Mike Griffiths

Agile Coach Camp

Leading Answers - Mike Griffiths - Thu, 05/19/2016 - 02:38
I attended this event last year and enjoyed it... We would like to invite Agile Practitioners to Agile Coach Camp Canada - West, an Open Space Conference to be held in Vancouver, BC on the weekend of June 17-19, 2016.... Mike Griffiths

PMBOK v6 Work

Leading Answers - Mike Griffiths - Thu, 05/19/2016 - 01:52
As you probably know the Exposure Draft of the PMBOK v6 Guide is making its way through review at the moment. I have been working with the PMI on some iterative, adaptive and agile content and look forward to when... Mike Griffiths

DSDM Video

Leading Answers - Mike Griffiths - Thu, 05/19/2016 - 01:47
I get the feeling that DSDM is considered by many people outside of the UK as the uncool, out-of-touch great-uncle of agile. While somewhat related to modern agile, it is kind of forgotten about or dismissed as outdated or not... Mike Griffiths