Feed aggregator

Growing Up

Portia Tung - Selfish Programming - Sat, 09/05/2015 - 20:50

Birthday Baby

A New Season’s Greetings

September has always been a special month for me. It’s a month of new beginnings, ranging from the start of a new season to that of a new school year, throbbing with the promise of excitement and adventure.

September is also the month of my birthday and I’ve had to make special preparations this year given I’m about to turn 40. The big 4-0. And I think this impending event has blown a circuit or two in my head.

Birthday Baby

Looking back, I now realise I not only suffered from acute thrisis (mid-life crisis in my thirties), I also had a similar short-circuiting experience just before I turned 20. And it turns out I’m not alone.

According to one theory about “9-Enders” (people whose current age ends in a “9”, such as 19, 29, 39 or 49), these are the years in which we find ourselves searching for answers to the big questions: “What is the meaning of life? What is my purpose? What does it mean to be happy? What makes me happy? What’s my life plan?”

What’s more, research shows that most people respond in one of two ways when confronted by such questions. One group will become more determined to make the most of their life while the other group concludes that “my life sucks” and grows increasingly despondent.

Becoming Better with Age

Just as September marks the end of something old, September marks the start of something new. So long as we continue to stretch ourselves by expanding our comfort zone, we can keep going strong.

If you’ve come unstuck over the big questions or even the small ones, or simply want to feel re-energised and inspired to get a move on with your dreams, join my friends and me in a bunch of fun-packed and thought-provoking conference sessions in the UK this autumn:

Agile Cambridge: 30 September, Cambridge, UK.

Agile Tour London: 23 October, London, UK.

Categories: Blogs

Measuring and Reporting with LeanKit

Whether you’re a team member, project manager or executive, find out how LeanKit’s reports and analytics give you the insights you need to measure and improve your delivery success. About This Webinar During this session, Alex Glabman, Product Implementation Manager at LeanKit, takes a look at the following roles: Teams: Continuously measure and improve using Lean […]

The post Measuring and Reporting with LeanKit appeared first on Blog | LeanKit.

Categories: Companies

Discover Agile: A New Way to Work

Rally Agile Blog - Fri, 09/04/2015 - 16:56

In an interview with SD Times a few days ago (“Don’t do agile, be agile”), Rally VP of Engineering and former agile coach Ryan Polk called out what is, for many companies, the elephant in the room: if you’re not seeing the results you’d hoped for with agile, you might be doing it wrong.

(Flickr, CC)

As Ryan explains it,

“Human behavior … tends to gravitate toward the easiest practices, not the best practices. Agile is a set of hard practices. They actually take discipline. They take understanding how your teams are working and evolving.”

If your organization is suffering from unrealistic plans, unstaffed priorities, quality issues, customer dissatisfaction, delayed delivery, or low morale, then you’re ready for a new way to work. If you think you’ve evolved toward the easy practices instead of the best, it might be time for a level-set. And if you’re new to agile, it might be time for a tutorial on the basics.

Agile, done correctly, promises a range of benefits: faster time to market, increased productivity, fewer defects, cost savings, and better employee engagement. We can help you get started with a strong foundation of disciplined practices, executive buy-in, realistic goals, and motivated teams. 

Visit our Discover Agile page to learn more about agile and get started on the path to a better way of working.

Categories: Companies

Isomorphism vs Universal JavaScript

Xebia Blog - Fri, 09/04/2015 - 08:50

Ever since Spike Brehm of Airbnb popularized the term Isomorphic JavaScript people have been confused about what exactly it means to have an Isomorphic JavaScript application and what it takes to build one. From the beginning there were people opposing the term, suggesting alternatives such as monomorphic or polymorphic, whatever that all means. Eventually Michael Jackson (the React guy) suggested the term Universal JavaScript and most people seem to prefer it and proclaim “Isomorphic” to be dead.

To reopen the discussion, JavaScript guru Dr. Axel Rauschmayer recently asked the question: Is Isomorphic JavaScript a good term? I’ve already left a comment explaining my view of things, but I’d like to explain a little more. I used to make the distinction between Functional Isomorphism and Technical Isomorphism. In my talk at XebiCon I explained the difference. Having the server render the HTML on first page load is the functional part, the thing that provides for a better user experience. The technical part is where we use the same code in both environments, which no user ever asked for, but makes a developer’s life easier (at least in theory).

Continue reading at

Categories: Companies

Robot Framework - The unsung hero of test automation

Xebia Blog - Fri, 09/04/2015 - 06:47

The open source Robot Framework (RF) is a generic, keyword- and data-driven test automation framework for acceptance test driven development (ATDD). As such it stands alongside similar, but more well-known frameworks, like FitNesse, Cucumber, et alia. The (relative) unfamiliarity of the testing community with the RF is undeserved, since the RF facilitates powerful and yet simple test automation against a variety of interfaces and features some distinct advantages when compared to those other frameworks.

In a series of blog posts, we would like to make a case for the Robot Framework by showing its greatness through a number of hands-on examples from my upcoming workshop. Next to demonstrating its advantages and strengths, we will also expose some of its drawbacks and limitations, as well as touch upon certain risks that flow from harnessing some of its unique features.

Our first three posts will give an introductory overview of the RF, laying the conceptual foundation for the remainder of the series. These three articles will therefore not concentrate on practical, hands-on examples or instructions, but have a more theoretical feel. Moreover, several of the fundamental concepts laid out in them apply not only to the RF, but to most (if not all) test automation frameworks. Consequently, these first three posts target those who lack a basic understanding of test automation (frameworks) in general and/or of the RF in particular. The remainder of the series will also be of interest to more seasoned automation engineers.

We will first look into the basic architecture that underlies the framework and discuss the various components it is composed of. In the second post we will discuss the nature of the keyword-driven approach that the RF entails. The third post will detail a typical test automation workflow.

For a first-hand experience of the pros and cons of the RF, you might want to join the first Robot Framework meetup in the Netherlands.

Robot Framework stack

The RF is written in Python. Consequently, it runs on Python, Jython or IronPython. The framework can thus be used with any OS that is able to run any of these interpreters, e.g. Windows, Linux or OS X. Unless you have a specific reason to do otherwise, the RF should run on Python. A typical situation that would require e.g. Jython, would be automating against a Java application or implementing your own RF test library in Java (more on this in a later post). A disadvantage of running on Jython is that quite a few of the low-level test libraries within the RF ecosystem will not be available. Moreover, running in Jython will slap you with a performance penalty. Fortunately, in the mentioned situations, one could still run the stack on Python, through the usage of the so-called Remote Library Interface mechanism, that can be harnessed to connect the Python stack to an application and/or a test library running in a JVM (on the same or a remote system). We will be addressing the latter subject, as well, in one of our follow-up articles.
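
To give a rough sense of how the Remote Library Interface mechanism works: the remote end is simply an XML-RPC server that exposes the handful of methods the RF Remote library calls. The sketch below uses only the Python standard library; the class and keyword names are hypothetical, and the result dicts are simplified compared to a production remote server.

```python
from xmlrpc.server import SimpleXMLRPCServer

class ExampleRemoteLibrary:
    """Hypothetical remote library exposing one keyword, 'count_items'."""

    def get_keyword_names(self):
        # The RF Remote library calls this to discover available keywords.
        return ["count_items"]

    def run_keyword(self, name, args):
        # The RF Remote library dispatches every keyword call through here
        # and expects a result dict describing the outcome.
        try:
            return_value = getattr(self, name)(*args)
            return {"status": "PASS", "return": return_value,
                    "output": "", "error": "", "traceback": ""}
        except Exception as error:
            return {"status": "FAIL", "return": "",
                    "output": "", "error": str(error), "traceback": ""}

    def count_items(self, items):
        return len(items)

def serve(port=8270):
    # 8270 is the conventional remote-server port; RF would connect with
    # something like:  Library    Remote    http://localhost:8270
    server = SimpleXMLRPCServer(("localhost", port), allow_none=True)
    server.register_instance(ExampleRemoteLibrary())
    server.serve_forever()
```

Because the transport is plain XML-RPC, the same server could just as well be implemented in Java and run inside the JVM next to the application under test, which is exactly the situation described above.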

A possible, though highly simplified, set-up of an automation framework is the following:

Generic framework design

Generic framework high-level design

Green signifies framework components whereas grey refers to components or artefacts, such as test code and product code, that are to be created by the development organization. The numbers indicate the order in which a typical test execution run would flow (more on this in the third post). The framework components are typical of all of today's test automation frameworks. Obviously, this schema is a simplification of a real-life set-up, which would result in a more complex infrastructural model so as to account for topics such as:

  • a possible distributed setup of the test engine and/or test driver
  • parallel testing against a variety of interfaces (e.g. against REST and some UI) or against a multitude of product configurations/stacks/databases
  • integration within a continuous delivery pipeline and with the test code repository
  • etc.

Mapping these generic components onto concrete run-times within the RF stack, we get the following:

Robot Framework high-level design

Robot Framework high-level design

The RF itself functions as the central framework engine. It is the core framework, augmented by various tools and libraries developed within the RF ecosystem to form the larger, broader framework. (To be precise, in the given example, Selenium Webdriver does not belong to the RF ecosystem. But most of the other available low-level test libraries do.)

Let’s elaborate somewhat on the various components of the framework stack.

Test editor

The test editor is what we use to write, maintain and structure our automation code. Test code not only consists of test cases, but also of various layers of abstraction, such as re-usable functions (keywords), wrappers, object maps and global variables.
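
To make these layers concrete, here is a minimal sketch of RF test code in its tabular plain-text syntax. All specifics (the locators, URL and credentials) are hypothetical; the keywords under `*** Keywords ***` are the re-usable functions mentioned above, built on top of the low-level Selenium2Library keywords.

```robotframework
*** Settings ***
Library    Selenium2Library

*** Variables ***
${LOGIN URL}    http://example.test/login

*** Test Cases ***
Valid Login
    Open Browser    ${LOGIN URL}    firefox
    Log In As    demo    secret
    Page Should Contain    Welcome

*** Keywords ***
Log In As
    [Arguments]    ${username}    ${password}
    Input Text    username_field    ${username}
    Input Text    password_field    ${password}
    Click Button    login_button
```

The test case reads at a business level, while the keyword layer hides the interface details; this separation is what the various editors help you maintain.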

In the case of the RF, the editor can be anything, ranging from the simplest of text editors to a full-blown IDE. The Robot Framework comes with various editors, such as the RF Integrated Development Environment (RIDE), and with several plug-ins for popular IDEs and text editors such as Eclipse, IntelliJ, Atom, TextMate or even Vim. But of course, you could also use a separate text editor, such as Notepad++. Which editor to use may depend on factors such as the required complexity of the test code, the layers to which one has to contribute (e.g. high-level test cases or re-usable, low-level test functions), the skill set of the involved automation engineers (who may be business stakeholders, testers or developers) or simply personal taste.

Depending on the editor used, you may additionally benefit from features such as code completion, syntax highlighting, code extraction, test cases management and debugging tools.

Note that ‘official’ test runs are typically not initiated from within the editor, but through other mechanisms, such as build steps in a CI server or a cron job of some sort. Test runs are initiated from within the editor to test or debug the test code.

Test engine

The test engine, in our case the RF, is the heart of the framework.

That is, it is the central component that regulates and coordinates a test run and, as such, ties all the components together. For instance, some of the tasks of the engine are:

  • Parsing the test case files, e.g. removing white space, resolving variables and function calls, and reading external files containing test data (such as multiple username/password pairs)
  • Controlling the test driver (e.g. Selenium Webdriver)
  • Catching and handling test library return values
  • Error handling and recovery
  • Aggregating logs and reports based on the results

Test driver

A test engine, such as the RF, is a generic framework and, as such, cannot itself drive a specific type of application interface, be it UI (e.g. mobile or web) or non-UI (e.g. a SOAP service or an API). Otherwise it would not be generic. Consequently, to be able to drive the actual software under test, a separate layer is required that has the sole purpose of interacting with the SUT.

The test driver (in RF terms a 'test library' or 'low-level test library') is the instance that controls the SUT. The driver holds the actual intelligence to make calls to a specific (type of) interface. That interface may be non-UI (as would be the case with testing directly against a SOAP-service, http-server, REST-API or jdbc-database) or UI (as would be the case with testing against a web UI or Windows UI).

Examples of test drivers that are available to the RF are Selenium Webdriver (web UI), AutoIt (Windows UI) or the RF test libraries: Android library, Suds library (SOAP-services), SSH library, etc.

The latter are examples of ‘native’ RF test libraries, i.e. libraries that have been developed against the RF test library interface with the express intent of extending the RF. Some of these RF test libraries in turn re-use (that is, wrap) other open source components. The http library, for instance, reuses the Python ‘requests’ http client.
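
To give a sense of how little a ‘native’ test library involves: in its simplest (static) form it is just a Python class whose public methods become keywords. The sketch below is hypothetical; only the `ROBOT_LIBRARY_SCOPE` attribute is actual RF library API, controlling how often the framework instantiates the library.

```python
class ArithmeticLibrary:
    """Hypothetical native RF test library; each public method is exposed
    as a keyword, e.g. usable in test code as:
        ${sum}=    Add Numbers    2    3
        Numbers Should Be Equal    ${sum}    5
    """

    ROBOT_LIBRARY_SCOPE = "GLOBAL"  # one shared instance for the whole run

    def add_numbers(self, a, b):
        # Arguments arrive from test data as strings, so convert explicitly.
        return int(a) + int(b)

    def numbers_should_be_equal(self, actual, expected):
        # Raising an exception is how a keyword fails the test.
        if int(actual) != int(expected):
            raise AssertionError("%s != %s" % (actual, expected))
```

Note the convention at work: the method name `add_numbers` surfaces in test code as the keyword ‘Add Numbers’, with no integration layer in between.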

The former are existing tools, developed outside of the RF ecosystem, that have been incorporated into that ecosystem, by creating thin layers of integration code that make the external functionality available to the framework. Which brings us to the integration layer.

Integration layer

The main responsibility of the integration layer is to expose the functionality contained within an external tool or library to the rest of the framework, mainly the engine and editor. Consequently, the integration layer can also form a limiting factor.

Through the integration layer, the test code statements (as written in RF syntax) are ‘translated’ into parameterized instructions that adhere to the syntax of the external tool. For instance, in the case of Selenium Webdriver, the RF integration library (called ‘Selenium2Library’) consists of a set of Python (module) files that contain small functions wrapping one or more Webdriver functions. That is, these Python functions contain one or more Webdriver API-compliant calls, optionally embedded in control logic. Each of these Python functions is available within the framework, thus indirectly providing access to the functions exposed by the Webdriver API.

For example, the following function provides access to the Webdriver click() method (as available through the webelement interface):

def click_element(self, locator):
    self._info("Clicking element '%s'." % locator)
    self._element_find(locator, True, True).click()

Within your editor (e.g. RIDE), the keyword ‘Click Element’ can be used in your test code. The editor will indicate that an argument ${locator} is required.

These Python functions, then, are basically small wrappers and through them the integration layer, as a whole, wraps the external test driver.

As mentioned before, an integration layer is not necessarily part of the stack. Test drivers (test libraries) that have been written directly against the RF library API, do not require an integration library.

Our next post will elaborate on the keyword-driven approach to test automation that the RF follows.

Categories: Companies

Does Your Culture Require Your Demise? Pig & Chicken 3 [Agile Safari]

Agile For All - Bob Hartman - Thu, 09/03/2015 - 23:24


Tweet the Agile Safari Cartoon!

Pig & Chicken Part 3 is about the bigger picture. It is about the culture of your organization and about memes that exist within that culture. Beyond just delivering a ranked list of “stuff”, can you commit to being your best self each day?  What does it mean for YOU to commit to the organization? What is expected of you? What do you expect from each other? I’m not just talking tangibly… what does expectation feel like where you work? Is it sustainable?

Recap: Pig & Chicken Part 1 is about letting teams focus and figure out how to do the work. In Pig & Chicken Part 1 (the ‘classic’), the pigs are supposed to represent the team and the chickens are supposed to be “everyone else.” In the past, some people compared the chickens to management, and it led to people saying “we have to keep the chickens out” (and much worse). Pig & Chicken Part 2 is about being all in as an organization, so everyone is in sync with the top priorities of learning and delivering value. In Pig & Chicken Part 2, we are either all in this thing together or we aren’t — make a choice and decide what organizational commitment means.

We Commit All the Time

Commitment is a big word in agile and in life in general. We commit in personal relationships. We commit to family and friends. We commit at work. We commit all the time, frequently without thinking about it much.

  • We have “official” commitments where people commit to getting a bunch of work done by a date.
  • We have estimates about when features will be done, which become commitments in many cases.
  • We commit to each other all the time both formally and informally.
  • In agile we are supposed to commit to things (yes, even if you say forecast).

Take 30 seconds and think of one or two commitments that you have made. Are you all just marching to the beat of some meme that has been there “forever?” Are you committed in an irresponsible way? Do you and others even realize it? Are there memes or currents that are pulling you away from shore?

What Does Commitment Mean at Amazon?

The current pile of articles and discussions about Amazon caught my attention and while this article is not intended to be about Amazon, there were a lot of tie-ins. For those of you who have not been sucked into this story, I see a lot of relevance with this idea of culture and memes.  I’ll point you to some articles, in case you want more info, but here is my short version…

  1. It “started” with the article (by Jodi Kantor and David Streitfeld) in the NY Times titled “Inside Amazon: Wrestling Big Ideas in a Bruising Workplace.” Among other ideas (e.g. toil long and late, sabotage each other, etc.), the one that Amazon just cuts the bottom ranked performers each year seemed odd, at least to me. This reminded me of the old GE approach of cutting the bottom 10%. The premise was that if everyone is ranked on a bell curve and you have some so-called A players and some so-called F players, you cut the F players and you are in better shape. It seems logical. However, this is heavily dependent on how people are ranked and on ranking people individually. It also presumes that some portion of your people suck. So IF you hire awesome people, are you okay firing the least awesome percentage of them? It can also create a system where individuals are competing against each other, at the expense of effectiveness. Finally, it does not seem to jibe with Amazon’s principle No. 5: “Hire and Develop the Best.” If you are just cutting the so-called bottom and they are awesome, that seems like you have issues with your leadership. If people are not “the best” yet, don’t you have to also look at your leadership, since they are not helping people develop? Obviously, I don’t work there, so maybe this is going on, but reading story after story (and there are thousands of comments on these and other articles), at a minimum, leadership has some responsibility!
  2. So was the story true? An Amazon employee posted an article on LinkedIn refuting it (in his experience). The NY Times responded to him and other criticism with a clear, and to the point assessment of the article. Ultimately, the initial story seems to hold up.
  3. In yet another article (by David Streitfeld and Jodi Kantor), titled “Jeff Bezos and Amazon Employees Join Debate Over Its Culture,” they include some excerpts from Mr. Bezos to employees, where he told workers “I don’t recognize this Amazon and I very much hope you don’t either.” I read that as ‘this is not the culture I know at Amazon.’ And, in a letter to employees, Bezos states that Amazon would not tolerate the “shockingly callous management practices” described in the article, and urged any employees who knew of “stories like those reported” to contact him directly: “Even if it’s rare or isolated, our tolerance for any such lack of empathy needs to be zero.” {my emphasis} This last sentence really struck me. This does not appear to be a guy who is playing around. This does not appear to be someone ducking the issue.
  4. Finally, in “Work Policies May Be Kinder, But Brutal Competition Isn’t” Noam Scheiber points out that “The account appeared to put Amazon at odds with recent workplace trends, but the reality, experts say, is not nearly so neat: Grueling competition remains perhaps the defining feature of the upper echelon in today’s white-collar workplace.”

Amazon is a large company, with 180,000 employees. There does not seem to be a belief that these stories are false. So how does it come to be that this kind of thing can happen in an organization like Amazon? I can understand it at a company with a CEO who would never ever consider making some of the statements that Mr. Bezos does (e.g. “Even if it’s rare or isolated, our tolerance for any such lack of empathy needs to be zero.”). But for a company that has a CEO like him, how did it get that far?

When Did Commitment Have to Mean Your Demise?

If you work at an organization where the culture is leading to your demise, at what point do you have to make a choice? Or are you just trapped there? If you are a manager at one of those organizations, what is it like for you? Are you okay with that type of organization? Even if everyone is in agreement (like in the second panel of the cartoon), does that make it right? Part of this is the culture of “being busy.” Does that actually work? We know that working 70 hours a week does not produce better results (“The Research Is Clear: Long Hours Backfire for People and for Companies”). I’ve personally been struggling to write. I realized that I need a lot of slack for my brain to reach that point where writing flows. A lot more than I thought. When I’m traveling a lot, I am just not in that space where the ideas flow and connect in a way that I’m happy with. I can certainly blast out a few pages of thoughts, but it does not meet my quality standards!

“Y’all are so busy being busy you aren’t getting anything done!” — Peter Saddington

Bring in the Hero!

Talking with Allison Pollard on this topic, she pointed out the craziness of needing a hero or heroics in every sprint. Do you want to be a hero? Is the whole team heroes? Do you wear costumes like these people?

#Agile2015 this will be interesting! Superhero party…

— Jake Calabrese (@jcalabrese) August 6, 2015

I’ve been the hero. I’ve saved the day. I’ve worked on major holidays. I’ve stayed up all night to fix problems in a code base or with down data lines. I’ve watched a lot of others do it as well. I HAD TO! Who else could do it? Or at least that was the story I told myself. I’ve gotten exhausted by it. I’ve watched others burn out or, worse, destroy their health or their personal lives. And sure, there are situations where that is required. They are real, and at least some of the ones I’ve experienced were like that. Yes, I’ll work all night so that the stores can open on time and help the customers. But it is a very slippery slope.

This bullshit about constantly saying “well… we just have to get through ‘this one thing'”, is absurd!

Sustainable Pace

The cartoon points out the insanity of how often we fall into a culture or a meme where we will sacrifice almost anything and often ourselves. Sure, I have not heard anyone actually say “team, we are going to put so much into this effort that we are literally not going to be around for the results.” But, in practice I see people doing this all the time. This is not a sustainable pace. I’ll say it again, this is not a sustainable pace!

How is serving yourself up on a plate sustainable?!?

Perhaps you should consider some changes if you are effectively destroying yourself to deliver. Mike Vizdos says “Focus. Deliver.” I like that! But he would never say “Focus. Destroy Yourself. Deliver Yourself on a plate!” How the hell would that make any sense? How does “Deliver Once” make any sense???! It’s sad. And it’s not helpful in the medium or long term… Likely not even in the short term! What culture do you actually have? There is the one you want and the one that exists. There are memes within that culture. How are you staying aware and engaged enough to know? If your organization’s plan is to thrive or survive by ‘taking out’ its employees one project, weekend, or evening at a time, it may be time to get some help and make some changes!!

Tweet the Agile Safari Cartoon!

Subscribe to receive emails with my new posts and Agile Safari cartoons and Follow me on Twitter or Google+ to stay in touch!

The post Does Your Culture Require Your Demise? Pig & Chicken 3 [Agile Safari] appeared first on Agile For All.

Categories: Blogs

Are we tabby cats trying to emulate cheetahs?

AvailAgility - Karl Scotland - Thu, 09/03/2015 - 19:30

Credit: Dennis Church

Credit for the title of this post goes to Sam Murphy, Section Editor at Runners World UK. Those of you who have seen me recently will probably know that as well as being an advocate of Lean and Agile, I also have a passion for running, and I subscribe to Runners World. Sam used this title for an article of hers in the September issue, which struck me as having lots of overlaps with how I go about coaching and consulting in businesses. The gist of it was that when training, rather than trying to copy what elite athletes do, we should find out what works for ourselves. Sound familiar?

Here are some quotes:

Dr Andy Franklyn Miller … concluded that ‘a very unique and customised strategy is used by each swimmer to excel’. And if that’s the case, is looking at what the elites are doing and aiming to replicate it the best way to maximise our own sporting success? Or are we tabby cats trying to emulate cheetahs?

Is a very unique and customised strategy used by each successful organisation to excel? Is looking at what these organisations are doing and aiming to replicate it the best way to achieve success? Or are we tabby cats trying to emulate cheetahs?

Dr George Sheehan, runner and philosopher, said, ‘We are all an experiment of one.’


Ultra runner Dean Karnazes … writes, ‘I always encourage people to try new things and experiment to find what works best for them.’

It seems athletic training is not so dissimilar to building a successful organisation! Rather than just copying what we may have seen or read about working elsewhere, we should encourage organisations to try new things and be experiments of one. That’s what Strategy Deployment is all about!

Or (to close with the same quote Sam closed her article with) as Karnazes also said:

‘Listen to everyone, follow no-one.’

Categories: Blogs

SonarLint for Visual Studio 1.2.0 Brings Code Fixes to the IDE

Sonar - Thu, 09/03/2015 - 16:36

SonarLint for Visual Studio version 1.2.0 was released this week. In this version we focused on improving the user experience by adding code fading and fixes. Code fading makes some issues less obtrusive, and code fixes are concrete suggestions for solving specific issues in the code. This means that when an analyzer identifies an issue in the code, the IDE can propose an automatic fix for it. We’ve added fixes for 17 rules, and the best part is that the user can choose to fix all issues of the same type all at once for the whole solution, which can immensely speed up paying down technical debt.

Analyzers and code fixes are integrated into Visual Studio 2015 natively thanks to Roslyn. As you would expect, the issues we raise show up in the Error Window. As part of improving the user experience, now some issues on redundancies or useless code just show up as faded text (note the fading in the image below). So these less serious issues aren’t uselessly cluttering the Error Window any more.

Redundancies fall into the “easy to fix automatically” category. For example, for the above issue (Rule S3254) we could simply remove the redundant code. In Visual Studio, to check and accept the proposed code fix, you should hover over the issue location and expand the lightbulb tooltip. Or you can move the caret to the line of the issue, and hit Ctrl+. (There is a good video on Channel9, which shows the navigational shortcuts in Visual Studio 2015.) The tooltip window contains all the available options for the issue. There can be multiple possible fixes for an error, so you can choose which one to apply. For the above case, there is only one fix, which is to remove the redundant arguments. Also note in the image below that at the bottom of the preview window you can choose to apply this fix to all issues in the document/project/solution.

One of my favorite code fixes is the one that simplifies conditional statements. Consider the following code, where the true and false branches of the conditional statement are very similar.

Here, the code fix for Rule S3240 proposes using the ?? (null-coalescing) operator instead of the 8-line conditional statement. The code fix provider is clever enough to recognize that this if can be simplified to ?? and not just to the ?: (ternary) operator, and it only proposes the simpler one.

These are just two of the new code fixes. There are 16 more available (one rule has two fixes). We also added two new rules and fixed some bugs in this version. You can see the full list of rules at the new version-specific landing page. A slight restructuring of the SonarLint website means that each release now has its own landing page, which summarizes the changes since the previous version.

Categories: Open Source

New Case Study: Medical Technology Leader Leverages SAFe to Free Teams from Silos, Align the Enterprise

Agile Product Owner - Thu, 09/03/2015 - 16:14

When we learn about companies like Elekta who are in the very serious business of improving the lives of people facing cancer or brain disorders, we are inspired. And when we heard that they had adopted SAFe and wanted to share their story, we were excited to hear about their experience.

As one of the world’s leading medical technology innovators, Elekta provides solutions that touch over 100,000 patients a day worldwide. Headquartered in Stockholm, they have around 3,800 employees globally in 30 countries. With teams working in several time zones, and members having different backgrounds, their challenge was to create an environment where teams could better align with global priorities and with each other.

Several years ago they turned to Scrum, but in their attempt to scale up they saw that the teams were operating in silos, which created a host of issues with dependencies, integration, visibility, and alignment throughout the enterprise. Wanting to address all areas of the enterprise, Elekta took a holistic view and introduced SAFe to their Scrum teams, launching their first Agile Release Train (ART) in 2013. Soon thereafter, they expanded to the Program level and trained all of their teams.

Today, Elekta is running 4 ARTs with 20 teams across three continents. Their SAFe journey continues, and has already delivered significant gains and improvements in several areas. There’s a lot to learn from Elekta’s experience, which they’ve summarized in a PowerPoint, including some great speaker notes.

You can view the Elekta study here

Many thanks to Elekta’s Director of Engineering, Petrine Herbai, Manager of Engineering, Lars Gusch, and our Gold Partner, Rally Software; we appreciate all the great information you have shared, and look forward to hearing more about your continuing journey of transformation.

Stay SAFe!

Categories: Blogs

Agile Workshops at the first Agile Greece Summit

Ben Linders - Thu, 09/03/2015 - 14:52
I will be giving two workshops at the Agile Greece Summit in September, on Valuable Agile Retrospectives and Getting More out of Agile and Lean. There are still some tickets available, book your seat now! Continue reading →
Categories: Blogs

Agile Software Development Process: 90 Months of Evolution

TargetProcess - Edge of Chaos Blog - Thu, 09/03/2015 - 10:43

Three years ago I wrote an article that describes the changes in our Agile software development processes from 2008 to 2012. Three more years have passed by and our processes were not set in stone. Here I want to provide you with 90 months of changes in our product development practices, company culture, structure and engineering practices. Hope you will find it interesting and learn from our mistakes.

Read the article: Agile Software Development Process: 90 Months of Evolution

Team structure evolution

Categories: Companies

Scrum as an agent of culture change part 2

Agile For All - Bob Hartman - Thu, 09/03/2015 - 05:23

In part one of this series, we defined culture. We also described why it is both critical and hard to work on. Finally, we left you with a teaser that there is a pretty good pattern we’ve seen for how to kind of hack your culture for the better.

Scrum as a Culture Change Agent

I have seen a pattern emerge at several organizations where I’ve worked or coached. A team starts using Scrum. When they use it effectively, they start to think differently about the way they work. They build different social structures. They value different outcomes. They alter decision-making structures. And the culture starts to change. We are rewarding different behavior, at least for that team. If they can stick with it long enough to solidify the new set of behaviors, the culture change lasts. When those teams are successful, other teams in the organization start wondering what they’re up to. They learn about Scrum, try it out, and if they do it purposefully, the pattern repeats itself. Scrum becomes an organizational change agent.

Scrum is based on a cultural mindset that is often different from the existing cultures where it is introduced. In my experience, there are at least four specific characteristics of Scrum that help it to spread within an organization: Quality, Value, Team Focus, and Trust. Below, we take a quick look at each of these aspects of Scrum, describe why each matters to the key stakeholders in any effort, and show how it helps to spread a new cultural mindset with its related behavior patterns.

Using Scrum to Improve Quality

The Adobe Audition teams started using Scrum in 2005, and I acted as the ScrumMaster. Our goal was to get to releasable quality each four week sprint throughout the course of an 18 month release. While Scrum felt much better to the team, some people in our organization asked if we had any hard data. That’s tough to get, since many of the things we would have tracked prior to using scrum didn’t make sense to track anymore. One exception was the total number of open bugs at any given point. I created a graph showing the total number of open bugs over time for the previous release compared to the most recent release using scrum:

The graph shows that we had 33% of the open bugs at the peak compared to our previous release. Of course the real measure of quality was in the response from users, who consistently rated the product highly on ease of use and reliability. We had customers that trusted it enough to use very early “pre-release” builds for real audio production work, where errors and usability problems would have cost them time and money.

The graph had a big impact on other teams. When they saw that we had kept our bug count so low, they got curious. They became even more interested when, during the “end game”, the last few months prior to the big release, our team was out playing whiffle ball on the lawn while they were in full-on crisis mode trying to get their products ready for release, working nights and weekends.

Stakeholder Benefits of Improved Quality
  • Employees have a stronger sense of satisfaction in their work. Nobody wants to build a shoddy product – scrum tends to give teams permission to do the right thing even in the face of pressure to go faster.
  • Customers are more satisfied with the product and can provide better feedback on the actual features rather than just defects, leading to stronger adoption and higher loyalty.
  • Executives get higher revenue at a lower cost. There is strong anecdotal evidence that quality issues become significantly more expensive to fix the longer we wait. Scrum’s focus on getting to releasable quality every sprint leads to a lower overall cost to build the product. The higher customer satisfaction leads to greater adoption and revenue.
  • Shareholders get higher ROI as the organization improves, leading to higher valuation of the company.
Cultural Impact of Improved Quality

Almost every company says that it values quality. Scrum gives a simple rule that helps to enforce it: get releasable every sprint. When teams follow this rule, “quality is the top priority” often shifts from something that we say but don’t follow through on to the reality of our everyday work. When quality is valued, people also take more pride in their work. There are no longer any cultural barriers to doing the right thing. People start to trust that there is alignment between what we say and what we do. This leads to increased trust between management and employees as well as higher engagement all around.

Higher quality also results in fewer crises. In the example of the Audition team, it helped us reinforce the value of working at a sustainable pace. No extra hours were required.  This experience led directly to three other product teams adopting scrum for their next releases.

Using Scrum to Deliver Value Quickly and Continuously

In a traditional project, we invest money over time, and don’t deliver any real value to the business or users until the whole thing is wrapped up and delivered. On a Scrum project, we can deliver value almost immediately, and then again after each sprint. This approach is great for getting early technical and user feedback, but it also makes great economic sense. Let’s look at the difference between a traditional six-month project and one using Scrum.

Value Delivery: Scrum vs. Traditional

Teams using Scrum begin delivering value to customers after just a few weeks, accumulating value throughout the remainder of the project and beyond. Teams using a traditional approach deliver no value until the overall project is complete. This creates gaps both in time to market and accumulated value over time. Finally, the traditional project is accumulating risk and sunk cost that is much greater than that of the scrum team. Notice the gap between the red (cost) line and the blue (traditional value) line. The growing gap represents investment with no return, which is correlated directly with business risk.
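The economics can be made concrete with a toy calculation. The numbers below are invented purely for illustration: cost accrues monthly either way, a Scrum team ships a value-producing slice every month, and the traditional project delivers nothing until after the final release.

```python
# Toy model of a six-month effort (arbitrary units). Each slice a Scrum
# team ships keeps producing value every subsequent month; the traditional
# project's value stays at zero until after month six.
months = 6
cost_per_month = 10
value_per_slice_per_month = 15  # each live slice earns this every month

total_cost = 0
scrum_value = 0
traditional_value = 0  # nothing ships until after the final release
slices_live = 0

for month in range(1, months + 1):
    total_cost += cost_per_month
    scrum_value += slices_live * value_per_slice_per_month  # value from slices already live
    slices_live += 1  # this month's slice goes live at month end

print(total_cost, scrum_value, traditional_value)  # 60 225 0
```

Even in this crude model, the incremental approach has accumulated 225 units of value by release time while the traditional project has accumulated only cost and risk; that widening gap is exactly the gap between the red and blue lines described above.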

Stakeholder Benefits of Improved Value Delivery
  • Employees get a much tighter feedback loop with customers, leading to stronger awareness of which problems are most important to solve, and to seeing their solutions actually make a difference for their customers. Empathy for users’ problems is a key determinant in the success of any product or service. Teresa Amabile found that the highest predictor of engagement is “small wins”. Early value delivery connects the team directly to multiple small wins.
  • Customers get their problems solved much earlier, and can provide feedback along the way. They don’t have to wait months or years for the highest priority solutions to be delivered.
  • Executives are measured and compensated based on their ability to increase shareholder value, which means they need to deliver as much value as early as possible at the lowest cost possible. Scrum helps executives meet this goal, making both them and their shareholders happy.
  • Shareholders love the improved time to market and better product/market fit based on early feedback. They also benefit from decreasing risk and opportunity cost by increasing revenue over the course of the development effort, rather than waiting until a big bang release.
Cultural Impact of Improved Value Delivery

Scrum is often described as an empirical framework, meaning that it provides transparency with frequent opportunities to inspect and adapt. Delivering working products and services early creates a cultural shift from predicting to testing hypotheses in the market. This shift to an empirical approach significantly reduces the power imbalance between managers and teams. It creates a new mindset of “let’s try it out and see what we learn”, rather than a mindset of “I made this prediction, you’d better make it happen”.

Using Scrum to Create Team Focus

Traditional organizations tend to structure around areas of expertise. For a software organization, this might mean that all of the coders have one boss, the testers a different boss, the product managers a different one, etc. These different silos of expertise then need to coordinate their efforts to deliver value. This silo approach also tends to result in a focus on improving a single piece of the value stream over optimizing the flow of value through the whole value stream. Coders focus on churning out more code faster, testers focus on executing more and better tests, etc. This can lead to queues of work between the stages of development, something that we know from Theory of Constraints and Lean Thinking (see a nice comparison of the two here) will lead to quality problems, slower delivery, more difficulty adapting to change, and higher frustration as an end result.

From a social standpoint, the traditional approach tends to create teams of similar experts, who work together to get better at their area of expertise, but often end up working in isolation on their piece of the puzzle. Scrum focuses on creating cross-functional teams made up of experts in all of the different areas required to deliver value. This shift to cross-functionality then shifts the focus from “improve my area” to “improve our ability to deliver value”. The entire focus of the team switches to a broader perspective, and in my experience, this leads to a more rewarding outcome. After all, if I got into computer programming, I probably did it because I loved solving interesting problems in a way that made someone’s life easier. Scrum re-connects the work I do writing code to how it solves a customer’s problem.

This shift in focus from “me” to “team” also asks people to work on their social connections in a way that the “throw on the headphones and code away on my assigned task” approach doesn’t. Social connection is critical to a healthy, happy life. Recent studies in neuroscience, psychiatry, and psychology show that humans are meant to be social, and that social connection creates resilience to stress, resistance to addiction, and higher life satisfaction in general. Scrum teams learn to balance the need for individuals to get into flow states with the need to stay connected to each other.

A recent study confirmed what many team experts have been teaching for decades: effective teams voice and work through differing opinions about how to get work done, but do so in a way that focuses on the problem to be solved, not the people solving the problem. Scrum tends to emphasize this approach, leading to higher engagement, team satisfaction, and better business results.

Stakeholder Benefits of Improved Team Focus
  • Employees get stronger connection to each other and to how their work affects customers and users, resulting in higher engagement and job satisfaction.
  • Customers get better results from teams focused on delivering value to them, rather than getting better at their area of expertise.
  • Executives benefit from higher engagement of their people, higher retention, and higher customer satisfaction.
  • Shareholders benefit in the same way as executives.
Cultural Impact of Improved Team Focus

Shifting the focus from “me” to “we” is a key cultural benefit of scrum. No longer are we solitary heroes or zeroes, but engaged members of teams focused on delivering value to customers.

Using Scrum to Build Trust

When teams regularly deliver value, sprint after sprint, the traditional “plan and control” efforts become superfluous.

  • We no longer need weekly status reports to make sure things are on track, we simply go to the sprint review and see what the team has built.
  • We no longer try to shift “resources” around to deal with the crisis-du-jour, we simply pass urgent needs on to a stable team, whose process is built to address such work.
  • As teams build the technical infrastructure to do automated testing and continuous integration, we trust that we can change plans whenever we discover a more important need.
  • As teams work together, inspecting and adapting their process to improve value delivery, they build trust amongst each other that they’re all in it for the same shared purpose, and will contribute wholly to the effort.

The net result of all of this is trust, perhaps the most powerful catalyst of cultural change.


Trust allows people to feel safe trying something new. Maybe it will work, maybe it won’t, but with trust, we’re willing to give it a shot. The fear of failure is mitigated. Without the willingness to experiment with new ideas, behaviors, and processes, we are simply allowing water to flow down the same path as always. We need to dig a little canal, maybe dam up another area, throw some rocks in another – something to see if we can find a better way to deliver value.

We want our culture to exhibit what author Nassim Taleb calls “antifragile” characteristics; that is, the culture improves in the presence of stress and change, rather than breaking (fragile), resisting (robust), or flexing for a moment only to return back to normal (resilient). If we aren’t experimenting, we are not improving, and scrum is built to experiment and build the trust required to try new things. It’s one of the few reliable patterns I’ve seen for creating lasting cultural change in an organization.

Want Help?

If you are interested in trying this out in your organization, Agile For All has helped hundreds of organizations implement scrum as a first step to improving the overall work culture, helping leaders balance culture and strategy. Feel free to contact us, or email me directly at to learn more about how we can help!

The post Scrum as an agent of culture change part 2 appeared first on Agile For All.

Categories: Blogs

How to Know if TDD is Working

Powers of Two - Rob Myers - Thu, 09/03/2015 - 03:10
How will you know if TDD is working for your teams, program, or organization?

I've noticed that small, independent teams typically don't ask this.  They are so close to the end-points of their value-stream that they can sense whether a new discipline is helping or hindering.

But on larger programs with multiple teams, or a big "roll-out" or "push" for quality practices, leaders want to know whether or not they're getting a return on investment. Sometimes they ask me, point-blank: "How long before I recoup the cost of your TDD training and coaching?" There are a lot of variables, of course, and knowing when you've reached break-even is going to depend on what you've already been measuring. Frankly, you're not going to be able to measure the change in a metric you're not already measuring.

Nevertheless, you may be able to tell simply by the morale on the teams. In my experience, there's always a direct correlation between happy employees and happy customers, and a direct correlation between happy customers and happy stakeholders. That's the triple-win: what's truly good for customers and employees is good for stakeholders.

So I've assembled a few notes about quality metrics.

Metrics I like
(Disclaimer: I may have my "lead" and "cycle" terminology muddled a little.  If so I apologize. Please focus on the simplicity of these metrics.  I'll fix this post as time allows.)

Here are some metrics I've recommended in the past.  I'm not suggesting you must track all of these.
  • Average lead time for defect repair: Measure the time between defect-found and defect-fixed, by collecting the dates of these events.  Graph the average over time.
  • Average cycle time for defect repair: Measure the time between decide-to-fix-defect and defect-fixed, by collecting the dates of these events. Graph the average over time.
  • A simple count of unfixed, truly high-priority defects.  Show-stoppers and criticals, that sort of thing.  Graph the count over time.
Eventually, other quality metrics could be used.  Once a team is doing well, Mean Time Between Failures (MTBF), which assumes a very short (near-zero) defect lead time, can be used.
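As a sketch of how little machinery these metrics need, here is a small example that derives average lead and cycle time from the three event dates described above. The defect records are hypothetical, invented only to make the calculation runnable.

```python
from datetime import date

# Hypothetical defect records: the dates each defect was found,
# decided-to-fix (fix_started), and fixed.
defects = [
    {"found": date(2015, 6, 1), "fix_started": date(2015, 6, 10), "fixed": date(2015, 6, 12)},
    {"found": date(2015, 6, 3), "fix_started": date(2015, 6, 4),  "fixed": date(2015, 6, 9)},
    {"found": date(2015, 6, 8), "fix_started": date(2015, 6, 15), "fixed": date(2015, 6, 16)},
]

def average_days(pairs):
    """Average number of days between each (start, end) date pair."""
    spans = [(end - start).days for start, end in pairs]
    return sum(spans) / len(spans)

# Lead time: defect-found to defect-fixed.
lead = average_days((d["found"], d["fixed"]) for d in defects)
# Cycle time: decide-to-fix to defect-fixed.
cycle = average_days((d["fix_started"], d["fixed"]) for d in defects)

print(lead, cycle)  # about 8.33 and 2.67 days for the sample data
```

Graphing these two averages over time (per sprint or per month) is all the reporting the recommended metrics require.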

On one high-performing team I worked on way back in 2001, we eventually focused on one metric:  "Age of Oldest Defect."  It really got us to dig into one old, ornery, hard-to-reproduce defect with a ridiculously simple work-around (i.e., "Please take a deep breath and resubmit your request" usually did the trick, which explains why we weren't compelled to fix it for quite some time).  This bug was a great representation of the general rule of bug-fixing:  Most bugs are easy to fix once found, but very difficult to locate!  (Shout out to Al Shalloway of Net Objectives for teaching me that one.)

I also suggest that all teams keep an eye on this one:  Average cycle &/or lead times for User Stories, or Minimal Marketable Features. On the surface, this sounds like a performance metric.  I suppose if the work-items are surely arriving in a most-important-thing-first order, then it's a reasonable proxy for "performance."  But its real purpose is to help diagnose and resolve systemic (i.e., "process") issues.

What’s truly important about measuring these:
  1. Start measuring as soon as possible, preferably gaining some idea of what things look like before making broad changes, e.g., before I deliver my Essential Test-Driven Development course, and follow-on TDD coaching, to your teams.
  2. The data should be collected as easily as possible: Automatically, or by an unobtrusive, non-managerial, third party. Burdening the team with a lot of measurement overhead is often counterproductive:  The measurement data suffers, productivity suffers, morale suffers.  
  3. The metrics must be used as "informational" and not "motivational": They should be available to the team, first and foremost, so that the team can watch for trends. Metrics must never be used to reward or punish the team, or to pit teams within the same program or organization against each other. 
If you want (or already have) highly-competitive teams, then consider estimating Cost of Delay and CoD/Duration (aka CD3, estimated by all involved "levels" and "functions"), customer conversions, customer satisfaction, and other Lean Startup metrics; and have your whole organization compete against itself to improve the throughput of real value, and compete against your actual competitors.

A graph sent (unsolicited) to me from one client. Yeah, it'd be great if they had added a "value" line. Did I mention unsolicited? Anyway, there's the obvious benefit of fewer defects. Also note that bugs-found is no longer oscillating at release boundaries. Oscillation is what a system does before tearing itself apart.

Metrics I didn't mention

Velocity: Estimation of story points and the use of velocity may be necessary on a team where the User Stories vary considerably in size. Velocity is an important planning tool that gives the team an idea of whether the scope they have outlined in the release plan will be completed by the release date.

Story points and velocity (SPs/sprint) give information similar to cycle time, just inverted.

To illustrate this:  Often Scrum teams who stop using sprints and release plans in favor of continuous flow will switch from story points per sprint to average cycle time per story point. Then, if the variation in User Story effort diminishes, they can drop points entirely and measure average cycle time per story.

The problem with using velocity as a metric to track improvements (e.g., the use of TDD) is this:  As things improve, story-point estimates (an estimate of effort, not time) may actually drop for similar stories.  We expect velocity to stabilize, not increase, over time.  Velocity is for planning; it's a poor proxy for productivity.
Code coverage: You could measure code coverage (how much of the code is exercised via tests, particularly unit tests) and watch the trends, similar to the graph above (they measured number-of-tests). This is fine, again, if used as an informational metric and not a motivational metric. Keep in mind that it's easy for an informational metric to be perceived as motivational, which makes it motivational. The trouble with code coverage is that it is too much in the hands of those who feel motivated to improve it, and they may subconsciously "game" the metric.

About 10 years ago, I was working with a team who had been given the task of increasing their coverage by 10% each iteration. When I got there, they were at 80%, and very pleased with themselves. But as I looked at the tests, I saw a pattern: No assertions (aka expectations)! In other words, the tests literally exercised the code but didn't test anything. When I asked the developers, they looked me in the eyes, straight-faced, and said, "Well, if the code doesn't throw an exception, it's working."

Of course, these junior developers soon understood otherwise, and many went on to do great things in their careers. But they really did think, at the time, they were correctly doing what was required!

The metrics that I do recommend are more difficult to "game" by an individual working alone.  Cycle-times are a team metric.  (Yes, it's possible a team could conspire to game those metrics, but they would have to do so consciously, and nefariously.  If you don't, or can't, trust your team to behave as professionals, no metric or engineering practice is going to help anyway.  You will simply fail to produce anything of value.)

Please always remember:  You get what you measure!

Categories: Blogs

Framing the Question

George Dinwiddie’s blog - Thu, 09/03/2015 - 02:25

“I need this project done by date D and within cost budget C. Now calculate an estimate on the project.”

A friend of mine used this example to illustrate anchoring bias in estimation. Note, however, that he doesn’t make the question explicit. Further conversation revealed that he had in mind that the date and cost should be the output of the estimation. With that assumption, that statement preceding the request will definitely anchor the answer, and realizing that this bias is likely will call into question whatever estimate is given.

Given the stated need, however, I would reframe the call for an estimate from “When will this project be done and how much will it cost” to “What is the likelihood that the project can be done within these constraints?” When the time and money constraints are already known, it’s somewhat disingenuous to ask that these values be estimated. If the estimate exceeds the budget, then it’s likely that a negotiation will follow. Such a negotiation helps bring the estimate in line with the budget, but generally does little to bring the actual costs in line.

Reframing the question is much more helpful. It should provoke a response along the lines of “We feel 80% confident that we can meet the time and money budget based on these assumptions…. We have identified … as risks that would endanger meeting the budget and which should be monitored closely.” Another possibility is that the response is of the form “There is a 10% chance that this project can be accomplished within the time and money constraints. If you’d like, we can explore finding a subset of the project that has a higher likelihood of completion but still has value.”

Either of these answers is more useful for the person with the budget who wants to decide whether to invest in the project or not. Sometimes the best outcome for a project is to kill it before starting when it’s unlikely to result in a happy conclusion. Other times we know to proceed cautiously, and can keep an eye on variables that will indicate whether we’re on a path to success or failure given the stated constraints. We should have alternate plans that we can turn to in the event that one or more of the risks materialize.

Such an estimate doesn’t guarantee success, but it does give us a feel for the merits of pursuing the endeavor. The associated assumptions and risks provide information to help us understand whether or not we’re still on track. This probabilistic estimate is much more helpful than is an estimate of the values.
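One way to produce such a probabilistic answer, rather than a single point value, is a quick Monte Carlo simulation over per-task estimates. The task list, three-point estimates, and budget below are invented purely for illustration.

```python
import random

random.seed(42)  # deterministic runs, for illustration only

# Hypothetical (min, likely, max) effort estimates per task, in days.
tasks = [(3, 5, 10), (8, 13, 21), (2, 3, 8), (5, 8, 20)]
budget_days = 35

def simulate_once():
    """One possible project duration: draw each task from a triangular distribution."""
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

trials = 10_000
hits = sum(simulate_once() <= budget_days for _ in range(trials))
confidence = hits / trials

print(f"~{confidence:.0%} chance of finishing within {budget_days} days")
```

The output is exactly the shape of answer argued for above: a likelihood of meeting the constraints, which the budget holder can then use to decide whether to proceed, descope, or kill the project.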

Not all situations benefit from an estimate of the probabilities. We should think about what sort of answer would really help us, and ask the question that will produce an answer of that sort. Asking for “an estimate” without knowing and communicating the form of answer that will be useful is unlikely to give us appropriate information.


Categories: Blogs

Call for GOAT15 Speakers

Notes from a Tool User - Mark Levison - Wed, 09/02/2015 - 22:05

Gatineau-Ottawa Agile Tour 2015The Call for Presentations for GOAT 15 has just opened. The 2015 edition of the Gatineau-Ottawa Agile Tour, affectionately known as GOAT, will take place on November 23rd, and be a value-packed conference for the 300+ professionals expected to attend this year.

Would you like to present a case study or a report on your experience implementing Lean or Agile within your business or workplace? Have you discovered a workshop so useful that you would like to share it with others? Would you like to take the opportunity to share your knowledge on Lean or Agile? If yes, Agile Ottawa organizers would love to hear what you have to offer.

Submit your presentation ideas at Deadline for submissions is September 30.

Categories: Blogs

New download – SAFe in 8 Pictures

Agile Product Owner - Wed, 09/02/2015 - 15:13

Hello everyone,

SAFe Foundations has been the cornerstone PowerPoint presentation for you to introduce the Scaled Agile Framework in your enterprise. Now there is a new, lightweight tool for introducing: (1) the levels, (2) the people, (3) the backlogs, (4) the cadence, (5) code quality, (6) relentless improvement, (7) economic prioritization, and (8) the full Big Picture.

The idea for this deck came about last year when I was presenting SAFe to an executive team. Three slides into my 35-slide deck, they asked me to turn off the projector. Standing in front of a 3 x 5 foot image of the Big Picture, we spent the remaining two hours discussing different dimensions of SAFe: the levels, the people, the backlogs…  The dynamic approach resonated.  I began using this same simple approach again and again.

SAFe in 8 Pictures deck

There are no bullets: only 8 pictures and 16 words. The graphical nature of “SAFe in 8 Pictures” allows you to tailor your talk to any audience. Behind it are robust speaker notes to get you started quickly. From there, you can continue building your knowledge with other content from this site.

Download SAFe in 8 Pictures 



Categories: Blogs

RabbitMQ – Best Practices For Designing Exchanges, Queues And Bindings?

Derick Bailey - new ThoughtStream - Wed, 09/02/2015 - 13:30

A question was asked on StackOverflow about best practices for RabbitMQ exchanges, queues and bindings. While this question was technically “off topic” for StackOverflow, I answered it anyways because it’s a common set of questions and offers insight in to a few points of confusion when starting out with RabbitMQ. 

Ex q bind

One Exchange, Or Many?

The core parts of the question include:

I’m looking for best practices for the design of the system regarding topics/queues etc. One option would be to create a message queue for every single event which can occur in our system, for example:


I think it is not the right approach to create hundreds of message queues, is it?

Isn’t it better to just have something like “user-service-notifications” as one queue and then send all notifications to that queue? I would still like to register listeners just to a subset of all events, so how to solve that?

My second questions: If I want to listen on a queue which was not created before, I will get an exception in RabbitMQ. I know I can “declare” a queue with the AmqpAdmin, but should I do this for every queue of my hundreds in every single microservice, as it always can happen that the queue was not created so far?

It’s a difficult set of questions to answer, but also a common set. There are a lot of options and possibilities for which type of exchange is used, and how the bindings are configured for routing messages. 

One Or Many Exchanges?

I generally find it is best to have exchanges grouped by object type / exchange type combinations. In the example of user events, you could do a number of different things depending on what your system needs.

Exchange Per Event

In one scenario, it might make sense to have an exchange per event, as you’ve listed. You could create the following exchanges:

| exchange     | type   |
| user.deleted | fanout |
| user.created | fanout |
| user.updated | fanout |

This would fit the pub/sub pattern of broadcasting events to any listeners, with no concern for what is listening.

With this setup, any queue that you bind to any of these exchanges will receive all messages that are published to the exchange. This is great for pub/sub and some other scenarios, but it might not be what you want all the time, since you won’t be able to filter messages for specific consumers without creating a new exchange, queue and binding.

Exchange Per Object Type

In another scenario, you might find that there are too many exchanges being created because there are too many events. You may also want to combine the exchange for user events and user commands. This could be done with a direct or topic exchange:

| exchange     | type   |
| user         | topic  |

With a setup like this, you can use routing keys to publish specific messages to specific queues. For example, you could publish user.event.created as a routing key and have it route to a specific queue for a specific consumer.

| exchange     | type   | routing key        | queue              |
| user         | topic  | user.event.created | user-created-queue |
| user         | topic  | user.event.updated | user-updated-queue |
| user         | topic  | user.event.deleted | user-deleted-queue |
| user         | topic  | user.cmd.create    | user-create-queue  |

With this scenario, you end up with a single exchange, and routing keys are used to distribute the message to the appropriate queue. Notice that I also included a “create command” routing key and queue here. This illustrates how you could combine patterns, though.
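To see how this routing behaves, here is a small, broker-free sketch that simulates AMQP topic matching in Python (per the AMQP spec, * matches exactly one word and # matches zero or more words; this simplified matcher handles the common trailing-# and * cases). The queue names and bindings below follow the tables above.

```python
import re

def topic_matches(binding: str, routing_key: str) -> bool:
    """Simplified AMQP topic matching: '*' = exactly one word, '#' = zero or more words."""
    pattern = re.escape(binding)
    pattern = pattern.replace(r"\.\#", r"(?:\..+)?")  # trailing '.#': optional dot plus anything
    pattern = pattern.replace(r"\#", r".*")           # bare '#': anything at all
    pattern = pattern.replace(r"\*", r"[^.]+")        # '*': exactly one dot-delimited word
    return re.fullmatch(pattern, routing_key) is not None

# Hypothetical bindings on a single "user" topic exchange.
bindings = {
    "user-created-queue": "user.event.created",  # one specific event
    "user-events-queue":  "user.event.*",        # any single user event
    "user-all-queue":     "user.#",              # everything about users
}

def route(routing_key):
    """Queues a message with this routing key would be delivered to."""
    return sorted(q for q, b in bindings.items() if topic_matches(b, routing_key))

print(route("user.event.created"))
# ['user-all-queue', 'user-created-queue', 'user-events-queue']
print(route("user.cmd.create"))
# ['user-all-queue']
```

Note how the same published message fans out to every queue whose binding matches, which is exactly how one topic exchange can serve both the per-event consumers and a catch-all consumer.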

Registering Messages For Specific Consumers

Later in the question set, this person asks:

I would still like to register listeners just to a subset of all events, so how to solve that?

If you need specific messages to go to specific consumers, you do this through the routing and bindings. Trying to filter once the message is in the queue is an anti-pattern in RabbitMQ.

This leaves you with options for how you would set up the pre-filtering of messages, using routing.

By using a fanout exchange, you would create queues and bindings for the specific events you want to listen to. Each consumer would create its own queue and binding.

By using a topic exchange, you could set up routing keys to send specific messages to the queue you want, including all events, by using a wildcard in the binding. In AMQP topic syntax, the # wildcard matches zero or more words, so a wildcard binding sends every matching event to the queue for that binding. 

There are still other options and configurations, of course. Without knowing the specifics of a given scenario, it is hard to give a solid answer.

When To Declare Queues And Bindings

The last question in this set deals with when to create your queues.

If I want to listen on a queue which was not created before, I will get an exception in RabbitMQ. I know I can “declare” a queue with the AmqpAdmin, but should I do this for every queue of my hundreds in every single microservice, as it always can happen that the queue was not created so far?

There are two ways to do this:

  1. Pre-define the exchanges, queues and bindings
  2. Define them at runtime

Pre-Define Exchanges

In my experience, pre-defining exchanges, queues and bindings becomes difficult. RabbitMQ and the AMQP specification allow, and often require, configuration to be done through the communication protocol itself.

You don’t need a third party application to configure the layout of your RabbitMQ installation. You can use the Web Administration plugin for this if you want. But, in my experience, this leads to problems with large applications. You will run into scenarios where the queue name and binding cannot be pre-determined.

There is some use for this, though. There will be times when you need an exchange to always be around or you want to do some quick testing to make sure things are set up correctly. The Web Admin plugin is very useful, here.

Dynamically Defined Exchanges

Because of the way AMQP works, and the need to define queues and bindings dynamically, I find it best to have each message consumer declare the queues and bindings it needs before trying to attach to them. This can be done when the application instance starts up, or you can wait until the queue is needed. Again, this depends on what your application needs.

I tend to prefer to wait until the application needs it… but there are some cases where the queue and binding should be there before the code runs, to ensure all possible messages are caught. In that scenario, having the queues pre-defined at application startup can help.
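As a sketch of the startup-time approach, the code below mirrors pika's channel API (exchange_declare, queue_declare, queue_bind); the stub channel stands in for a real broker connection so the idea can be shown without RabbitMQ running, and the exchange, queue, and routing-key names come from the earlier examples. AMQP declarations are idempotent, so running this at every startup is safe:

```python
# Stub that records the same calls a pika channel would receive.
class StubChannel:
    def __init__(self):
        self.declared = []

    def exchange_declare(self, exchange, exchange_type):
        self.declared.append(("exchange", exchange, exchange_type))

    def queue_declare(self, queue):
        self.declared.append(("queue", queue))

    def queue_bind(self, queue, exchange, routing_key):
        self.declared.append(("bind", queue, exchange, routing_key))

def ensure_topology(channel):
    """Declare everything this consumer needs; safe to call on every startup."""
    channel.exchange_declare(exchange="user", exchange_type="topic")
    channel.queue_declare(queue="user-created-queue")
    channel.queue_bind(queue="user-created-queue",
                       exchange="user",
                       routing_key="user.event.created")

channel = StubChannel()
ensure_topology(channel)
```

With a real connection, you would pass the actual channel to ensure_topology before starting to consume, guaranteeing the queue and binding exist before any message could be missed.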

Learn Through Others’ Experiences

I know the answers I’m providing are rather vague and full of options, rather than real answers. Ultimately, there is no right or wrong answer for which exchange type and configuration to use without knowing the specifics of each system’s needs.

Truthfully, you could use any exchange type for just about any purpose. There are tradeoffs with each one, and that’s why each application will need to be examined closely to understand which one is correct.

If you’re interested in learning more about the tradeoffs and how to make decisions around these questions, I’ve written a small eBook that covers these topics. This book takes a rather unique perspective of telling stories to address many of these questions, though sometimes indirectly.

Step inside the mind of another developer and learn how to make decisions about RabbitMQ structure and layout.

Categories: Blogs

Agile in the Real Trenches

TV Agile - Wed, 09/02/2015 - 13:22
The Agile principle of “trusting work to the self-organizing, self-managing teams” is radically different from the military doctrine of “strict top-down hierarchy, command & control”. Yet almost 100 years ago this idea was tried out on many battlefields and the consequences are still felt today. Video producer:
Categories: Blogs

The Essential Skill of a Good Product Owner

Scrum Expert - Wed, 09/02/2015 - 13:17
There are many books and trainings that describe what a good Product Owner should be in a Scrum team. I have been a Product Owner for more than 5 years now. During this time I learned that there is only one skill that is essential to a good Product Owner. I am ready to share it with you, together with a few “stories from the trenches” that will illustrate ...
Categories: Communities

MSBuild SonarQube Runner now available on Visual Studio Online

Sonar - Wed, 09/02/2015 - 09:13

The MSBuild SonarQube Runner TFS 2015 Build Tasks are now available out of the box on Visual Studio Online, and even on Hosted Build Agents! This means that SonarQube analysis can now be enabled in a few clicks on any Visual Studio Online project without having to install anything!

I could tell you more, but Jean-Marc Prieur from Microsoft has already done such a beautiful job that you should just read what he wrote in his Visual Studio ALM blog post.

Categories: Open Source
