
Feed aggregator

How we (un)plan the future

TargetProcess - Edge of Chaos Blog - Fri, 10/14/2016 - 10:58

We have made some huge changes in our prioritization and planning process this year. In a nutshell, we have switched to open allocation. Here is the story.

Old way: boards, feature ranking, top-down approach

During the last several years we used to have a Product Board. This was a committee that focused on annual product plans. It consisted of up to a dozen people in various roles, from sales to development. We discussed our product strategy and set high-level goals (like "increase market share in the enterprise market"). We created a ranking model that we used to prioritize features and create roadmaps:

Features ranking model

It kinda worked, but at some point I realized that we were pushing more and more features into Targetprocess, making it ever more complex and heavy. Many people inside the company were not happy with this direction and did not believe in it. Large customers demanded complex features like more flexible team management, people allocation, an advanced QA area, etc. These are all good features, but we, as a company, had somehow lost the feel for the end-user experience. Simple things like search, navigation, performance, and simplicity were buried under fancy new features. This year, we put an end to that approach.

We want to create a tool that is pleasant to use. A tool that boosts your productivity and is almost invisible. A tool that saves you time. To achieve this goal, we have to go back to basics. We should fix and polish what we already have in Targetprocess (and we have a lot), and then move forward with care to create new modules and explore new possibilities.

We have disbanded the Product Board, removed feature prioritization, and done away with the top-down approach to people/team allocation, replacing all of this with a few quite simple rules.

New way: Product Owner, Initiatives, and Sources

The Product Owner sets a very high-level strategic theme for the next 1-2 years. Our current theme is very simple to grasp:

Focus on pains and misfits

Basically, we want to do anything that reduces complexity, simplifies basic scenarios like finding information, improves performance, and fixes your pain points in our product.

This does not mean that we will not add new features. For example, the current email notification mechanism is really outdated, so we are going to replace it and implement in-app notifications. But, most likely, we will not add new major modules into Targetprocess in the near future. Again, we are focusing on existing users and their complaints.


Our people have virtually full freedom to start an Initiative that relates to the strategic theme. An Initiative is a project with start/end dates, a defined scope, and a defined team. It can be as short as 2 weeks with a single person on the team, or as large as 3 months with 6-8 people.

There are just three simple rules:

  1. Any person can start an Initiative. The Initiative should be approved by the Product Owner and the Technical Owner (we plan to use this approval mechanism for some time in order to check how the new approach goes). The Initiative should have a deadline defined by the Team.
  2. Any person can join any Initiative.
  3. Any person can leave an Initiative at any time.
Sources and Helpers

A Source is the person who started the Initiative. He or she assembles the team, defines the main problem the Initiative aims to solve, and is fully responsible for the Initiative's success. The Source makes all final decisions, functional, technical, and otherwise. (Remember, Helpers are free to leave the Initiative at any time, so there is a built-in mechanism to check poor leadership.)

A Helper is a person who joins an Initiative and commits to helping complete it by the agreed deadline. He or she should focus on the Initiative and make it happen.

The Initiative deadline day is pretty significant. Two things should happen on the deadline day:

  • The Source makes a company-wide demo. They show the results to the whole company and explain what the team has accomplished.
  • The Initiative should be live on production.

As you see, freedom meets responsibility here. People are free to start Initiatives and work on almost anything, but they have to meet their deadlines and deliver the defined scope. This creates significant peer pressure, since you don't want to show bad results during the demo.

This process was started in July. We still have a few teams finalizing old features, but the majority of developers are working in the Initiatives mode now. Here's a screenshot of the Initiatives currently in progress:

Initiatives timeline

The Initiatives in the Backlog are just markers; some of them will not go into development, and there is no priority here. Next is the Initiatives Kanban Board:

Initiatives Kanban Board

You may ask: how do we define what is most important? The answer is: it does not matter. If we have a real pain from customers, and we have a few people who really want to solve this problem, it will be solved. Nobody can dictate a roadmap and nobody can set priorities, not even the Product Owner. The Product Owner can start their own Initiatives (if they can get enough Helpers) or decline some Initiatives (if it takes tooooo long or doesn't fit the strategic theme).

As a result, we don't have roadmaps at all. We don't discuss priorities. And we can't answer questions like "when will you have better Git integration?" We can only make promises about things that have already started (you can see some of them above). Everyone inside our company cares about making our customers happy with the product, and now they have real power to react faster and help you.

We can also promise that Targetprocess will become easier, faster, and more useful with every new release.

Categories: Companies

MediatR Pipeline Examples

Jimmy Bogard - Thu, 10/13/2016 - 21:02

A while ago, I blogged about using MediatR to build a processing pipeline for requests in the form of commands and queries in your application. MediatR is a library I built (well, extracted from client projects) to help organize my applications into a CQRS architecture, with distinct messages and handlers for every request in the system.

So when processing requests gets more complicated, we often rely on a mediator pipeline to provide a means for these extra behaviors. It doesn't always show up – I'll start without one before deciding to add it. I've also not built it directly into MediatR – because frankly, it's hard, and there are existing tools to do so with modern DI containers. First, let's look at the simplest pipeline that could possibly work:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        return _inner.Handle(message);
    }
}
Nothing exciting here; it just calls the inner handler, the real handler. But now we have a baseline onto which we can layer additional behaviors.

Let’s get something more interesting going!

Contextual Logging and Metrics

Serilog has an interesting feature where it lets you define contexts for logging blocks. With a pipeline, this becomes trivial to add to our application:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        {
            return _inner.Handle(message);
        }
    }
}
In our logs, we’ll now see a logging block right before we enter our handler, and right after we exit. We can do a bit more, what about metrics? Also trivial to add:

using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
using (Metrics.Time(Timers.MediatRRequest))
{
    return _inner.Handle(message);
}

That Time class is just a simple wrapper around the .NET timer classes, with some configuration checking, etc. Those are the easy ones; what about something more interesting?
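The post doesn't show that wrapper, but a minimal sketch, assuming a Time helper that returns an IDisposable, might look like this (the console sink is purely illustrative):

using System;
using System.Diagnostics;

public static class Metrics
{
    // Returns an IDisposable so the timer can be used in a using block.
    public static IDisposable Time(string timerName)
    {
        return new TimerScope(timerName);
    }

    private class TimerScope : IDisposable
    {
        private readonly string _name;
        private readonly Stopwatch _stopwatch = Stopwatch.StartNew();

        public TimerScope(string name)
        {
            _name = name;
        }

        public void Dispose()
        {
            _stopwatch.Stop();
            // Publish the elapsed time to your metrics sink of choice.
            Console.WriteLine("{0}: {1} ms", _name, _stopwatch.ElapsedMilliseconds);
        }
    }
}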

Validation and Authorization

Oftentimes, we have to share handlers between different applications, so it's important to have a framework-agnostic means of handling cross-cutting concerns. Rather than bury our concerns in framework- or application-specific extensions (like, say, an action filter), we can instead embed this behavior in our pipeline. First, for validation, we can use a tool like Fluent Validation with validator handlers for a specific type:

public interface IMessageValidator<in T>
{
    IEnumerable<ValidationFailure> Validate(T message);
}

What's interesting here is that our message validator is contravariant, meaning a validator of a base type works for messages of a derived type. That means we can declare common validators for base types or interfaces that our messages inherit/implement. In practice, this lets me share common validation amongst multiple messages simply by implementing an interface.
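As a hedged illustration (these message types are hypothetical, not from the original post), a single validator against a shared interface can cover every message that implements it:

using System;
using System.Collections.Generic;
using FluentValidation.Results;

// Hypothetical base interface implemented by several messages.
public interface IHaveCustomerId
{
    Guid CustomerId { get; }
}

// Because IMessageValidator<in T> is contravariant, this one validator can be
// resolved anywhere an IMessageValidator<TDerived> is expected, for any
// message type that implements IHaveCustomerId.
public class CustomerIdValidator : IMessageValidator<IHaveCustomerId>
{
    public IEnumerable<ValidationFailure> Validate(IHaveCustomerId message)
    {
        if (message.CustomerId == Guid.Empty)
            yield return new ValidationFailure("CustomerId", "Customer id is required.");
    }
}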

Inside my pipeline, I can execute my validation by taking a dependency on the validators for my message:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators)
    {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            var failures = _validators
                .Select(v => v.Validate(message))
                .SelectMany(result => result)
                .Where(f => f != null)
                .ToList();

            if (failures.Any())
                throw new ValidationException(failures);

            return _inner.Handle(message);
        }
    }
}
And bundle up all of the errors into a potential exception. The downside of this approach is that I'm using exceptions for control flow, so if that becomes a problem, I can instead wrap my responses in some sort of Result object that carries any validation failures. In practice, exceptions seem fine for the applications we build.
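A minimal sketch of such a Result wrapper, assuming FluentValidation's ValidationFailure type, might look like:

using System.Collections.Generic;
using FluentValidation.Results;

public class Result<TResponse>
{
    private Result(TResponse value, IReadOnlyCollection<ValidationFailure> failures)
    {
        Value = value;
        Failures = failures;
    }

    public TResponse Value { get; }
    public IReadOnlyCollection<ValidationFailure> Failures { get; }
    public bool IsSuccess { get { return Failures.Count == 0; } }

    // Successful result: a value and no failures.
    public static Result<TResponse> Success(TResponse value)
    {
        return new Result<TResponse>(value, new ValidationFailure[0]);
    }

    // Failed result: no value, just the collected validation failures.
    public static Result<TResponse> Failure(IReadOnlyCollection<ValidationFailure> failures)
    {
        return new Result<TResponse>(default(TResponse), failures);
    }
}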

Again, my calling code INTO my handler (the Mediator) has no knowledge of these new behaviors, nor does my handler. I go to one spot to augment and extend behaviors across my entire system. Keep in mind, however, that I still place my validators beside my message, handler, view, etc. using feature folders.

Authorization is similar, where I define an authorizer of a message:

public interface IMessageAuthorizer
{
    void Evaluate<TRequest>(TRequest request) where TRequest : class;
}
Then in my pipeline, check authorization:

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer)
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            // Check authorization first; the authorizer throws if the request is not permitted.
            _authorizer.Evaluate(message);

            var failures = _validators
                .Select(v => v.Validate(message))
                .SelectMany(result => result)
                .Where(f => f != null)
                .ToList();

            if (failures.Any())
                throw new ValidationException(failures);

            return _inner.Handle(message);
        }
    }
}
The actual implementation of the authorizer will go through a series of security rules, find matching rules, and evaluate them against my request. Some examples of security rules might be:

  • Do any of your roles have permission?
  • Are you part of the ownership team of this resource?
  • Are you assigned to a special group that this resource is associated with?
  • Do you have the correct training to perform this action?
  • Are you in the correct geographic location and/or citizenship?

Things can get pretty complicated, but again, all encapsulated for me inside my pipeline.
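As a sketch of that composition (these rule types are hypothetical, not part of MediatR), the authorizer might look like:

using System.Collections.Generic;
using System.Linq;

public interface ISecurityRule
{
    bool Matches(object request);
    void Evaluate(object request); // throws when the request is not permitted
}

public class RuleBasedAuthorizer : IMessageAuthorizer
{
    private readonly IEnumerable<ISecurityRule> _rules;

    public RuleBasedAuthorizer(IEnumerable<ISecurityRule> rules)
    {
        _rules = rules;
    }

    public void Evaluate<TRequest>(TRequest request) where TRequest : class
    {
        // Find the rules that apply to this request and evaluate each one.
        foreach (var rule in _rules.Where(r => r.Matches(request)))
        {
            rule.Evaluate(request);
        }
    }
}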

Finally, what about potential augmentations or reactions to a request?

Pre/post processing

In addition to specific processing needs like logging, metrics, authorization, and validation, there are things I can't predict that one message or group of messages might need. For those, I can build some generic extension points:

public interface IPreRequestProcessor<in TRequest>
{
    void Handle(TRequest request);
}
public interface IPostRequestProcessor<in TRequest, in TResponse>
{
    void Handle(TRequest request, TResponse response);
}
public interface IResponseProcessor<in TResponse>
{
    void Handle(TResponse response);
}

Next I update my pipeline to include calls to these extensions (if they exist):

public class MediatorPipeline<TRequest, TResponse>
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;
    private readonly IEnumerable<IPreRequestProcessor<TRequest>> _preProcessors;
    private readonly IEnumerable<IPostRequestProcessor<TRequest, TResponse>> _postProcessors;
    private readonly IEnumerable<IResponseProcessor<TResponse>> _responseProcessors;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer,
        IEnumerable<IPreRequestProcessor<TRequest>> preProcessors,
        IEnumerable<IPostRequestProcessor<TRequest, TResponse>> postProcessors,
        IEnumerable<IResponseProcessor<TResponse>> responseProcessors)
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
        _preProcessors = preProcessors;
        _postProcessors = postProcessors;
        _responseProcessors = responseProcessors;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            _authorizer.Evaluate(message);

            foreach (var preProcessor in _preProcessors)
                preProcessor.Handle(message);

            var failures = _validators
                .Select(v => v.Validate(message))
                .SelectMany(result => result)
                .Where(f => f != null)
                .ToList();

            if (failures.Any())
                throw new ValidationException(failures);

            var response = _inner.Handle(message);

            foreach (var postProcessor in _postProcessors)
                postProcessor.Handle(message, response);

            foreach (var responseProcessor in _responseProcessors)
                responseProcessor.Handle(response);

            return response;
        }
    }
}

So what kinds of things might I accomplish here?

  • Supplementing my request with additional information not to be found in the original request (in one case, barcode sequences)
  • Data cleansing or fixing (for example, a scanned barcode needs padded zeroes)
  • Limiting results of paged result models via configuration
  • Notifications based on the response

These are all things I could put inside the handlers, but if I want to apply a general policy across many handlers, it can quite easily be accomplished in the pipeline instead.
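For instance, the barcode-padding case above might look something like this (the request type and barcode width are hypothetical):

// Hypothetical example: a pre-processor that pads scanned barcodes with leading zeroes.
public class ScanBarcodeRequest
{
    public string Barcode { get; set; }
}

public class PadBarcodePreProcessor : IPreRequestProcessor<ScanBarcodeRequest>
{
    public void Handle(ScanBarcodeRequest request)
    {
        // Scanned barcodes sometimes arrive truncated; normalize to a fixed width.
        request.Barcode = request.Barcode.PadLeft(12, '0');
    }
}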

Whether you have specific or generic needs, a mediator pipeline can be a great place to apply domain-centric behaviors to all requests, or only matching requests based on generics rules, across your entire application.

Categories: Blogs

The Docker Management Cheatsheet

Derick Bailey - new ThoughtStream - Thu, 10/13/2016 - 18:31

I’ve been doing a lot with Docker in the last few months, and it’s become a staple of my development tool set at this point.

Unfortunately, it’s also a bit difficult to remember all the different commands and options that I use, even when I use them on a regular basis.

To help me and hopefully others that are getting started with Docker, I put together a cheatsheet that lists the most common commands and options, for managing images and containers.

And I’ve made this cheatsheet available for free, for everyone!

Docker management cheatsheet

Download the Docker management cheatsheet from WatchMeCode

The post The Docker Management Cheatsheet appeared first on

Categories: Blogs

Agile Africa – Lean Coffee

Growing Agile - Thu, 10/13/2016 - 11:11
Some meaty topics came up in the first Lean Coffee at Agile Africa, expertly facilitated by Josh Lewis as usual. As we have blog posts on most of these – I’ve included them below. Accountability in Teams Distributed teams Balancing planned work vs Urgent Requests Scrum vs Kanban Who is the “business”?  
Categories: Companies

Announcement: New Leadership Training – First in Canada!

Learn more about transforming people, process and culture with the Real Agility Program

Certified Agile Leadership (CAL 1) Training

Introduction:

Advanced training for leaders, executives and change agents working in Agile environments.

Your success as a leader in an Agile organization requires looking beyond Agile itself. It requires a deep understanding of your organization and your own leadership path. To equip you for this journey, you will gain a strong foundation in understanding organizational culture. From there, you will learn key organization and leadership models that will allow you to understand how your organizational culture really works.

Now you are ready to start the journey! You will learn about organizational growth – how you may foster lasting change in your organization. Key is understanding how to invite change in a complex system. You will also learn about leadership – how you may show up more effectively, and how to help others do the same.

Learning Objective(s):

Though each Certified Agile Leadership course varies depending on the instructor, all Certified Agile Leadership courses intend to create awareness of, and begin the journey toward, Agile Leadership.

Graduates will receive the Certified Agile Leadership (CAL 1) designation.

See Scrum Alliance Website for further details.

Agenda: Agenda (Training Details)

We create a highly interactive, dynamic training environment. Each of you is unique – and so is each training. Although the essentials will be covered in every class, you will be involved in shaping the depth and focus of our time together. Each learning module is treated as a User Story (see photo) and we will co-create a unique learning journey that supports everyone's needs.

The training will draw from the learning areas identified in the overview diagram.

Organizational Culture

“If you do not manage culture, it manages you, and you may not even be aware of the extent to which this is happening.” – Edgar Schein

  • Why Culture? Clarify why culture is critical for Organizational Success.
  • Laloux Culture Model: Discuss the Laloux culture model that will help us clarify current state and how to understand other organizations/models.
  • Agile Culture: Explore how Agile can be seen as a Culture System.
  • Agile Adoption & Transformation: Highlight differences between Agile Adoption and Transformation.
  • Dimensions of Culture: Look at key aspects of culture from “Reinventing Organizations”. Where are we and where might we go?
  • Culture Case Studies: Organizational Design: Explore how leading companies use innovative options to drive cultural operating systems.
Leadership & Organizational Models
  • Theory X – Theory Y: Models of human behaviour that are implicit in various types of management systems.
  • Management Paradigms: Contrast of Traditional “Modern” Management practices with Knowledge worker paradigm.
  • The Virtuous Cycle: Key drivers of success emergent across different high-performance organizational systems.
  • Engagement (Gallup): Gallup has 12 proven questions linked to employee engagement. How can we move the needle?
  • Advice Process: More effective decision-making using Advice Process. Build leaders. Practice with advice cards.
  • Teal Organizations: Explore what Teal Organizations are like.
Leadership Development
  • Leading Through Culture: How to lead through culture so that innovation and engagement can emerge.
  • VAST – Showing up as Leaders: VAST (Vulnerability, Authentic connection, Safety, & Trust) guides us in showing up as more effective leaders.
  • Temenos Trust Workshop: Build trust and charter your learning journey. Intro version of 2 day retreat.
  • Compassion Workshop: How to Use Compassion to Transform your Effectiveness.
  • Transformational Leadership: See how we may “be the change we want to see” in our organizations.
  • Leading Through Context: How to lead through context so that innovation and engagement can emerge.
  • Leadership in Hierarchy: Hierarchy impedes innovation. Listening and language tips to improve your leadership.
Organizational Growth
  • Working With Culture: Given a Culture Gap. What moves can we make? Work with Culture or Transformation.
  • Complex Systems Thinking: Effective change is possible when we use a Complex Systems model. Cynefin. Attractors. Emergent Change.
  • Healthy “Agile” Initiatives: How to get to a healthy initiative. How to focus on the real goals of Agile and clarify WHY.
  • People-Centric Change: The methods we use to change must be aligned with the culture we hope to foster. How we may change in a way that values people.
  • Transformation Case Study: Walkthrough of how a transformation unfolded with a 100 person internal IT group.
Audience: There are two main audiences that are addressed by this training: organizational leaders and organizational coaches. The principles and practices of organizational culture and leadership are the same regardless of your role. Organizational leaders include executives, vice presidents, directors, managers and program leads. Organizational coaches include Agile coaches, HR professionals, management consultants and internal change leaders.

"The only thing of real substance that leaders do is to create and manage culture." – Edgar Schein

Facilitator(s): Michael Sahota

Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Announcement: New Leadership Training – First in Canada! appeared first on Agile Advice.

Categories: Blogs

Agile Tour Vienna, Austria, November 12 2016

Scrum Expert - Thu, 10/13/2016 - 09:30
The Agile Tour Vienna conference is a one-day event that aims at bringing together experts and practitioners interested in Agile software development and Scrum project management. The main conference language is German, but there are some talks in English. In the agenda of the Agile Tour Vienna conference you can find topics like “Effective User Stories”, “(How) do I do “real” Scrum?”, “Agile 1×1”, “Mobile Testing in Agile Context”, “Scaling Agile Delivery: Turning the Lights On”, “A UX Toolkit for the Product Owner”, “Given/When/Then-ready sprint planning”, “Test automation without a headache: Five key patterns”. Web site: Location for the Agile Tour Vienna conference: FH Technikum Wien, Höchstädtplatz 6, 1200 Vienna, Austria
Categories: Communities

Agile Tour Bangkok, Thailand, October 28-29 2016

Scrum Expert - Thu, 10/13/2016 - 09:00
The Agile Tour Bangkok is a two-day conference focused on Agile software development and Scrum. The main objective of this conference is to promote Agile software development to the community in Thailand. Local and international Agile practitioners will share their rich experiences during the Agile Tour Bangkok. In the agenda of the Agile Tour Bangkok conference you can find topics like "Mindmaps: A killer way to increase your test coverage", "Agile Without a Name", "Integrating Agile Concept Through Education", "Advanced Lean Agile. Scrum beyond the Guide. Beyond Basics. Beyond a single framework. No Dogmas!", "Enterprise Agile Journey with DST", "Enterprise Scaling Strategy", "Agile Business (Transform Your Organization Through Agile)", "The Principles Of Agile Metrics", "Agile and Waterfall from the view of Plato and Aristotle", "Practical Guide for First Time Product Owners", "Do and Don't for Continuous Delivery", "Agile transformation – The role of a QA", "Enterprise Agile Pitfalls". Web site: Location for the Agile Tour Bangkok conference: Montien Riverside Hotel, 372 Rama 3 Road, Bangkhlo, Bangkok 10120, Thailand
Categories: Communities

Continuous Improvement in Lean

Continuous improvement is one of the pillars of a Lean environment. It sounds pretty lofty, doesn’t it? “I...

The post Continuous Improvement in Lean appeared first on Blog | LeanKit.

Categories: Companies

How to use Kanban in Human Resources?

Kanbanery - Wed, 10/12/2016 - 19:37

The article "How to use Kanban in Human Resources?" originally appeared on Kanbanery.

Make your recruitment process easy peasy.

Working in a Human Resources department is not only about dealing with hundreds of people every day, but also with thousands of documents. You have most likely had days when you didn't know where to start; your phone was ringing non-stop, and your inbox was close to exploding. Now imagine a lovely morning where you sit down with a cup of coffee, open your computer, and see all of your work in one place. Or… not just in one place – ON ONE BOARD. How? Stay with me for one second; I will show you what Kanbanery can do for you.

Recruitment process in one place

As an HR expert, you already know that some of your processes have particular steps. Now, what if you could visualize it on your online Kanban board? Like here:

kanban board

Of course, if you need more or fewer columns, or you want to change their names, you can do it very quickly: adjust the number of columns to your needs and name them as you wish. To limit your work in process and avoid the stress that comes with thousands of pending tasks, set a limit called capacity. It will show you how much you can still take on.

kanban board

Are you lost?

Have you ever sent an email to the wrong person? Or attached a job description that was incorrect? Don't worry; it happens to everyone.

Categories: Companies

What Is RabbitMQ? What Does It Do For Me?

Derick Bailey - new ThoughtStream - Wed, 10/12/2016 - 17:33

Wes Bos asked a question on twitter that threw me off a bit. 

@derickbailey I’ve never understood what rabbitmq is / what it’s for. Do you have a post or something that explains what I would use it for?

— Wes Bos (@wesbos)

October 10, 2016

It was a simple question, but I realized that it was one I have never really answered in a blog post or any other material.

What is rmq

So, what is RabbitMQ?

It’s a message broker that makes distributed systems development easy. 

But that’s a terrible answer – it doesn’t really tell you what it does or why you should care about it.

This answer also brings up more questions for anyone that isn’t already familiar with messaging systems. And if you’re already familiar with the concepts, then you probably know what RabbitMQ is and does.

To understand RabbitMQ, look at jQuery AJAX calls.

Take this code as an example, a representative jQuery AJAX call (the URL and data here are illustrative):
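$.ajax({
  url: "/orders",                          // illustrative endpoint
  method: "POST",
  data: { productId: 42, quantity: 1 },    // the "data" the browser passes to the server
  success: function (result) {
    console.log("order accepted", result);
  }
});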

This is a pretty standard looking AJAX call made with jQuery.

It’s also a perfect example of how you’re already using most of the concepts that RabbitMQ encapsulates. 

An AJAX call is distributed computing with messages.

You have a web browser on someone’s computer, and a web server sitting somewhere on the internet.

When the browser makes the AJAX call through jQuery, it takes the “data” parameter and passes it to the web server.

The server looks at the URL that was requested and the data provided, and does some work based on all of that.

The server will send some kind of response back to the browser – whether it is an immediate response saying that the work was done, or just a “200 ok” saying the message was received, or whatever else.

Additional work may be done on the web server, without the browser knowing about it.

This is distributed computing.

You’re moving some of the work from one system (the computer with the browser) to another (the web server).

Think of RabbitMQ as the back-end AJAX.

If an AJAX call is distributed computing for web browsers, then RabbitMQ is distributed computing for servers.

Instead of dealing with HTTP requests that may be exposed to the internet, RabbitMQ is more often used for back-end services.

There are some key differences, of course. But this is less an analogy than it is a direct parallel – a different implementation of the same basic idea. 

Some of these parallels and differences include the following:

AJAX → RabbitMQ:

  • HTTP → AMQP
  • jQuery.ajax → RMQ message "producer" (SDK / API)
  • HTML form encoded data → JSON documents
  • Web Server → RMQ Server / Message Broker
  • API Endpoint / URL → Exchange, Routing Key, Queue
  • Route / request handler (e.g. MVC controller / action) → RMQ message "consumer" (SDK / API)

There's more subtlety and stark contrast in this comparison than I am explaining in this simple mapping, but it should give you an idea of how to start thinking about RabbitMQ.

There’s also a lot of new terminology to learn with RabbitMQ (and distributed systems), as with any tech that is new to you. But most of these terms and specifics don’t matter right now.

The one thing that does matter, though, is the message broker. 

An AJAX call is a “brokerless” model.

The browser makes the AJAX request directly to the web server that will handle the request. To do this, the browser must know about the server.

Client server

In the world of messaging, this is called a brokerless model. There is no broker in between the system requesting the work, and the system doing the work.

This is common in browsers because the web page and JavaScript that is loaded in the browser probably came from that server in the first place. There isn’t a need for a third party to sit in between the browser and the web server.

RabbitMQ is a “brokered” model.

The RabbitMQ server itself sits in between the code that makes a request and the code that handles the request.

Rmq broker

Because of this, the code that produces messages does not know anything about the code that consumes messages.

This third party – the RabbitMQ server – sits in between the message producer and consumer. This allows a complete decoupling of the two services.

While this does add some complexity, it also provides a lot of opportunity for improving system architecture, increasing robustness, allowing multiple systems, languages and platforms to work together and more.

So… What does RabbitMQ do?

RabbitMQ allows software services to communicate across logical and physical distance, using JSON documents.

But that’s still a dry, boring answer. 

RabbitMQ allows you to solve a new class of problems in today’s “web scale” world.

  • Push work into background processes, freeing your web server up to handle more users
  • Scale the most frequently used parts of your system, without having to scale everything
  • Handle what would have been catastrophic crashes, with relative ease
  • Deal with seemingly impossible response time requirements for webhooks
  • Allow services to be written in different languages, on different platforms
  • … and so much more
No, RabbitMQ does not do these things directly.

The benefits that you get from RabbitMQ are really the benefits of distributed computing and messaging.

RabbitMQ happens to be a good choice for implementing these types of features and requirements, giving you all the benefits of a message based architecture.

It's certainly not the only choice, but it's the one I've generally used for the last 5 years, and my software and architecture are better because of it.

The post What Is RabbitMQ? What Does It Do For Me? appeared first on

Categories: Blogs

How to Do a Good Product Demo

TV Agile - Wed, 10/12/2016 - 14:46
Working in middle-to-large companies means working with several Scrum teams and different departments, and often having far-from-ideal communication processes. Product Management is usually only one piece of the greater picture and has to serve multiple stakeholders. Keeping the core stakeholders informed about the latest releases and achievements mostly works. But it […]
Categories: Blogs

REPL, Scrum & the Wright Brothers

Scrum Expert - Wed, 10/12/2016 - 14:39
Jeff Sutherland makes a convincing argument for using Scrum outside of software development in his recent book Scrum: The Art of Doing Twice the Work in Half the Time. A full 111 years before Sutherland published this book, the Wright Brothers practiced Scrum to outmaneuver better-funded and more entrenched competitors. REPL-driven development is a technique that comes to business software creation from the scientific computing community. It takes the benefits of small iterations and immediate feedback and marries them with the change-proofing benefits of test-driven development. Taking the lessons of Sutherland and the Wright Brothers back to software, we will see how REPL-driven development aligns perfectly with their techniques. At the end of the session, you will know how to write code faster and better, and have a heck of a lot of fun doing it. This session will be using F# in Visual Studio 2015. If you have learned some F# and are eager to start using it in your day job, this session will use some techniques to help you achieve that goal. Video producer:
Categories: Communities

The Simple Leader: Discovering Lean

Evolving Excellence - Wed, 10/12/2016 - 10:22

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen


Ten years after graduating from college with a chemical engineering degree, I was doing pretty well. I had progressed up through the engineering ranks at a Fortune 50 medical device company, moved into operations, and was running a business unit responsible for a high-profile drug infusion pump product. Although work consumed much of my life in Silicon Valley, I was able to balance it with friends and recreation. Then I got a call from corporate headquarters asking me if I would like to run a large factory in Salt Lake City. To a young career-oriented guy, it sounded like a great opportunity. I would have my own operation, live in a new city, and take a big step up in the company. Without asking a single question, I said yes. (Soon thereafter, I learned an important lesson about why you ask questions when offered new opportunities. Questions can be good things, and the more you ask before getting yourself into an unknown situation, the better.)

In Salt Lake City, I had to oversee a molding operation with sixty heavy presses running at full tilt, twenty-four hours a day, seven days a week, every day of the year. The plant was already three months behind schedule when I got there, and it was falling further and further behind every day. Additionally, the operation came with some unique extracurricular challenges. It supplied components to some critical downstream plants, which made it very visible in the eyes of executives, who closely monitored our output and put a lot of pressure on us to improve results. If that weren’t enough, there were also some questionable “activities” occurring during the night shifts. Soon after arriving, I realized that sleep would be a scarce luxury for a while.

Luckily, I had one huge asset in my favor: a bunch of talented folks equally frustrated with the situation and eager to find solutions. Looking for new ideas, I poked around on the internet and discovered something called “Lean manufacturing,” also known as the “Toyota Production System,” as well as an organization called the Association for Manufacturing Excellence (AME). AME put me in contact with two of their board members, David Hogg and Dan McDonnell, who helped me analyze the situation and what Lean concepts could be applied. Shortly thereafter, I began collecting and sharing Lean concepts with my staff. (As I studied Lean, I collected many resources that would later turn into a side endeavor called Superfactory.)

As a team, we learned how to describe value from the perspective of the customer and how to focus on flow, reduce inventory, and streamline processes. We experimented and failed often, but our efforts soon led to success. We discovered the concept of the "quick changeover," a method that reduces downtime when preparing the machines to manufacture different parts. This allowed our operators to be much more productive, and after a year, we had caught up and were even finding ways to get rid of antiquated equipment while increasing production. When the new presses that headquarters had purchased to add capacity (spending millions of dollars in the process) finally arrived, we did not even need them. Lean really worked, and in the real world no less.

Seeing our results, my passion for Lean grew rapidly, but the company wasn’t at a point in its evolution to fully embrace it. In hindsight, this was a lesson on the importance of executive commitment to the Lean transformation process—great improvements can be made at lower levels, but a true organizational transformation requires a cultural change driven from the top. After a frustrating couple of additional years, I decided to leave this company. (Interestingly, the company is now known for its Lean prowess—perhaps the early efforts by our team did have some residual impact after all.)

In 2000, I moved back to California to run a facility recently purchased by a large telecom equipment manufacturer. When I came on board, the operation had an order backlog of nearly a year, and the long lead times were costing the company significant business. Once again, the pressure was on to improve operations. By implementing Lean methods, our team increased output from $500,000 to $5,000,000 a month in less than six months, using the same floor space, equipment, and people as before. Once again, our Lean transformation efforts were doing great things for the company.

Unfortunately, around the middle of 2001, we began to experience a few order cancellations. Little did we know that this was the edge of the cliff that many technology companies went over later that year. The drop-off came so fast that we were still hiring when we began planning our first layoff. On September 10th, I laid off the entire operations group, including myself (a painful experience, although the events of the next day would put that pain into a different perspective). The remaining operations were consolidated into the corporate facility several hundred miles away (a decidedly non-Lean operation) and our product line was soon shut down, demonstrating that without executive leadership support, Lean transformations are very fragile.

Although the companies I worked for did not believe in Lean, my own confidence with it had grown to the point that instead of looking for a new job, I got together with a couple friends and started a contract manufacturing company. We thought we could leverage the power of Lean to tackle the difficult jobs that no one else wanted. We also thought that since we had such a compelling business model we would have no problem finding business. On the first point, we were correct, but on the second one, not so much. Before having to find customers for myself, I was always somewhat envious of the jet-setting lifestyle of my friends in sales and marketing, never understanding why they were paid so well. My two business partners and I, all operations grunts, learned what selling is all about—the hard way. After three years of basically paying our employees but never ourselves, we decided to admit we had learned our lesson. We shut the company down and went our separate ways.

Even though our contract manufacturing operation did not prosper, my knowledge of Lean continued to pay great dividends. Over the years, the list of Lean resources I had been collecting morphed into one of the largest and most comprehensive websites on Lean, bringing me into contact with Lean specialists from around the world. I also joined the AME board of directors. After shuttering our company, I leveraged the wealth of contacts from those activities to join the consulting world. One of those contacts helped me find some contract work at a medical device company a short drive from my home on California's Central Coast. One thing led to another, and I soon found myself as the president of the company, overseeing plants in California and Michigan.

In contrast to my earlier experiences, the long-term vision, commitment, and patience of the owners of my new company provided me the opportunity to try some radical Lean experiments over the eight years I was there. We reorganized the company into value streams, developed incredible teams, and even eliminated budgets. Thanks in large part to Lean improvements, we were successful enough to build a large new facility (in expensive California, no less) during the middle of a recession. We also turned traditional outsourcing thinking on its head by shipping products from California to China and India.

Along the way, I learned many valuable lessons, including how to use many of the same Lean concepts to become personally more productive. Later in this book, I’ll be sharing these lessons, from both personal and professional perspectives.

Categories: Blogs

Targetprocess v.3.10.1: New Visual Reports Editor, more Batch Actions

TargetProcess - Edge of Chaos Blog - Wed, 10/12/2016 - 09:24
Beta of our new Visual Report Editor

On-Demand users will be able to switch to the beautiful beta of our new Visual Report Editor from Report Settings.

With this editor, you will have a lot of useful and easy options:

  • Drag-n-drop fields in reports to edit them and explore your data
  • Aggregate fields, and make title and sorting changes with one click
  • Add important milestones and threshold lines from field annotations
  • Browse and edit chart data with the built-in visual editor
  • Make easy custom calculations via a formulas editor with full autocomplete

Read more at this dedicated release post for the new Report Editor

Batch Actions: Text, Number custom fields, batch change Effort, link Relations

In v.3.10.1 you can reset or set new effort values for a group of selected cards. Only roles that are common to all selected cards are available in the batch action panel. Here you can set a new effort number or reset it to zero. If you see a certain number in the Effort field, it means that all cards in the selection have the same effort set. The 'Set new' placeholder means that different cards have different Effort values set for the given role.

batch effort

You will also be able to batch attach any Targetprocess entity to a group of cards as a Relation.

batch link relation

We've also added support for two more types of Custom Fields (in addition to drop-down lists) in the Batch Actions panel: Text and Number. We're now working on Date, Checkbox and Multiple Selection Custom Fields support.

Clone Dashboards

Somehow, the Clone action was missing from Dashboards. We have fixed this oversight: you can now make a copy of any Dashboard.

Clone Dashboard

Fixed Bugs
  • Fixed the Custom Field Constraints mashup to support custom team workflow states
  • Fixed a '0' drop-down list value which couldn't be set properly
  • Fixed 'Share report' option
  • Fixed a ban on adding users whose email matched that of a deleted user
  • Screen Capture Extension: login issue fixed
  • Fixed a problem where Copy to project / Convert did not work if a deleted user was assigned to a card or its child entity
  • *.m4a attachments now download correctly
  • Fixed Dark skin for TV to show view name properly
  • The *.Count().* DSL filter supports more complex predicates. Example: 'UserStories.Count(Feature.Name.Contains('Web') and Effort > 0) > 5'
  • Added an option to open URLs with the Targetprocess site domain from Custom Fields in the current window
Categories: Companies

New Visual Report Editor

TargetProcess - Edge of Chaos Blog - Wed, 10/12/2016 - 08:43

We are happy to announce the beta release of our new Visual Report Editor for all On-Demand accounts (with the exception of private clouds, for now).

You may find the following features to be useful:

  • Editing reports and visually exploring your data can now be done simply, with interactive fields and drag-n-drop
  • Aggregate fields, and make title and sorting changes with one click
  • Important milestones and threshold lines can be added using field annotations
  • Browse and filter chart data using the built-in visual editor UI
  • Custom calculations can be done easily via a formulas editor with full autocomplete, taking the creation of complex calculations to the next level
  • Low-level data details are available within the tooltip data reveal extension

Here's how you can try out the new Report Editor:


We would really appreciate your feedback on this beta release of our new editor. What do you like about it? What could be improved? Let us know what you think at

Categories: Companies

What is Structured Logging?

BigVisible Solutions :: An Agile Company - Tue, 10/11/2016 - 23:15


For a long period in my development career I assumed logging was only good for applications that didn’t have a UI. We only ever used it for diagnosing error conditions. Once the software reached production, we’d get notified whenever operations reported a stack trace. Sometimes we’d know right away what the problem was. More often we wouldn’t have enough information about the sequence of events that led to the problem, and the resolution would involve a lot of inspired guessing and detective work.

Then I noticed that Thoughtworks recommended adopting something called “Structured Logging” in their Technology Radar in May of 2015. What the heck was that all about? Clearly I was missing something important…

I have since come to understand that logging can — and probably should — be used for much more than just error diagnostics. A well-designed logging system can also be used for business analytics (e.g., how is the system being used) as well as for system monitoring (e.g., what kind of load are we experiencing).

Currently I am a member of an Agile development team. As such we take responsibility for doing automated testing, deployment and monitoring of the systems we develop. Designing a useful and effective logging system can and does make these tasks much, much simpler. In this article I advocate the use of structured logging in your project right from the very beginning.

What Can Structured Logging Do For Us?

Customers often feel that support for business analytics is a much lower priority than adding new features. After all, we can always add that stuff later, right? My experience is that by the time 'later' comes, the effort to modify the software has become prohibitive… unless you've been using structured logging all along. In that case, adding business analytics support turns out to be very easy. In many cases, all you need to do is create some canned reports using the log analysis tools.

Here are a few other questions that structured logging can help us answer easily if it is implemented from the very beginning:

  • Diagnostics
    • What caused this stack trace?
    • What was the sequence of events that led up to this request failing unexpectedly?
  • Analytics
    • Who is using our service?
    • What does usage look like over time?
    • What are our customers using our system to do?
  • Monitoring
    • How long is it taking to process a request?
    • How much available memory is there?
Practical Concerns

There is a big data explosion going on in our industry. There is massive growth in machine and infrastructure size. The ability to spot errors and correlate information from distributed systems is becoming critically important. Our logging system needs to provide the ability to trace operations across many machines and systems. That is only really practical if we use a standard logging format, make our logs machine-readable, and introduce tokens that identify operations that cross machine boundaries.

Logging Format

So what is a good format for making our logs machine-readable (as well as human-readable)? While there are alternatives — XML, name-value pairs, fixed-width columns, etc. — the most obvious choice for a machine-readable logging event is JSON. And it turns out that there are quite a few tools available that are happy to consume JSON logs. Rather than have intermediate tools like Logstash convert our logs to JSON, it seems more efficient to just log in JSON in the first place.
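For example, a single structured logging event might look something like this (the field names here are illustrative; the important thing is to pick a consistent schema and stick to it):

{
  "timestamp": "2016-10-11T18:25:43.511Z",
  "level": "INFO",
  "pid": 4821,
  "thread": "http-nio-8080-exec-3",
  "logger": "com.example.orders.OrderService",
  "message": "Order request completed",
  "requestId": "550e8400-e29b-41d4-a716-446655440000",
  "durationMs": 142
}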

What Should we be Logging?

Standard logging libraries like Simple Logging Facade for Java (SLF4J) already include a lot of useful information: timestamp, pid, thread, level, loggername, etc. We just need to extend this list with attributes that are specific to our software.

If we want to follow an operation across multiple machines and systems, we need to include an identifying token in the logs. This is sometimes called a request id or a transaction id. Inclusion of this token will allow our logging tools to extract only those events that relate to that request — even though multiple systems may be involved in servicing it.

Values like a transaction id will need to be transported between our systems for this to work. Some ways to do this are custom HTTP headers, additional message fields, database columns, etc.
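For example, a service calling another service over HTTP might forward the token in a custom header (the header name here is a common convention, not a standard):

X-Request-Id: 550e8400-e29b-41d4-a716-446655440000

The receiving service reads the header, pushes the value into its own logging context, and forwards it on any further calls it makes.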

Standardized Logging Levels

The various components of our system need to be consistent about the use of logging levels. Typically, a system in production is going to use the INFO logging level, while in development it will use the DEBUG logging level. With this as a starting point, here’s a suggestion for when the other levels should be used:

  • ERROR: Unexpected errors. E.g., loss of connectivity, logic errors, misconfiguration.
  • WARN: Expected errors. E.g., User authentication failed.
  • INFO: Anything we need to see in a production system related to business analytics, diagnostics or monitoring. E.g., User request started/completed, resource usage snapshots, timing information.
  • DEBUG: Stuff developers need to see. E.g., entry to different architectural layers with parameters.
  • TRACE: Stuff developers might turn on temporarily. E.g., complex branching logic, dumps of data structures.
Some Logging Tools

There is a growing set of tools available for doing this kind of log analysis. Here are some examples:

All of these can consume and search logging events in JSON format. However, installing and using these tools is outside the scope of this article.

In my next blog, I’ll discuss implementing structured logging in Groovy.

The post What is Structured Logging? appeared first on SolutionsIQ.

Categories: Companies

Building New Capacity

Learn more about transforming people, process and culture with the Real Agility Program

One concept that is integral to BERTEIG's vision is for the company to grow organically through systematic capacity-building of its team… which is one reason why I attended Coach's Camp in Cornwall, Ontario last June. However, I discovered that my understanding of coaching in an Agile environment was totally out to lunch, a universe away from my previous experiences of being an acting and voice coach.

Doing a simulation exercise in a workshop at Coach’s Camp, I took the role of coach and humiliated myself by suggesting lines of action to a beleaguered Scrum Master. I was offering advice and trying to solve his problems – which is, I learned, a big no-no. But I couldn’t quite grasp, then, what a coach actually does.

Despite that less-than-stellar attempt, I was curious to sign up for Scrum Alliance’s webinar called “First Virtual Coaching Clinic,” September 13, 2016. They had gathered a panel of three Certified Enterprise Coaches (CEC’s): Michael de la Maza, Bob Galen, and Jim York.

The panel’s focus was on two particular themes: 1) how to define and measure coaching impact, and, 2) how to deal with command and control in an organization.

The following are some of the ideas I absorbed, which gave me a clearer understanding of the Agile coaching role.

Often, a client asks a coach for a prescription, i.e. "Just tell me/us what to do!" All three panel members spoke about the need for a coach to avoid being prescriptive and instead be situationally aware. A coach must help a customer correctly identify his or her own difficulties and desired outcomes, and work with them to see those achieved. It's helpful to share stories with the client that may contain two or three options. Be as broad as possible about what you've seen in the past. A team should ultimately come up with their own solutions.

However, if a team is heading for a cliff, it may be necessary to be prescriptive.

Often people want boundaries because Agile practices are so broad. Menlo Innovations was suggested as a way to help leaders and teams play. Providing people with new experiences can lead to answers. What ultimately matters is that teams use inspection and adaptation to find practices that work for them.

A good coach, then, helps a client or team find answers to their own situation. It is essential that a coach not create unhealthy dependencies on herself.

It follows that coaching impact can be measured by the degree of empowerment and courage that a team develops – which should eventually put the coach out of a job. An example mentioned was a 2007 case study out of Yahoo, which suggested metrics such as ROI, as well as asking, "Does the organization have the ability to coach itself?"

Other indicators that can be used for successful coaching have to do with psychological safety, for example: a) on this team it is easy to admit mistakes, and, b) on this team, it is easy to speak about interpersonal issues.

When it comes to ‘command and control’ (often practiced by organizational leaders, but sometimes by a team member), the coaches offered several approaches. Many individuals are not aware of their own behaviors. A coach needs to be a partner to that client, and go where the ‘commander’ is to help him/her identify where they want to get to. Learn with them. Share your own journeys with clients and self-organizing teams.

A coach needs to realize that change is a journey, and there are steps in between one point and another. Avoid binary thinking: be without judgement, without a definition of what is right and wrong.

The idea of Shu Ha Ri was suggested, which is a Japanese martial arts term for the stages of learning to mastery, a way of thinking about how you learn a technique. You can find a full explanation of it on Wikipedia.

Coaching is a delicate process requiring awareness of an entire organization’s ecosystem. It requires patience and time, and its outcome ultimately means independence from the coach.

Have I built capacity as a potential Agile coach? Not in a tactical sense; I won't be hanging out a shingle anytime soon. But at least I've developed the capacity to recognize some dos and don'ts...

That’s right: capacity-building IS about taking those steps…

Watch Mishkin Berteig’s video series “Real Agility for Managers” using this link:

Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Building New Capacity appeared first on Agile Advice.

Categories: Blogs

How to Measure Value

Agile is about delivering value, so it is important to understand the value you are delivering within an iteration or release. However, I have seen many teams that focus on how many story points they deliver without really thinking about the value of those story points.

So are there ways to measure value? The answer is: of course there are. One way would be to use the Planning Poker approach, but instead of estimating the effort for each story, you estimate its value. In this case, the group doing the estimating would include the product owner, any sponsored users, and other users or subject matter experts. Just as when estimating effort, you would pick one feature or user story and assign it an arbitrary value, such as eight points. Then you would go through and assign values to each of the remaining features or user stories relative to that one.

From here, there are a couple more things you can do to refine your value measurement. One approach is to divide the value of a story by its effort estimate. For example, if you had a three-story-point story with a value of three, you would have a ratio of 1 value point per story point. Likewise, if you had a one-story-point story with a value of three, you would get an answer of 3 value points per story point, which indicates that this is the more valuable story based on value per point.

Another approach is to calculate the cost of each user story. If you have a team that is pretty stable, you can estimate how much it costs to run an iteration. You should also know your velocity, so it's simple math to come up with your cost per point. For example, if you had a five-person team with an average cost per person of $200/hour, a two-week iteration (5 people × 80 hours × $200/hour) works out to $80,000. If your velocity is 50 points, then your cost per point is $1,600, so an eight-point story would cost $12,800.

So these are just a couple of ways in which you can measure the value of what you're delivering. You could do something as simple as the T-shirt sizing approach, where a story may be small, medium, large, or extra-large in value. In any case, you should look at some way to ensure that you are measuring the value you are delivering and not just the effort being completed.
Categories: Blogs

Forty8Fifty Labs Launches Splunk Connector for Atlassian JIRA

Scrum Expert - Tue, 10/11/2016 - 18:59
Forty8Fifty Labs has announced the Real-Time Splunk Connector for Atlassian JIRA Service Desk. The new solution leverages advanced analytics and reporting to provide a big-picture view across all service desk environments, so DevOps teams can better understand the incident trends that often create persistent or intermittent service issues. The Real-Time Splunk Connector for Atlassian JIRA Service Desk from Forty8Fifty Labs can trigger enhanced, real-time alerts within HipChat to arm development teams with the information and context they need to improve system performance and reliability. Leveraging real-time searches in Splunk to trigger configurable levels of service incidents associated with JIRA Service Desk, the new product empowers teams to close the loop on responses to issues taking place in their environment. Taking the value a step further, the new product also provides rich visual data analytics on both the operational occurrences taking place and the Service Desk experience associated with them. This provides development and operations teams with an anytime, anywhere view of the state of their service requests and delivers the detailed information needed to act fast. Key features of the Real-Time Splunk Connector for Atlassian JIRA Service Desk include:

  • Automated incident creation driven by real-time search events
  • Relevant troubleshooting detail linked directly to the incident from inception
  • Integrated alert notification to HipChat rooms (when using HipChat)
  • Rich visual data analytics of both incident patterns and the Service Desk experience
Categories: Communities

DSDM Consortium Renamed as Agile Business Consortium

Scrum Expert - Tue, 10/11/2016 - 18:25
The DSDM Consortium, author and custodian of the world-leading DSDM Agile Framework, has announced a new identity and a major new Agile approach. The Agile Business Consortium unveiled its new name and look as it launched the Agile Business Change Framework, designed to support businesses and organisations in adopting Agile at any organisational level and on any scale. Agile Business Consortium CEO Mary Henson said: “Some years ago, Agile became the mainstream approach to software and systems development and in those fields it’s now used more than any other approach. Increasingly, businesses recognise the benefits of adopting Agile methods in many different parts of the enterprise but struggle to understand how to enable it, implement it and ensure robust governance. “The Agile Business Consortium has evolved to address those challenges and, particularly, to develop the Agile Business Change Framework – a new framework that will enable businesses and organisations to take an Agile approach wherever in the enterprise it’s needed and on whatever scale it’s needed.” Building on more than 20 years’ experience in developing DSDM’s highly successful project management and programme management frameworks, the Agile Business Consortium has been developing the new Framework with selected partners and early adopters such as PwC, Tata Consulting and others. It will continue to build a community of like-minded partners to progress the Framework and will provide supportive training through a network of accredited organisations.
Categories: Communities
