
MediatR Pipeline Examples

Jimmy Bogard - Thu, 10/13/2016 - 21:02

A while ago, I blogged about using MediatR to build a processing pipeline for requests in the form of commands and queries in your application. MediatR is a library I built (well, extracted from client projects) to help organize my applications into a CQRS architecture with distinct messages and handlers for every request in your system.

So when processing requests gets more complicated, we often rely on a mediator pipeline to provide a means for these extra behaviors. It doesn’t always show up – I’ll start without one before deciding to add it. I’ve also not built it directly into MediatR – because frankly, it’s hard, and there are existing tools to do so with modern DI containers. First, let’s look at the simplest pipeline that could possibly work:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        return _inner.Handle(message);
    }
}

Nothing exciting here; it just calls the inner handler, the real handler. But we have a baseline on which we can layer additional behaviors.
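
One way to wire this decorator up, sketched with StructureMap (other containers have similar hooks; treat this as an illustration, not the post’s prescribed registration):

using StructureMap;

var container = new Container(cfg =>
{
    cfg.Scan(scanner =>
    {
        scanner.TheCallingAssembly();
        // register each concrete handler against IRequestHandler<,>
        scanner.ConnectImplementationsToTypesClosing(typeof(IRequestHandler<,>));
    });

    // wrap every handler in the pipeline decorator
    cfg.For(typeof(IRequestHandler<,>)).DecorateAllWith(typeof(MediatorPipeline<,>));
});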

Let’s get something more interesting going!

Contextual Logging and Metrics

Serilog has an interesting feature where it lets you define contexts for logging blocks. With a pipeline, this becomes trivial to add to our application:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner)
    {
        _inner = inner;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        {
            return _inner.Handle(message);
        }
    }
 }

In our logs, we’ll now see a logging block right before we enter our handler, and right after we exit. We can do a bit more, what about metrics? Also trivial to add:

using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
using (Metrics.Time(Timers.MediatRRequest))
{
    return _inner.Handle(message);
}

That Time class is just a simple wrapper around the .NET timing classes, with some configuration checking, etc.
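
For illustration, a hypothetical version of that wrapper is little more than an IDisposable around a Stopwatch (the signature is assumed here, not taken from the post):

using System;
using System.Diagnostics;

public static class Metrics
{
    // returns a scope; elapsed time is recorded when it is disposed
    public static IDisposable Time(string timerName)
    {
        var stopwatch = Stopwatch.StartNew();
        return new TimerScope(timerName, stopwatch);
    }

    private class TimerScope : IDisposable
    {
        private readonly string _name;
        private readonly Stopwatch _stopwatch;

        public TimerScope(string name, Stopwatch stopwatch)
        {
            _name = name;
            _stopwatch = stopwatch;
        }

        public void Dispose()
        {
            _stopwatch.Stop();
            // publish _name / _stopwatch.ElapsedMilliseconds to your metrics sink here
        }
    }
}

Those are the easy ones – what about something more interesting?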

Validation and Authorization

Oftentimes, we have to share handlers between different applications, so it’s important to have an agnostic means of applying cross-cutting concerns. Rather than bury our concerns in framework- or application-specific extensions (like, say, an action filter), we can instead embed this behavior in our pipeline. First, with validation, we can use a tool like FluentValidation, with validator handlers for a specific type:

public interface IMessageValidator<in T>
{
    IEnumerable<ValidationFailure> Validate(T message);
}

What’s interesting here is that our message validator is contravariant, meaning I can have a validator of a base type work for messages of a derived type. That means we can declare common validators for base types or interfaces that your message inherits/implements. In practice this lets me share common validation amongst multiple messages simply by implementing an interface.

Inside my pipeline, I can execute my validation by taking a dependency on the validators for my message:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators)
    {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();
            if (failures.Any())
                throw new ValidationException(failures);
            
            return _inner.Handle(message);
        }
    }
}

All my errors are bundled up into a single exception. The downside of this approach is that I’m using exceptions for control flow, so if that’s a problem, I can instead wrap my responses in some sort of Result object that contains any validation failures. In practice, exceptions seem fine for the applications we build.

Again, my calling code INTO my handler (the Mediator) has no knowledge of these new behaviors, nor does my handler. I go to one spot to augment and extend behaviors across my entire system. Keep in mind, however, that I still place my validators beside my message, handler, view, etc. using feature folders.

Authorization is similar, where I define an authorizer of a message:

public interface IMessageAuthorizer
{
    void Evaluate<TRequest>(TRequest request) where TRequest : class;
}

Then in my pipeline, check authorization:

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer
        )
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            _authorizer.Evaluate(message);
            
            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();
            if (failures.Any())
                throw new ValidationException(failures);
            
            return _inner.Handle(message);
        }
    }
}

The actual implementation of the authorizer will go through a series of security rules, find matching rules, and evaluate them against my request. Some examples of security rules might be:

  • Do any of your roles have permission?
  • Are you part of the ownership team of this resource?
  • Are you assigned to a special group that this resource is associated with?
  • Do you have the correct training to perform this action?
  • Are you in the correct geographic location and/or citizenship?

Things can get pretty complicated, but again, all encapsulated for me inside my pipeline.

Finally, what about potential augmentations or reactions to a request?

Pre/post processing

In addition to some specific processing needs, like logging, metrics, authorization, and validation, there are things I can’t predict one message or group of messages might need. For those, I can build some generic extension points:

public interface IPreRequestHandler<in TRequest>
{
    void Handle(TRequest request);
}

public interface IPostRequestHandler<in TRequest, in TResponse>
{
    void Handle(TRequest request, TResponse response);
}

public interface IResponseHandler<in TResponse>
{
    void Handle(TResponse response);
}

Next, I update my pipeline to include calls to these extensions (if any are registered):

public class MediatorPipeline<TRequest, TResponse> 
  : IRequestHandler<TRequest, TResponse>
  where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IEnumerable<IMessageValidator<TRequest>> _validators;
    private readonly IMessageAuthorizer _authorizer;
    private readonly IEnumerable<IPreRequestHandler<TRequest>> _preProcessors;
    private readonly IEnumerable<IPostRequestHandler<TRequest, TResponse>> _postProcessors;
    private readonly IEnumerable<IResponseHandler<TResponse>> _responseProcessors;

    public MediatorPipeline(IRequestHandler<TRequest, TResponse> inner,
        IEnumerable<IMessageValidator<TRequest>> validators,
        IMessageAuthorizer authorizer,
        IEnumerable<IPreRequestHandler<TRequest>> preProcessors,
        IEnumerable<IPostRequestHandler<TRequest, TResponse>> postProcessors,
        IEnumerable<IResponseHandler<TResponse>> responseProcessors
        )
    {
        _inner = inner;
        _validators = validators;
        _authorizer = authorizer;
        _preProcessors = preProcessors;
        _postProcessors = postProcessors;
        _responseProcessors = responseProcessors;
    }

    public TResponse Handle(TRequest message)
    {
        using (LogContext.PushProperty(LogConstants.MediatRRequestType, typeof(TRequest).FullName))
        using (Metrics.Time(Timers.MediatRRequest))
        {
            _authorizer.Evaluate(message);
            
            foreach (var preProcessor in _preProcessors)
                preProcessor.Handle(message);
            
            var failures = _validators
                .SelectMany(v => v.Validate(message))
                .Where(f => f != null)
                .ToList();
            if (failures.Any())
                throw new ValidationException(failures);
            
            var response = _inner.Handle(message);
            
            foreach (var postProcessor in _postProcessors)
                postProcessor.Handle(message, response);
                
            foreach (var responseProcessor in _responseProcessors)
                responseProcessor.Handle(response);
                
            return response;
        }
    }
}

So what kinds of things might I accomplish here?

  • Supplementing my request with additional information not to be found in the original request (in one case, barcode sequences)
  • Data cleansing or fixing (for example, a scanned barcode that needs to be padded with zeroes – sketched after this list)
  • Limiting results of paged result models via configuration
  • Notifications based on the response
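
As a hypothetical sketch of that barcode case (IHaveBarcode and the padding length are invented for illustration):

public interface IHaveBarcode
{
    string Barcode { get; set; }
}

// Contravariance again: this runs for any request implementing IHaveBarcode.
public class BarcodePaddingHandler : IPreRequestHandler<IHaveBarcode>
{
    public void Handle(IHaveBarcode request)
    {
        // pad scanned barcodes with leading zeroes before the real handler runs
        request.Barcode = request.Barcode.PadLeft(12, '0');
    }
}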

These are all things I could put inside the handlers, but when I want to apply a general policy across many handlers, the pipeline makes that quite easy to accomplish.

Whether you have specific or generic needs, a mediator pipeline can be a great place to apply domain-centric behaviors to all requests, or only to matching requests based on generic rules, across your entire application.

Categories: Blogs

The Docker Management Cheatsheet

Derick Bailey - new ThoughtStream - Thu, 10/13/2016 - 18:31

I’ve been doing a lot with Docker in the last few months, and it’s become a staple of my development tool set at this point.

Unfortunately, it’s also a bit difficult to remember all the different commands and options that I use, even when I use them on a regular basis.

To help me, and hopefully others that are getting started with Docker, I put together a cheatsheet that lists the most common commands and options for managing images and containers.

And I’ve made this cheatsheet available for free, for everyone!

Docker management cheatsheet

Download the Docker management cheatsheet from WatchMeCode

The post The Docker Management Cheatsheet appeared first on DerickBailey.com.

Categories: Blogs

Announcement: New Leadership Training – First in Canada!

Learn more about transforming people, process and culture with the Real Agility Program.

Certified Agile Leadership (CAL 1) Training – Michael Sahota

Introduction:

Advanced training for leaders, executives and change agents working in Agile environments.

Your success as a leader in an Agile organization requires looking beyond Agile itself. It requires a deep understanding of your organization and your own leadership path. To equip you for this journey, you will gain a strong foundation in understanding organizational culture. From there, you will learn key organization and leadership models that will allow you to understand how your organizational culture really works.

Now you are ready to start the journey! You will learn about organizational growth – how you may foster lasting change in your organization. Key is understanding how to invite change in a complex system. You will also learn about leadership – how you may show up more effectively, and how to help others.

Learning Objective(s):

Though each Certified Agile Leadership course varies depending on the instructor, all Certified Agile Leadership courses intend to create awareness of, and begin the journey toward, Agile Leadership.

Graduates will receive the Certified Agile Leadership (CAL 1) designation.

See Scrum Alliance Website for further details.

Agenda (Training Details):

We create a highly interactive, dynamic training environment. Each of you is unique – and so is each training. Although the essentials will be covered in every class, you will be involved in shaping the depth and focus of our time together. Each learning module is treated as a User Story and we will co-create a unique learning journey that supports everyone’s needs.

The training will draw from the learning areas identified in the overview diagram.

Organizational Culture

“If you do not manage culture, it manages you, and you may not even be aware of the extent to which this is happening.” – Edgar Schein

  • Why Culture? Clarify why culture is critical for Organizational Success.
  • Laloux Culture Model: Discuss the Laloux culture model that will help us clarify current state and how to understand other organizations/models.
  • Agile Culture: Explore how Agile can be seen as a Culture System.
  • Agile Adoption & Transformation: Highlight differences between Agile Adoption and Transformation.
  • Dimensions of Culture: Look at key aspects of culture from “Reinventing Organizations”. Where are we and where might we go?
  • Culture Case Studies: Organizational Design: Explore how leading companies use innovative options to drive cultural operating systems.
Leadership & Organizational Models
  • Theory X – Theory Y: Models of human behaviour that are implicit in various types of management systems.
  • Management Paradigms: Contrast of Traditional “Modern” Management practices with Knowledge worker paradigm.
  • The Virtuous Cycle: Key drivers of success emergent across different high-performance organizational systems.
  • Engagement (Gallup): Gallup has 12 proven questions linked to employee engagement. How can we move the needle?
  • Advice Process: More effective decision-making using Advice Process. Build leaders. Practice with advice cards.
  • Teal Organizations: Explore what Teal Organizations are like.
Leadership Development
  • Leading Through Culture: How to lead through culture so that innovation and engagement can emerge.
  • VAST – Showing up as Leaders: VAST (Vulnerability, Authentic connection, Safety, & Trust) guides us in showing up as more effective leaders.
  • Temenos Trust Workshop: Build trust and charter your learning journey. Intro version of 2 day retreat.
  • Compassion Workshop: How to Use Compassion to Transform your Effectiveness.
  • Transformational Leadership: See how we may “be the change we want to see” in our organizations.
  • Leading Through Context: How to lead through context so that innovation and engagement can emerge.
  • Leadership in Hierarchy: Hierarchy impedes innovation. Listening and language tips to improve your leadership.
Organizational Growth
  • Working With Culture: Given a culture gap, what moves can we make? Work with the culture, or transform it.
  • Complex Systems Thinking: Effective change is possible when we use a Complex Systems model. Cynefin. Attractors. Emergent Change.
  • Healthy “Agile” Initiatives: How to get to a healthy initiative. How to focus on the real goals of Agile and clarify WHY.
  • People-Centric Change: The methods we use to change must be aligned with the culture we hope to foster. How we may change in a way that values people.
  • Transformation Case Study: Walkthrough of how a transformation unfolded with a 100 person internal IT group.
Audience: There are two main audiences addressed by this training: organizational leaders and organizational coaches. The principles and practices of organizational culture and leadership are the same regardless of your role. Organizational leaders include executives, vice presidents, directors, managers and program leads. Organizational coaches include Agile coaches, HR professionals, management consultants and internal change leaders.

“The only thing of real substance that leaders do is to create and manage culture.” – Edgar Schein

Facilitator(s): Michael Sahota

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post Announcement: New Leadership Training – First in Canada! appeared first on Agile Advice.

Categories: Blogs

What Is RabbitMQ? What Does It Do For Me?

Derick Bailey - new ThoughtStream - Wed, 10/12/2016 - 17:33

Wes Bos asked a question on twitter that threw me off a bit. 

@derickbailey I’ve never understood what rabbitmq is / what it’s for. Do you have a post or something that explains what I would use it for?

— Wes Bos (@wesbos)

October 10, 2016

It was a simple question, but I realized that it was one I have never really answered in a blog post or any other material.


So, what is RabbitMQ?

It’s a message broker that makes distributed systems development easy. 

But that’s a terrible answer – it doesn’t really tell you what it does or why you should care about it.

This answer also brings up more questions for anyone that isn’t already familiar with messaging systems. And if you’re already familiar with the concepts, then you probably know what RabbitMQ is and does.

To understand RabbitMQ, look at jQuery AJAX calls.

Take this code as an example:
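
// a representative sketch of such a call (illustrative; URL and data are invented)
$.ajax({
  url: "/orders",
  method: "POST",
  data: { productId: 123, quantity: 2 },
  success: function(response) {
    // the server has done the work and responded
    console.log("order accepted", response);
  }
});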

This is a pretty standard looking AJAX call made with jQuery.

It’s also a perfect example of how you’re already using most of the concepts that RabbitMQ encapsulates. 

An AJAX call is distributed computing with messages.

You have a web browser on someone’s computer, and a web server sitting somewhere on the internet.

When the browser makes the AJAX call through jQuery, it takes the “data” parameter and passes it to the web server.

The server looks at the URL that was requested and the data provided, and does some work based on all of that.

The server will send some kind of response back to the browser – whether it is an immediate response saying that the work was done, or just a “200 ok” saying the message was received, or whatever else.

Additional work may be done on the web server, without the browser knowing about it.

This is distributed computing.

You’re moving some of the work from one system (the computer with the browser) to another (the web server).

Think of RabbitMQ as the back-end AJAX.

If an AJAX call is distributed computing for web browsers, then RabbitMQ is distributed computing for servers.

Instead of dealing with HTTP requests that may be exposed to the internet, RabbitMQ is more often used for back-end services.

There are some key differences, of course. But this is less an analogy than it is a direct parallel – a different implementation of the same basic idea. 

Some of these parallels and differences include the following:

AJAX → RabbitMQ

  • HTTP → AMQP
  • jQuery.ajax → RMQ message “producer” (SDK / API)
  • HTML form encoded data → JSON documents
  • Web Server → RMQ Server / Message Broker
  • API Endpoint / URL → Exchange, Routing Key, Queue
  • Route / request handler (e.g. MVC controller / action) → RMQ message “consumer” (SDK / API)

There’s more subtlety and stark contrast in this comparison then I am explaining in this simple table, but this should give you an idea of how to start thinking about RabbitMQ.

There’s also a lot of new terminology to learn with RabbitMQ (and distributed systems), as with any tech that is new to you. But most of these terms and specifics don’t matter right now.

The one thing that does matter, though, is the message broker. 

An AJAX call is a “brokerless” model.

The browser makes the AJAX request directly to the web server that will handle the request. To do this, the browser must know about the server.


In the world of messaging, this is called a brokerless model. There is no broker in between the system requesting the work, and the system doing the work.

This is common in browsers because the web page and JavaScript that is loaded in the browser probably came from that server in the first place. There isn’t a need for a third party to sit in between the browser and the web server.

RabbitMQ is a “brokered” model.

The RabbitMQ server itself sits in between the code that makes a request and the code that handles the request.


Because of this, the code that produces messages does not know anything about the code that consumes them.

This third party – the RabbitMQ server – sits in between the message producer and consumer. This allows a complete decoupling of the two services.

While this does add some complexity, it also provides a lot of opportunity for improving system architecture, increasing robustness, allowing multiple systems, languages and platforms to work together and more.
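
To make that concrete, here is a minimal sketch assuming the Node amqplib client (the queue name and payload are invented for illustration):

var amqp = require('amqplib');

// producer: publish a JSON document to a queue
amqp.connect('amqp://localhost')
  .then(function(conn) { return conn.createChannel(); })
  .then(function(ch) {
    return ch.assertQueue('work').then(function() {
      ch.sendToQueue('work', Buffer.from(JSON.stringify({ orderId: 123 })));
    });
  });

// consumer: typically a separate process, handling messages as they arrive
amqp.connect('amqp://localhost')
  .then(function(conn) { return conn.createChannel(); })
  .then(function(ch) {
    return ch.assertQueue('work').then(function() {
      ch.consume('work', function(msg) {
        var order = JSON.parse(msg.content.toString());
        // ... do the work, then acknowledge ...
        ch.ack(msg);
      });
    });
  });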

So… What does RabbitMQ do?

RabbitMQ allows software services to communicate across logical and physical distance, using JSON documents.

But that’s still a dry, boring answer. 

RabbitMQ allows you to solve a new class of problems in today’s “web scale” world.

  • Push work into background processes, freeing your web server up to handle more users
  • Scale the most frequently used parts of your system, without having to scale everything
  • Handle what would have been catastrophic crashes, with relative ease
  • Deal with seemingly impossible response time requirements for webhooks
  • Allow services to be written in different languages, on different platforms
  • … and so much more
No, RabbitMQ does not do these things directly.

The benefits that you get from RabbitMQ are really the benefits of distributed computing and messaging.

RabbitMQ happens to be a good choice for implementing these types of features and requirements, giving you all the benefits of a message based architecture.

Its’ certainly not the only choice, but it’s the one I’ve generally used for the last 5 years and my software and architecture are better because of it.

The post What Is RabbitMQ? What Does It Do For Me? appeared first on DerickBailey.com.

Categories: Blogs

How to Do a Good Product Demo

TV Agile - Wed, 10/12/2016 - 14:46
Working in middle to large companies means working with several Scrum teams, different departments and having often far from ideal working communication processes. Product Management usually is only one piece of the greater picture and has to serve multiple stakeholders. Keeping the core stakeholders informed about the latest releases and achievements mostly works. But it […]
Categories: Blogs

Callbacks First, Then Promises

Derick Bailey - new ThoughtStream - Wed, 10/05/2016 - 16:52

When I’m looking at asynchronous JavaScript – network calls, file system, or whatever it may be – I don’t reach for promises, first. In fact, a promise is typically a last resort for me, relegated to specific scenarios.


I don’t mean to say promises are not useful. They certainly are useful and they are a tool that I know well. I think you should know them – but I don’t think you should use them as your default for async code.

Why not use promises, by default?

The TL;DR is that promises add another layer of complexity and potential bugs – especially around error handling. 

But, I’m not going to elaborate on the complexities much. Nolan Lawson did an excellent job of pointing out the problems with promises, already

He starts that post with a question, asking you to explain the difference between four uses of a promise.

This, to me, demonstrates the primary reason that I don’t reach for promises by default. There are a lot of potential mistakes to make.

I reach for promises when I see the need.

That need typically arises in one of the following scenarios:

  • I need to wait for multiple async responses before continuing
  • I want to cache a response and skip doing the work multiple times
  • I want to chain methods together
  • Working with async generators

The Promise.all method is great for waiting on multiple async returns. It lets you pass in an array of promises and wait for all of them to complete before firing the callback.
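
For example (a small sketch; getUser, getOrders and render are hypothetical promise-returning and rendering functions):

Promise.all([getUser(userId), getOrders(userId)])
  .then(function(results) {
    // fires only once both promises have resolved
    var user = results[0];
    var orders = results[1];
    render(user, orders);
  });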

Caching a response is also useful, though there are many ways of doing this (including the memoize function of lodash / underscore, and others).

Chaining methods together can often make existing promises easier to understand and modify, though I don’t typically do this unless I’m already working with promises.

And lastly, the current ES2015 (ES6) generators implementation facilitates better async patterns in our code, but only when we add a small library around the generators.

I still prefer Node.js-style callbacks.

Having spent several years working in Node.js, I’ve grown to like the callback style where an error object is passed in as the first parameter. 

This code, in my experience, allows more flexibility while preventing the potential problems of swallowed and lost exceptions, that promises present.
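
A minimal sketch of that style:

var fs = require("fs");

fs.readFile("config.json", function(err, data) {
  if (err) {
    // errors are handled explicitly, right where the call was made
    return console.error(err);
  }
  // the happy path continues with the result
  console.log(JSON.parse(data));
});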

Until we see the async / await syntax from ES(v.whatever) make it into Node and most browsers, I’ll probably continue to reach for the callback style as my go-to for async code. Even when that happens, it will take a long time for the callback style to move to the side (if it ever does), due to the sheer momentum of existing code.

A promise is a tool you should understand, and use.

But I would not recommend using promises as your first choice for handling async JavaScript.

The post Callbacks First, Then Promises appeared first on DerickBailey.com.

Categories: Blogs

Hello world!

Agile Estimator - Wed, 09/28/2016 - 16:44

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Categories: Blogs

Ending the Nested Tree of Doom with Chained Promises

Derick Bailey - new ThoughtStream - Tue, 09/27/2016 - 13:30

Julia Jacobs recently asked a question in the WatchMeCode community slack, about some asynchronous code she wanted to clean up.

In this question, she wanted to know of good options for restructuring deeply nested promises without introducing any new libraries. Would it be possible to clean up the code and only use ES6 features?


It’s a common pattern and problem – the Nested Tree of Doom – not only with promises, but with JavaScript callbacks in general. 

The full question is as follows:

Hey all. Does anyone know any decent patterns for

a) async ES6 classes other than shoving everything into a native promise in a getter.

b) creating an async waterfall serialization flow with multiple functions using generators.  

I’ll create some gists with samples of the code I’m trying to refactor. I’m trying to stay away from libraries and stick with native ES6 goodness.

When I first read the question without looking at the example code, I wondered if promises were the right way to go.

Personally, I prefer the node style of callbacks as my go-to pattern for asynchronous code. Promises are definitely useful, but they are most useful in specific scenarios.

My initial response, however, was about ES6 generators.

the best way to take advantage of generators and async work is going to be with a small library [like co].

you can write your own with only a few lines of code, though

if you don’t like getters that return promises, can use a method with a callback parameter.

personally, i prefer callback methods over promises, until i have a specific use case for promises (like waiting for multiple things, only wanting to perform the action once but retrieve the value multiple times, chaining async ops, etc)

Once Julia posted her code sample, my advice quickly changed.

While I don’t reach for promises as my first choice, Julia’s scenario and code samples quickly changed my mind. 

Here is what she posted for the code in a hapi.js router:

It’s the result of two weeks of banging out a hapijs arch which parses a huge XML response from a Java API and maps it to a huge json contract with very strict corporate requirements.

Frankly, this code is darn near as beautiful an example as you can find, when it comes to nested callbacks and promises.

And I don’t mean “a beautiful mess” – I mean that Julia has written code that you dream of finding, when you are looking at restructuring.

Most of my own nested promises are garbage – closures; creating promises inside of callbacks; nested .catch and resolve statements and more of a mess than I care to show anyone.

This code is near spotlessly clean, already.

Unlike the mess that I tend to create with nested promises, the code Julia showed us is uniformly written.

It uses no closures.

It takes the results of the previous work and passes it directly to the next work, with nothing else.

And it does all of this through promises that are created by other functions.

The only thing this code really needs is a small adjustment to remove the nested promises and turn them into chained promises.

As I noted in my response to Julia,

instead of doing `return SearchModel.get(json).then({ …` you can return the promise from `SearchModel.get` directly

`return SearchModel.get(json);`

Promises will take any return value from a `then` callback, and forward it to the next `then` for you

if the return value is a Promise itself, it will wait for that promise to resolve or reject

so the above code should be functionally the same just with less nesting

When you take the first few lines of the above code and apply this idea, it becomes this:
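
Sketched with hypothetical names (httpClient, transformToJson and SearchModel stand in for the functions in Julia’s gist, which isn’t reproduced here):

return httpClient.get(url)
  .then(function(xml) {
    var json = transformToJson(xml);
    // return the promise directly, instead of nesting another .then inside
    return SearchModel.get(json);
  });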

And this is where the magic begins.

By returning the promise from SearchModel.get, the code nesting has been reduced one level.

Apply this same pattern throughout the rest of the sample, and you reduce the nesting by one level, at each level of nesting.

The result is that there are never any nested promises!

But the magic doesn’t end here.

Reducing the nested promises is only the first steps in reorganizing this code.

I continued the example by talking about extracting named functions

if you really want to reduce this code further, extract each of those callbacks into a named function

then you can do `xml.tojsonhttpclient.get().then(transformit).then(mapit).then(…)`

This works because these functions are very clean. They take the result of the previous promise and pass it into the next promise. There’s no closures or other code, as I mentioned before.

The result of extracting the callbacks into named functions looks like this:
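
With the same hypothetical names, the extracted version reads like a high-level workflow:

function transformIt(xml) {
  return SearchModel.get(transformToJson(xml));
}

function mapIt(searchModel) {
  return mapToContract(searchModel);
}

return httpClient.get(url)
  .then(transformIt)
  .then(mapIt);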

Now this is beautiful code!

But if we’re talking ES6, Let’s use ES6.

Shortly after posting this, there was some discussion on twitter, and Richard Livsey pointed out that ES6 arrow functions could be used with implicit returns, instead of extracting named functions.

@derickbailey great post. Another intermediate step could be to use implicit returns vs extracting functions

— Richard Livsey (@rlivsey)

September 27, 2016


With this post, he also provided an example that shows the use of arrow functions to achieve this:
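
In the same hypothetical terms, that looks like:

return httpClient.get(url)
  .then(xml => SearchModel.get(transformToJson(xml)))
  .then(searchModel => mapToContract(searchModel));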

The code I have above may be slightly cleaner when it comes to reading the chained promises, but this is a great option if you’re concerned about lines of code or having all those extra little functions lying around.

The goal may not be to reduce the number of lines of code, though. 

In this scenario, the goal was to make the code easier to reason about and easier to change, not just to reduce the number of lines of code (though that may be an important aspect in other scenarios).

By taking what was already clean code and reorganizing it into chained promises instead of nested, the code became easier to follow.

When the functions were extracted and named, though, that’s when the code became easier to understand at a higher workflow level.

You no longer have to dive into the details of each promise’s callback to understand what happens next. Instead, you can look at the high level flow of chained promises and see a simplified name that represents what will happen next.

The code is easier to modify, as well. If you need to insert a new method, change a detail, remove a step, it’s all right there in the high level chaining. 

This type of restructuring is not always going to work, though.

When Julia brought this question to the WatchMeCode community slack, she was already in a good place. 

Most of the time when I’m looking at nested promises, I’m in much worse shape. 

If you face code that has closures around variables, re-using them between promise callbacks, for example, you’re going to run into problems.

If you have nested promises being created within the promise callbacks directly, or worse, you have nested promise chains being resolved and rejected, it can be incredibly difficult to fix.

You may find yourself in a situation where a promise chain is simply the wrong way to solve the problem.

But if you’re looking at code which is clean and concise like what Julia brought to us, or if you can move your code from where it is, to this clean and uniform state, then you should be able to take full advantage of chained promises.

The post Ending the Nested Tree of Doom with Chained Promises appeared first on DerickBailey.com.

Categories: Blogs

Does ES6 Mean The End Of Underscore / Lodash?

Derick Bailey - new ThoughtStream - Mon, 09/12/2016 - 21:24

If you look at the latest Chrome, Safari, Firefox and MS Edge browsers, you’ll notice the compatibility and feature implementation of ES6 is darn near complete. 

And for the majority of developers that still have to support older browsers? Babel and other pre-compilers have you covered.

There’s so much adoption among the latest browser versions, the latest node.js, and pre-compilers, that you don’t have much reason to ignore ES6 moving forward. 

But what does this mean for some of the old stand-by libraries that have not only provided great APIs and utilities, but also helped to define the feature set in ES6?


Tools like underscore and Lodash – are they now obsolete, with ES6 features replacing most of the utility belt that they had provided?

I have to admit – I rarely use underscore or Lodash anymore.

Other than my front-end code where I am still stuck in my Backbone / Marionette ways, I’m rarely installing underscore or Lodash into my projects, intentionally. 

Every now and then I see enough need to install one of them. But, more often than not I turn to the ES5 and ES6 counterparts for the methods that I use frequently.

For example, here are some underscore → ES5/ES6 array method equivalents (a quick sketch follows the list):

  • each -> forEach
  • map -> map
  • reduce -> reduce
  • find -> find
  • filter -> filter
  • contains -> includes
  • etc, etc, etc
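
A quick illustration of the overlap:

// library vs. native, same result
_.map([1, 2, 3], function(n) { return n * 2; });  // underscore / Lodash
[1, 2, 3].map(function(n) { return n * 2; });     // native ES5

_.contains([1, 2, 3], 2);  // underscore
[1, 2, 3].includes(2);     // native (ES2016)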

While the method signatures and behavior are not 100% the same, there is an unmistakable influence in the ES5/ES6 list of array methods, compared to the underscore and Lodash API.

Because of this, I have largely been able to do without these utility belts.

But I wondered if I was the exception or the rule.

I knew that my use of these libraries was limited compared to a lot of people.

After all, I’m rarely digging into the more functional / composition aspects of JavaScript. I don’t spend much time type checking with the various is* method, anymore, and frankly the various builds and advanced options of Lodash have often kept me away out of lack of time (effort) to understand them. 

I had assumptions about this, because of my own experience. But, I wanted to verify that my thoughts on “underscore is dead” were shared in the community.

So I sent out a poll on Twitter and asked,

Are you still using Lodash / underscore?

are you still using underscore / Lodash, now that ES2015 is available most places / with precompilers?

— Derick Bailey (@derickbailey)

August 31, 2016


The results shown in the poll were nothing like my own experience and expectations.


Only 9% of the 236 responses said that ES6 (ES2015) makes Lodash / underscore obsolete.

Personally, I fell into the 17% of “yes, but rarely”.

But the overwhelming majority of respondents – nearly 75% of them – said they still use these libraries frequently, with nearly half the responses wondering why they wouldn’t use them.

With the responses pouring in, I had to know why. So I asked a follow-up question.

Why are you still using Lodash / underscore?

if you said yes, you are using underscore / Lodash still, what features / methods, and why?

— Derick Bailey (@derickbailey)

August 31, 2016

The responses were generally categorized into two things:

  • Methods on objects, instead of just Arrays
  • Functional / composition based coding

For example, there are a lot of methods in the ES5/6 Array object that can be applied to Objects, with these libraries.

@derickbailey AFAIK, es5 map/reduce/filter only work on arrays

— Sean Corrales (@seancorrales)

August 31, 2016

@derickbailey _.pick/omit, _.map for objects, _.isX, _.indexBy, _.sortBy, _.escape, _.debounce/throttle … I could keep going.

— Ben Vinegar (@bentlegen)

August 31, 2016

With the rise of functional programming, it’s also not surprising to see many people using these libraries in that paradigm.

@derickbailey Using Lodash/fp on almost all projects at work.

— Cyril Silverman (@CyrilSilverman)

August 31, 2016

@derickbailey also ES2015 didn’t added so much worth to me since I don’t use `this` and `prototype`-stuff. rest and spread are interesting.

— Stephan Hoyer (@cmx66)

August 31, 2016

And if you’re using these libraries already, might it be easier to use them everywhere instead of trying to remember which methods are built-in and which come from the library at this point? 

@derickbailey I obviously don’t need to use `map`, `filter` etc at this point, but I’m invested enough in the library that its easier

— Ben McCormick (@ben336)

August 31, 2016

Frankly, it seems there are some good reasons to continue using these libraries, at this point. 

Still, I had to wonder if the future of JavaScript might bring these to an end.

I mean, if these libraries are largely being replaced (at least in my use), and JavaScript is catching up to the capabilities that they provide while continuing to move forward, isn’t there a real chance that they will become obsolete?

It seemed like a possibility to me.

And I rarely use these libraries anymore, so doesn’t that show some evidence?

Well… maybe not.

John-David Dalton – the creator and maintainer of Lodash – steps into the discussion with this:

Lodash is an active/evolving collection of 300+ modular utils that’ll continue to push beyond ES 5/6/7/.. additions. https://t.co/AvOLZXt5Wg

— John-David Dalton (@jdalton)

August 31, 2016

YMNK Lodash, by way of the jQuery foundation, has TC39 representation & is involved in the evolution of JS

Categories: Blogs

Build Quality In with Lean Software Development

Ben Linders - Wed, 08/24/2016 - 13:26

Agile methods such as Scrum emphasize functionality. Besides functionality, however, customers expect the quality of the product to be in order. Where Agile mainly focuses on the software team and its interaction with its surroundings, Lean looks at the whole chain: from customer need to customer value. One of the aspects of Lean Software Development is building quality in.

Lean Software Development combines Agile and Lean with the following 7 principles:

  1. Eliminate Waste
  2. Build Quality In
  3. Learn Constantly
  4. Deliver Fast
  5. Engage Everyone
  6. Keep Getting Better
  7. Optimize the Whole

Quality starts with the customers



My definition of quality (see my article Hoe zorg je met Scrum voor Kwaliteitsproducten en Diensten) is:

Quality is the degree to which the needs of the users and the requirements of the sponsors are met. These can be functional needs (something the product or service must do), or “performance” or non-functional requirements (how fast, how much, the reliability, etc.); often it is a combination of both.

It is your customers who determine what quality is (and what it is not), and what is required for good quality. Only when you know what your customers need can you look for good solutions to meet their wishes. In doing so, you build quality into the entire development process.

The right product

How do you find out what the customer needs? Agile has various practices in which the team and the customers (in Scrum one speaks of a product owner instead of a customer) work together intensively to ensure that the right product is delivered. Examples are the planning game and the product demonstration.

In the planning game the aim is to make the product requirements clearer. What do customers want to achieve with the product, and what value should it offer them? What must the product do to deliver that value? But also the quality requirements: how fast must it work, and how reliable and stable must it be? The team uses its knowledge and experience to determine what is possible, and how the product can be developed in a Lean way.

In the product demonstration you show the product to the customers and ask for feedback. Is this what the customer wants? Is it good enough? But also: is it fast enough, convenient to use, reliable and secure? And, given what the product does now, what else is needed?

Quality with Agile and Lean

How does that work in practice? Let’s look at a medical system that specialists use to do examinations with patients. The specialists (customers) want a number of ways to view a patient’s data and scans. They want to be able to call up images of earlier scans, zoom in, and compare them. Because they often do this with the patient present, it has to be fast and easy to operate. The data must not be wrong; the specialists use it to make decisions on which the patient’s life may depend.

In the upfront discussions and in the planning game, the product owner and the team formulate the acceptance criteria. For quality requirements those criteria must be measurable. So not “fast enough” but “in 90% of the cases the system responds within 1 second”.

Together with the product owner, the team members formulate user stories. The team uses the acceptance criteria in the user stories to agree on how they will build and verify the software.

For example, for one user story they do a spike: they build a small piece of software and a test case that measures how fast the software is, to determine whether what the customer wants is feasible. For another user story the team wants to use pair programming; it is a complex function with which the team members have no experience yet.

There are also stories where, according to the team, test-driven design is the best approach, and a single story where the customer does not yet really know what exactly the product should do to be convenient to work with; there, prototyping with Lean Startup seems to fit best.

The required functionality and quality determine the approach. The product owner makes clear what is needed and what quality the customers expect. The team knows what is feasible with a given way of working, and checks with the product owner in the planning game. Too little quality is not good, but neither is too much. Lean is about finding the right balance between time, money and quality for delivering functionality.

In the demonstration the software is shown and checked to see whether it suffices. Both functionality and quality count. It must not only work, it must also be fast enough, reliable, operable, etc. Only then does the product meet all requirements and is it done.

The strength lies in the collaboration between the team and the product owner during development. Is there a shared picture of when, how and for what customers use the product? What does the product mean to them and what value can it add? Can the product owner make sufficiently clear what is needed, and do the team members check whether they have understood it well? Do they learn from things that did not go well?

Build quality in

Lean and Agile reinforce each other when it comes to the quality of products and services. With Agile and Lean practices you build quality into the complete product development chain.

In the software quality improvement workshop you learn how to deliver good products and services. Quality improvement helps organizations to better meet the needs of the users and the requirements of the sponsors.

Categories: Blogs

Continuous Improvement with Agile in Bits&Chips

Ben Linders - Wed, 08/24/2016 - 11:05

My article Continue Verbetering met Agile (Continuous Improvement with Agile) has been published in Bits&Chips no. 4. In this article I show that continuous improvement is an integral part of the Agile mindset and of the Agile principles and practices, and I give tips and advice for improving with Agile:

Silver bullets do not exist in software development. Effective software teams decide for themselves how they do their work, adapt continuously and improve themselves. Continuous improvement is embedded in the Agile mindset and principles, and thus helps to increase flexibility in companies and deliver more value, argues Ben Linders.



Bits&Chips publishes a magazine about developments in the high-tech industry and organizes the yearly smart systems conferences (Software-Centric Systems Conference in 2016).

Earlier this year Bits&Chips published an English-language edition, containing my article Delivering quality software with Agile.

Categories: Blogs

Smalltalk-Inspired, Frameworkless, TDDed Todo-MVC

Matteo Vaccari - Tue, 08/16/2016 - 18:14
TL;DR

I wrote an alternative to the Vanilla-JS example of todomvc.com. I show that you can drive the design from the tests, that you don’t need external libraries to write OO code in JS, and that simple is better :)

Why yet another Todo App?

A very interesting website is todomvc.com. Like the venerable www.csszengarden.com shows many different ways to render the same page with CSS, todomvc.com instead shows how to write the same application with different JavaScript frameworks, so that you can compare the strengths and weaknesses of various frameworks and styles.

Given the amount of variety and churn of JS frameworks, it is very good that you can see a small-sized complete example: not so big that it takes ages to understand, but not so small as to seem trivial.

Any comparison, though, needs a frame of reference. A good one in this case would be writing the app with no frameworks at all. After all, if you can do a good job without frameworks, why incur the many costs of ownership of frameworks? So I looked at the vanillajs example provided, and found it lacking. My main gripe is that there is no clear “model” in this code. If this were real MVC, I would expect to find a TodoList that holds a collection of TodoItems; this sort of thing. Alas, the only “model” provided in that example has the unfortunate name of “Model” and is not a model at all; it’s a collection of procedures that read and write from browser storage. It’s not really a model because a real “model” should be a Platonic, infrastructure-free implementation of business logic.

There are other shortcomings to that implementation, including that the “view” has a “render” method that accepts the name of an operation to perform, making it way more procedural than I would like. This is so different to what I think of as MVC that made me want to try my hand at doing it better.

Caveats: I’m not a good JS programmer. I don’t know the language well, and I’m sure my code is clumsier than it could be. But I’m also sure that writing a frameworkless app is not a sign of clumsiness, ignorance or old age. Anybody can learn Angular, React or what have you. Learning frameworks is not difficult. What is difficult is to write good code, with or without frameworks. Learning to write good code without frameworks gives you incredible leverage: gone are the hours spent looking on StackOverflow for the magic incantations needed to make framework X do Y. Gone is the cost of maintenance inflicted on you by the framework developers, when they gingerly update the framework from version 3 to version 4. Gone is the tedium of downloading megabytes of compressed code from the server!

So what were my goals?
  • Simple design. This means: no frameworks! Really, frameworks are sad. Just write the code that your app needs, and write it well.
  • TDD: let the tests drive the design. I try to write tests that talk the language of the app specification, avoiding implementation details as much as possible
  • Smalltalk-inspired object orientation. JS generally pushes you to expose the state of objects as public properties. In Smalltalk, the internal state of an object is totally encapsulated. I emulated that with a simple trick that does not require extra libraries.
  • I had in the back of my mind the “count” example in Jill Nicola’s and Peter Coad’s OOP book. That is what I think of when I say “MVC”. I tried to avoid specifying this design directly in the tests, though.
  • Simple, readable code. You will be the judge of that.
How did it go?

The first time around I tried to work in a “presenter-first” style. After a while, I gave up and started again from scratch. The code was ugly, and I felt that I was committing the classic TDD mistake of forcing a preconceived design. So I started again, and the second time it was much nicer.

You cannot understand a software design process just by looking at the final result. It’s only by observing how the design evolved that you can see how the designer thinks. When I started again from scratch, my first tests looked like this:

beforeEach(function() {
  fixture = document.createElement('div');
  $ = function(selector) { return fixture.querySelector(selector); }
})

describe('an empty todo list', function() {
  it('returns an empty html list', function() {
    expect(new TodoListView([]).render()).to.equal('<ul class="todo-list"></ul>');
  });
});

describe('a list of one element', function() {
  it('renders as html', function() {
    fixture.innerHTML = new TodoListView(['Pippo']).render();
    expect($('ul.todo-list li label').textContent).equal('Pippo');
    expect($('ul.todo-list input.edit').value).equal('Pippo');
  });
});

The above tests are not particularly nice, but they are very concrete: they check that the view returns the expected HTML, with very few assumptions about the design. Note that the “model” in the beginning was just an array of strings.

The final version of those tests does not change much on the surface, but the logic is different:

beforeEach(function() {
  fixture = createFakeDocument('<ul class="todo-list"></ul>');
  todoList = new TodoList();
  view = new TodoListView(todoList, fixture);
})

it('renders an empty todo list', function() {
  view.render();
  expect($('ul.todo-list').children.length).to.equal(0);
});

it('renders a list of one element', function() {
  todoList.push(aTodoItem('Pippo'));
  view.render();
  expect($('li label').textContent).equal('Pippo');
  expect($('input.edit').value).equal('Pippo');
});

The better solution, for me, was to pass the document to the view object, call its render() method, and check how the document was changed as a result. This places almost no constraints on how the view should do its work. This, to me, was key to letting the test drive the design. I was free to change and simplify my production code, as long as the correct code was being produced.

Of course, not all the tests check the DOM. We have many tests that check the model logic directly, such as

it('can contain one element', function() {
  todoList.push('pippo');

  expect(todoList.length).equal(1);
  expect(todoList.at(0).text()).equal('pippo');
});

Out of a total of 585 test LOCs, we have 32% dedicated to testing the models, 7% for testing repositories, 4% testing event utilities and 57% for testing the “view” objects.

How long did it take me?

I did not keep a scrupulous count of pomodoros, but since I committed very often I can estimate the time taken from my activity on Git. Assuming that every stretch of commits starts with about 15 minutes of work before the first commit in the stretch, it took me about 18 and a half hours of work to complete the second version, distributed over 7 days (see my calculations in this spreadsheet). The first version, the one I discarded, took me about 6 and a half hours, over two days. That makes it 25 hours of total work.

What does it look like?

The initialization code is in index.html:

<script src="js/app.js"></script>
<script>
  var repository = new TodoMvcRepository(localStorage);
  var todoList = repository.restore();
  new TodoListView(todoList, document).render();
  new FooterView(todoList, document).render();
  new NewTodoView(todoList, document).render();
  new FilterByStatusView(todoList, document).render();
  new ClearCompletedView(todoList, document).render();
  new ToggleAllView(todoList, document).render();
  new FragmentRepository(localStorage, document).restore();
  todoList.subscribe(repository);
</script>

I like it. It creates a bunch of objects, and starts them. The very first action is to create a repository, and ask it to retrieve a TodoList model from browser storage. The FragmentRepository should perhaps be better named FilterRepository. The todoList.subscribe(repository) makes the repository subscribe to changes in the todoList model. This is how the model is saved whenever there’s a change.

Each of the “view” objects takes the model and the DOM document as parameters. As you will see, these “views” also perform the function of controllers. This is how they came out of the TDD process. They probably don’t conform exactly to MVC, but who cares, as long as they are small, understandable and testable?

Each of the “views” handles a particular UI detail: for instance, the ClearCompletedView is in js/app.js:

function ClearCompletedView(todoList, document) {
  todoList.subscribe(this);

  this.notify = function() {
    this.render();
  }

  this.render = function() {
    var button = document.querySelector('.clear-completed');
    button.style.display = (todoList.containsCompletedItems()) ? 'block' : 'none';
    button.onclick = function() {
      todoList.clearCompleted();
    }
  }
}

The above view subscribes itself to the todoList model, so that it can update the visibility of the button whenever the todoList changes, as the notify method will then be called.
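
The subscribe/notify mechanism itself is tiny; here is a minimal sketch of how TodoList might implement it (the real model does more than this):

function TodoList() {
  var observers = [];

  this.subscribe = function(observer) {
    observers.push(observer);
  }

  this.notify = function() {
    observers.forEach(function(observer) {
      observer.notify();
    });
  }

  // ... push, at, length, containsCompletedItems, etc.
}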

The test code is in the test folder. For instance, the test for the ClearCompletedView above is:

describe('the view for the clear complete button', function() {
  var todoList, fakeDocument, view;

  beforeEach(function() {
    todoList = new TodoList();
    todoList.push('x', 'y', 'z');
    fakeDocument = createFakeDocument('<button class="clear-completed">Clear completed</button>');
    view = new ClearCompletedView(todoList, fakeDocument);
  })

  it('does not appear when there are no completed', function() {
    view.render();
    expectHidden($('.clear-completed'));
  });

  it('appears when there are any completed', function() {
    todoList.at(0).complete(true);
    view.render();
    expectVisible($('.clear-completed'));
  });

  it('reconsiders status whenever the list changes', function() {
    todoList.at(1).complete(true);
    expectVisible($('.clear-completed'));
  });

  it('clears completed', function() {
    todoList.at(0).complete(true);
    $('.clear-completed').onclick();
    expect(todoList.length).equal(2);
  });

  function $(selector) { return fakeDocument.querySelector(selector); }
});

Things to note:

  • I use a real model here, not a fake. This gives me confidence that the view and the model work correctly together, and allows me to drive the development of the containsCompletedItems() method in TodoList. However, it does couple the view and the model tightly.
  • I use a simplified “document” here, that only contains the fragment of index.html that this view is concerned about. However, I’m testing with the real DOM in a real browser, using Karma. This gives me confidence that the view will interact correctly with the real browser DOM. The only downside is that the view knows about the “clear-completed” class name.
  • The click on the button is simulated by invoking the onclick handler.

If you are curious, here is the implementation of createFakeDocument:

function createFakeDocument(html) {
  var fakeDocument = document.createElement('div');
  fakeDocument.innerHTML = html;
  return fakeDocument;
}

It’s that simple to test JS objects against the real DOM.

All the production code is in file js/app.js. An example model is TodoItem:

function TodoItem(text, observer) {
  // State is private to the closure; it can only change through the methods.
  var complete = false;

  this.text = function() {
    return text;
  };

  this.isCompleted = function() {
    return complete;
  };

  this.complete = function(isComplete) {
    complete = isComplete;
    if (observer) observer.notify();
  };

  this.rename = function(newText) {
    if (text == newText)
      return;
    text = newText.trim();
    if (observer) observer.notify();
  };
}

As you can see, I used a very simple style of object-orientation. I do not use (or need here) prototype inheritance, but I do encapsulate object state well.
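
As a quick illustration of what that encapsulation buys (the logger observer below is mine, not part of the app): the completed flag lives only in the constructor’s closure, so the only way to reach it is through the methods.

var logger = { notify: function() { console.log('item changed'); } };
var item = new TodoItem('buy milk', logger);

item.complete(true);              // logs 'item changed'
console.log(item.isCompleted());  // true
console.log(item.completed);      // undefined: the flag is not a property
item.rename('buy oat milk');      // logs 'item changed' again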

I’m not showing the TodoList model because it’s too long :(. I don’t like this, but I don’t have a good idea at this moment to make it smaller. Another class that’s too long and complex is TodoListView, with about 80 lines of code. I could probably break it down into TodoListView and TodoItemView, making it a composite view with a smaller view for each TodoItem. That would require creating and destroying the views dynamically. I don’t know if that would be a good idea; I haven’t tried it yet. A rough sketch of the idea follows.
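
If I were to try the composite, it might look roughly like this; everything below is a hypothetical sketch (the '.todo-list' selector and TodoItemView are my inventions), not code from the app:

function TodoListView(todoList, document) {
  todoList.subscribe(this);

  this.notify = function() {
    this.render();
  };

  this.render = function() {
    var container = document.querySelector('.todo-list');
    // Throw away the old item views and build fresh ones.
    container.innerHTML = '';
    for (var i = 0; i < todoList.length; i++) {
      new TodoItemView(todoList.at(i), container).render();
    }
  };
}

function TodoItemView(todoItem, container) {
  this.render = function() {
    var li = container.ownerDocument.createElement('li');
    li.textContent = todoItem.text();
    li.className = todoItem.isCompleted() ? 'completed' : '';
    container.appendChild(li);
  };
}

Recreating the item views from scratch on every render sidesteps the create/destroy bookkeeping, at the cost of rebuilding the DOM fragment each time; whether that trade-off is acceptable is exactly what the experiment would have to show.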

Comparison with other Todo-MVC examples

How does it compare to the other examples? There is no way I can read all of the examples, let alone understand them. However, there is a simple metric that I can use to compare my outcome: plain LOC, counting just the executable lines and omitting comments and blank lines. After all, if you use a framework, I expect you to write less code; otherwise, it seems to me that either the framework is not valuable, or you can’t use it well, which means that it’s not valuable to you. This is the table of LOCs, computed with Cloc. (Caveat: I tried to exclude all framework and library code, but I’m not sure I did that correctly for all examples.) My version is the one labelled “vanillajs/xpmatteo”. I’m excluding test code.

 LOC  Example
1204  typescript-angular/js
1185  ariatemplates/js
 793  aurelia
 790  socketstream
 782  typescript-react/js
 643  gwt/src
 631  closure/js
 597  dojo/js
 594  puremvc/js
 564  vanillajs/js
 529  dijon/js
 508  enyo_backbone/js
 489  typescript-backbone/js
 481  vanilla-es6/src
 479  flight/app
 475  lavaca_require/js
 468  componentjs/app
 432  duel/src/main
 383  polymer/elements
 364  cujo/app
 346  sapui5/js
 321  vanillajs/xpmatteo
 317  scalajs-react/src/main/scala
 311  backbone_marionette/js
 310  ampersand/js
 295  sammyjs/js
 295  backbone_require/js
 287  extjs_deftjs/js
 284  durandal/js
 280  rappidjs/app
 276  thorax/js
 271  troopjs_require/js
 265  angular2/app
 256  angularjs/js
 249  mithril/js
 242  thorax_lumbar/src
 235  chaplin-brunch/app
 233  vanilladart/web/dart
 233  somajs_require/js
 232  serenadejs/js
 226  emberjs/todomvc/app
 224  spine/js
 224  exoskeleton/js
 214  backbone/js
 213  meteor
 207  angular-dart/web
 190  somajs/js
 167  riotjs/js
 164  react-alt/js
 156  angularjs_require/js
 147  ractive/js
 146  olives/js
 146  knockoutjs_require/js
 145  canjs_require/js
 139  atmajs/js
 132  firebase-angular/js
 130  foam/js
 129  canjs/js
 124  vue/js
  99  knockback/js
  98  react/js
  96  angularjs-perf/js
  34  react-backbone/js

Things I learned

It’s been fun and I learned a lot about JS and TDD. Many framework-based solutions are shorter than mine, and that’s to be expected. However, all you need to know to understand my code is JS.

TDD works best when you try to avoid pushing it to produce your preconceived design ideas. It’s much better when you follow the process: write tests that express business requirements, write the simplest code to make the tests pass, refactor to remove duplication.

Working in JS is fun; however, not all things can be tested nicely with the approach I used here. I often checked in the browser that the features I had test-driven were really working. Sometimes they didn’t, because I had forgotten to change the “main” code in index.html to use the new feature. At one point I had an unwanted interaction between two event handlers: the handler for the onchange event fired when the edit text was changed by the onkeyup handler. I wasn’t able to write a good test for this, so I resorted to simply testing that the onkeyup handler removed the onchange handler before acting on the text. (This is not ideal, because it tests the implementation instead of the outcome.)
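
To make the workaround concrete, here is roughly the shape of the code the test pins down; the function name, the selector and the wiring are reconstructed for illustration, not copied from the app:

function attachEditHandlers(todoList, index, edit) {
  var ENTER_KEY = 13;

  // Commits the edit when the field loses focus with a changed value.
  edit.onchange = function() {
    todoList.at(index).rename(edit.value);
  };

  // Enter commits the edit explicitly.
  edit.onkeyup = function(event) {
    if (event.keyCode === ENTER_KEY) {
      // Detach onchange first: acting on the text would otherwise make
      // the stale onchange handler fire on top of this one.
      edit.onchange = null;
      todoList.at(index).rename(edit.value);
    }
  };
}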

You can do a lot of work without jQuery, especially since there is the querySelector API. However, in real work I would probably still use it, to improve cross-browser compatibility. It would probably also make my code simpler.

Categories: Blogs

Survey on Agile Manifesto 2.0

Ben Linders - Mon, 08/15/2016 - 17:59

Is There a Need For Agile Manifesto 2.0? That’s the question that Kamlesh Ravlani, Agile/Lean Coach and Scrum Trainer, is asking the Agile community. He is running a Survey on Agile Manifesto 2.0, which he announced on LinkedIn Pulse.

Lately there is a lot of buzz in the Agile community around the need to update the Agile Manifesto. Many agilists have been vocal about it and some have floated their own versions of the manifesto. Let’s explore collectively as a community the need for changes in the Agile Manifesto.



Please support this survey by answering three questions (it only takes a couple of minutes).

Kamlesh will publicly share the findings from the survey:

Who can participate?
All practitioners, Agile coaches, trainers and thought leaders are invited to share their opinion.

How will this information be used?
I intend to share the results of this survey with the community via recognized platforms, for example InfoQ, ScrumAlliance, etc.

I’ve responded to this survey and I’m hoping many of you will do the same :-).

Categories: Blogs

Feedback in Agile

Ben Linders - Wed, 08/10/2016 - 11:38

Agile software development has feedback built in. Every iteration ends with a sprint review/demo and an agile retrospective, in which feedback is central. There are also opportunities for feedback during the iteration. Here is an overview of the various forms of feedback in agile and the benefits that feedback delivers.

Product Demonstration

The product demonstration (sprint review in Scrum) is meant to get feedback on the product. A good demo provides answers to questions such as:

  • Does the product do what it should do?
  • Is the product usable?
  • What functionality is needed next?
  • What can be improved about the product?

Agile Retrospective



In the agile retrospective the team reflects on its process. In the retrospective team members give each other feedback. This feedback provides insight into the way of working and helps to improve continuously.

A good retrospective provides insight into:

  • What went well and what did you learn as a team?
  • Which problems came up, and what would you want to change?
  • Which strengths and qualities does the team have?
  • How can the team develop itself further?

Waardevolle Agile Retrospectives is the first Dutch-language Agile book on facilitating retrospectives. It contains many exercises, the “what” and “why” of retrospectives, the business value and the benefits that retrospectives bring, plus practical tips and advice for introducing and improving retrospectives. Recommended for agile coaches, Scrum masters, project managers, product managers and facilitators who already have some experience with retrospectives.

Other feedback moments

The demo and the retrospective are the best-known feedback moments in Agile, but there are more. During the daily stand-up, team members can give each other feedback, for example about how they experience the collaboration in the team and how an activity went. In the planning game the team gives feedback on the user stories to the product owner; together they agree on the contents of the iteration.

What feedback delivers

Feedback in agile helps you to learn and to improve continuously. The benefits that feedback in agile delivers are:

  • With frequent, fast feedback it is easier to adjust course
  • Concrete feedback given shortly after an event makes it easier to take action
  • Improving in small steps is easier; fast feedback makes that possible
  • Good feedback improves the relationship between people and helps them to work together more effectively

You become agile by doing agile. If you want to achieve results with agile, good feedback is essential. The sprint review/demo and the agile retrospective provide continuous product and process improvement, enabling teams to deliver products efficiently and effectively.

Categories: Blogs

Gratis mini-workshop over Agile Retrospectives

Ben Linders - Wed, 08/10/2016 - 11:06

On September 21 I am giving a free mini-workshop on agile retrospectives in Groningen. In this mini-workshop I use exercises from my successful workshop Waardevolle Agile Retrospectives.

Retrospectives help you to apply agile effectively and to improve continuously. With them you tackle problems and create a good working atmosphere in your teams. Scrum masters and Agile coaches get more out of teams with the help of a toolbox of retrospective exercises.

In this mini-workshop Ben Linders, author of the book Waardevolle Agile Retrospectives, gives an introduction to the “why” and “what” of retrospectives. You practice various ways of doing retrospectives and get tips and advice for introducing and improving retrospectives.

This mini-workshop is organized in cooperation with AgileHubNoord, an independent network organization that aims to connect Agile professionals in the north of the Netherlands, to share knowledge, and to promote the Agile mindset.



Unfortunately there are no places left for this workshop (it completely “sold out” within a few days), but there is a waiting list: if people cancel, you will automatically be registered for the meetup.

I give the workshop Waardevolle Agile Retrospectives both via open enrollment and in-house, tailored to the specific needs of your company and situation. Contact me!

Categories: Blogs

Chapter on Visual Management added to What Drives Quality

Ben Linders - Wed, 08/10/2016 - 09:30

A new chapter that explores how visual management can be used to improve the quality of software products has been added to my second book, What Drives Quality.

One of the principles from agile and lean software development is transparency. Making things visible helps teams to decide what to develop and to collaborate effectively with their stakeholders. It can also help to increase the quality of software. You can apply visual management to make potential quality issues visible early and prioritize solving them. The examples that I provide explain clearly why quality matters and how visualization can be used to establish, maintain and even increase the quality of software products.



What Drives Quality provides insight into the factors that drive the quality of software products and services. Understanding what drives quality enables you to take action before problems actually occur, thus saving time and money.

The book What Drives Quality is available for a reduced price on Leanpub as long as it’s under development. If you buy the book now you will automatically get all chapters that are added in the future for free. So don’t wait too long, get your copy now!

Categories: Blogs

Why do you want to become agile?

Ben Linders - Mon, 08/08/2016 - 13:16

Becoming agile can help to achieve organizational goals. But setting agile as a goal for an organization does not work. The goal for a software organization should be to achieve results by delivering valuable products and services, not to become agile. Hence my question: do you know why you want to become agile?
Yes, seriously, why would you do agile? There are lots of good reasons (and also some less good ones), but what’s your reason to become agile? What do you expect from agile?

Agile transformations seriously impact organizations (they should!). It’s a reorganization of people, work, and authorities. Employees are asked to think about the way they want to do their work, and to take responsibility. Managers have to give room to their employees. There must be a good reason to do all of this. You should know the reason why you want to become agile, and let everyone involved know.



It is important to know why you want to increase the agility of your organization and what you expect to achieve with agile: why you would want to work in an agile way, and why you want your culture to become agile.

Again, agile is not a goal or a state that needs to be reached. It’s important that every manager and employee knows why the organization starts an agile journey and what is expected from agile.

Reasons to become agile

If I ask people in organizations that I work with why they want to become agile, they often look surprised at first. Of course they want agile! Everybody is doing agile, so it must be good. Agile is supposed to make them faster, cheaper and better. So let’s do it. If only it were that easy … every organization would be truly agile by now.

Does knowing the reason matter? Yes, it does! If you know the reason why you want to become agile, the chances of success increase significantly. If people know why they have to change, if they see the purpose, they are more willing to do it.

Some of the reasons that I have heard in organizations on why they want to become agile are:

  • Deliver the right products and services
  • Be able to deliver faster
  • Increase customer satisfaction and win new customers
  • Create innovative products with motivated employees
  • Reduce the cost of development and management
  • Improve the quality of goods and services
  • Effective cooperation between development and management
  • (your reason here)

My advice to companies is to think about why they want to become agile. Pick one reason, and one only. State very clearly, in one sentence, what your main objective for becoming agile is: what would make your agile transformation successful? Going for one goal is hard enough. Also, the reason you choose impacts the way that agile will be applied (it should!), so choose your reason carefully.

What is your goal with agile?

Do you want to deliver products with good quality? Or be able to better meet the needs of your customers? Lower your costs? Increase the motivation of your employees? Whatever your reason is to become agile, contact me, and I’ll help you to get results :-).

Categories: Blogs

Books by Ben Linders on Leanpub

Ben Linders - Mon, 08/08/2016 - 11:07

All of the books that I have published on Leanpub are now available in a bundle: Books by Ben Linders. You get a 30% discount when you buy my books with this bundle.

Currently this bundle contains three books:



With plenty of exercises for your personal retrospective toolbox, Getting Value out of Agile Retrospectives will help you to become more proficient in doing retrospectives and to get more out of them.

My book What Drives Quality helps you to prevent software problems from happening by building a shared understanding of what drives software quality. It enables you to take action effectively, saving time and money!

My book Continuous Improvement makes you aware of the importance of continuous improvement, explores how it is engrained in agile, and provides suggestions that Scrum masters, agile coaches, well everybody, can use in their daily work to improve continuously and increase team and organizational agility.

My 2nd and 3rd books are being written incrementally. Currently they are only sold via Leanpub. When you buy a book on Leanpub you will automatically receive new chapters when they become available, free of charge.

All books that I publish on Leanpub in the future will be added to this bundle.

Categories: Blogs

Masterclasses at Agile Tour Kaunas

Ben Linders - Mon, 08/01/2016 - 18:49

I’m giving two masterclasses at Agile Tour Kaunas on October 11 and 12, on Retrospectives and on Agile and Lean. Tickets for these agile workshops can be bought on the Agile Tour Lithuania courses webpage.

The two masterclasses are:

  • Retrospectives
  • Agile and Lean



This is the first time that I’m giving my workshops in Lithuania. I’m grateful to Agile Lithuania for inviting me to their country.

The early bird price (till September 1st) for my masterclasses is 319 Eur (VAT not applicable). The regular price is 379 Eur.

As an adviser, coach and trainer I help organizations deploy effective software development and management practices. I provide workshops and training, both as public and as private sessions. Here’s a list of my upcoming public sessions.

Categories: Blogs

A Summary of More Fearless Change in 15 Tweets

Ben Linders - Fri, 07/29/2016 - 11:58

The book More Fearless Change by Mary Lynn Manns and Linda Rising provides ideas for driving change in organizations, using the format of patterns. This book is a new and extended version of their successful book Fearless Change.

I did an interview with Mary Lynn and Linda about how people view change in organizations, the purpose of patterns and the benefits that organizations can get from using them, the new patterns described in More Fearless Change and the insights added to the existing patterns, and their expectations about what the future will bring in organizational change. You can read it on InfoQ: Q&A on the Book More Fearless Change.

15 Quotes from More Fearless Change



Here’s a set of 15 quotes from the new patterns that have been added in More Fearless Change. I’m tweeting these quotes with #fearlesschange:

  • No matter how great your new idea is and how well prepared you are, you are bound to meet some level of resistance
  • Inspire people throughout the change initiative with a sense of optimism rather than fear
  • When you feel discouraged, look for the bright spots among the challenges that surround you
  • Displaying a warm smile and a willingness to be nice even when negativity surrounds you can go a long way
  • To make progress toward your goal, state precisely what you will do as you take the next baby step
  • To encourage adoption of a new idea, experiment with removing obstacles that might be standing in the way
  • Change the environment in a way that will encourage people to adopt the new idea
  • When you have a chance to introduce someone to your idea, you don’t want to stumble around for the right words to say
  • Persuasion tactics must consider what people are logically thinking as well as what they are feeling
  • By focusing on the future, individuals may be more motivated to let go of the past
  • Your change initiative is a series of baby steps
  • As you prepare to move forward, occasionally look for a quick and easy win that will have visible impact
  • Stay in touch with your supporters; never assume that news of your progress is known across the organization
  • Rumors need to be debunked before they take root and create significant concerns and anxieties during the change
  • You can’t spend time and energy addressing every bit of resistance you meet

Patterns can help you to drive change

If you want to truly change organizations, don’t try to plan it up front and don’t look for recipes. That won’t work (literally!). Patterns provide a useful format to convey ideas and to apply those ideas in a specific situation to do sustainable change.

Mary Lynn Manns and Linda Rising did a great job in this updated and extended version of Fearless Change. The experiences that they added to the existing patterns help you to get a deeper understanding, and the new patterns that they describe in this book are very valuable.

The patterns described in More Fearless Change help you to recognize situations and to come up with solutions for dealing with them. If you are dealing with change in organizations (and who isn’t nowadays), then I highly recommend reading this book and keeping it close to you, as it will be useful many times!

Categories: Blogs