
Jimmy Bogard
The Barley Architect

AutoMapper 6.1.0 released

Wed, 06/14/2017 - 13:25

See the release notes:

v6.1.0

As with all of our releases, the major 6.0 release broke some APIs, while this dot release adds a number of new features. The big features for 6.1.0 center on reverse-mapping support. First, we now detect cycles between mapped classes and automatically preserve references.

Much larger, however, is unflattening. For reverse mapping, we can now unflatten a DTO back into a richer model:

public class Order {  
  public decimal Total { get; set; }
  public Customer Customer { get; set; } 
}
public class Customer {  
  public string Name { get; set; }
}

We can flatten this into a DTO:

public class OrderDto {  
  public decimal Total { get; set; }
  public string CustomerName { get; set; }
}

We can map both directions, including unflattening:

Mapper.Initialize(cfg => {  
  cfg.CreateMap<Order, OrderDto>()
     .ReverseMap();
});

By calling ReverseMap, AutoMapper creates a reverse mapping configuration that includes unflattening:

var customer = new Customer {  
  Name = "Bob"
};
var order = new Order {  
  Customer = customer,
  Total = 15.8m
};

var orderDto = Mapper.Map<Order, OrderDto>(order);

orderDto.CustomerName = "Joe";

Mapper.Map(orderDto, order);

order.Customer.Name.ShouldEqual("Joe");  

Dogs and cats living together! We now have unflattening.

Enjoy!

Categories: Blogs

Dealing With Optimistic Concurrency Control Collisions

Wed, 05/24/2017 - 00:06

Optimistic Concurrency Control (OCC) is a well-established solution to a rather old problem - handling two (or more) concurrent writes to a single object/resource/entity without losing writes. OCC typically works by including a timestamp (or version) as part of the record, and validating it during a write:

  1. Begin: Record timestamp
  2. Modify: Read data and make tentative changes
  3. Validate: Check to see if the timestamp has changed
  4. Commit/Rollback: Atomically commit or rollback transaction

Ideally, steps 3 and 4 happen together, to avoid a dirty read. Most applications don't need to implement OCC by hand; you can rely either on the database (through snapshot isolation) or on an ORM (Entity Framework's concurrency control). In either case, we're dealing with concurrent writes to a single record by chucking one of the writes out the window.
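A minimal in-memory sketch of the validate/commit steps above (the type and names here are illustrative; real systems push this into the database or ORM):

```csharp
using System;

// Illustrative only: a version number stands in for the timestamp.
public class VersionedRecord
{
    public string Data { get; private set; }
    public int Version { get; private set; }

    public VersionedRecord(string data)
    {
        Data = data;
        Version = 1;
    }

    // Validate + commit in one step: the write only succeeds if the caller's
    // snapshot of the version is still current.
    public bool TryUpdate(string newData, int expectedVersion)
    {
        if (Version != expectedVersion)
            return false;      // collision: someone else committed first
        Data = newData;
        Version++;             // commit bumps the version
        return true;
    }
}
```

The second writer's stale snapshot is how the collision surfaces - the write simply fails, and we then have to decide what to do about it.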

But OCC doesn't tell us what to do when we encounter a collision. Typically this is surfaced through an error (from the database) or an exception (from infrastructure). If we simply do nothing, the easiest option, we return the error to the client. Done!

However, in systems where OCC collisions are more likely, we'll likely need some sort of strategy to provide a better experience to end users. In this area, we have a number of options available (and some we can combine):

  • Locking
  • Retry
  • Error out (with a targeted message)

My least favorite is the first option, locking, though it can be valuable at times.

Locking to avoid collisions

In this pattern, we'll have the user explicitly "check out" an object for editing. You've probably seen this with older CMSes, where you'll look at a list of documents and some might say "Checked out by Jane Doe", preventing you from editing. You might be able to view, but that's about it.

While this flow can work, it's a bit hostile for the user, as how do we know when the original user is done editing? Typically we'd implement some sort of timeout. You see this in cases of finite resources, like buying a movie ticket or sporting event. When you "check out" a seat, the browser tells you "You have 15:00 to complete the transaction". And the timer ticks down while you scramble to enter your payment information.

This kind of flow makes better sense in that scenario, where our payment depends on choosing the seat we want. The lock is explicit to the user holding it (the countdown timer) and to other users (those seats simply aren't shown as available). That's a good UX.

I've also had the OTHER kind of UX, where I yell across the cube farm "Roger are you done editing that presentation yet?!?"
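The check-out-with-timeout idea can be sketched as a lock that expires, which answers the "how do we know when the original user is done?" question (all names here are illustrative):

```csharp
using System;

// Illustrative sketch: a lock another user can take over once it expires.
public class EditLock
{
    public string Owner { get; private set; }
    public DateTime ExpiresAtUtc { get; private set; }

    public bool TryAcquire(string user, DateTime nowUtc, TimeSpan holdFor)
    {
        var heldBySomeoneElse = Owner != null && Owner != user && nowUtc < ExpiresAtUtc;
        if (heldBySomeoneElse)
            return false;               // "Checked out by Jane Doe"

        Owner = user;                   // acquire, or extend our own hold
        ExpiresAtUtc = nowUtc + holdFor;
        return true;
    }
}
```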

Retry

Another popular option is to retry the transaction, steps 1-4 above. If someone has edited the record from under us, we just re-read the record including the timestamp, and try again. If we can detect this kind of exception, from a broad category of transient faults, we can safely retry. If it's a more permanent exception, validation error or the like, we can fall back to our normal error handling logic.

But how much should we retry? One time? Twice? Ten times? Until the eventual heat death of the universe? Well, probably not that last one. And will an immediate retry result in a higher likelihood of success? And in the meantime, what is the user doing? Waiting?

With an immediate error returned to the user, we leave it up to them to decide what to do. Ideally we've combined this with option number 3, and give them a "please try again" message.

That still leaves the question - if we retry, what should be our strategy?

It should probably be no surprise here that we have a lot of options on retries, and also a lot of literature on how to handle them.

Before we look at retry options, we should go back to our user - a retry should be transparent to them, but we do need to set some bounds here. Assuming that this retry is happening as the result of a direct user interaction where they're expecting a success or failure as the result of the interaction, we can't just retry forever.

Regardless of our retry decision, we must return some sort of result to our user. A logical timeout makes sense here: make sure the user gets something back within some time T. Maybe that's 2, 5, or 10 seconds; it's highly dependent on your end users' expectations. If they're already dealing with a highly contentious resource, waiting might be okay for them.
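One way to sketch those bounds, assuming we can tell the transient (concurrency) failures apart from the permanent ones (the names here are illustrative, not a specific library):

```csharp
using System;

public static class Retry
{
    // Retry only failures the caller deems transient, capped by both an
    // attempt count and an overall deadline, so the user always gets an
    // answer within time T.
    public static T Execute<T>(
        Func<T> action,
        Func<Exception, bool> isTransient,
        int maxAttempts,
        TimeSpan deadline)
    {
        var startedUtc = DateTime.UtcNow;
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (Exception ex) when (isTransient(ex))
            {
                var outOfAttempts = attempt >= maxAttempts;
                var outOfTime = DateTime.UtcNow - startedUtc >= deadline;
                if (outOfAttempts || outOfTime)
                    throw;  // give up and surface the error to the user
            }
        }
    }
}
```

Permanent failures (validation errors and the like) don't match the exception filter and fall straight through to normal error handling.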

The elephant

One option I won't discuss, but is worth considering, is to design your entity so that you don't need concurrency control. This could include looking at eventually consistent data structures like CRDTs, naturally idempotent structures like ledgers, and more. For my purposes, I'm going to assume that you've exhausted these options and really just need OCC.

In the next post, I'll take a look at a few retry patterns and some ways we can incorporate them into a simple web app.


Respawn 0.3.0-preview1 released for netstandard2.0

Wed, 05/17/2017 - 05:57

Respawn, a small library designed to ease integration testing by intelligently clearing out test data, now supports .NET Core. Specifically, I now target:

  • net45
  • netstandard1.2
  • netstandard2.0

I had waited quite a long time because I needed netstandard2.0 support for some SqlClient pieces. With those pieces in place, I can now support running Respawn on full .NET and .NET Core 1.x and 2.0 applications (and tests).

Respawn works by scanning foreign key relationships in your database and determining the correct order to clear out tables in a test database. In my testing, this method is at least 3x faster than TRUNCATE, dropping/recreating the database, or disabling FKs and indiscriminately deleting data.
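The core idea can be sketched as a topological sort over the foreign key graph (an illustrative reimplementation, not Respawn's actual code, and it ignores cyclic FKs):

```csharp
using System;
using System.Collections.Generic;

public static class DeleteOrder
{
    // references: table -> tables it points at via foreign keys.
    // Returns an order where every table appears before the tables it
    // references, so deletes never violate an FK constraint.
    public static List<string> Compute(Dictionary<string, List<string>> references)
    {
        var postOrder = new List<string>();
        var visited = new HashSet<string>();

        void Visit(string table)
        {
            if (!visited.Add(table)) return;
            if (references.TryGetValue(table, out var parents))
                foreach (var parent in parents)
                    Visit(parent);
            postOrder.Add(table);  // referenced tables land first here
        }

        foreach (var table in references.Keys)
            Visit(table);

        postOrder.Reverse();       // referencing tables (FK holders) first
        return postOrder;
    }
}
```

Because the ordering is computed once from the schema, clearing the tables is then just a batch of plain DELETE statements, which is where the speedup comes from.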

Since netstandard2.0 is still in preview1 status, this is a preview release for the netstandard2.0 support. The other two TFMs are production-ready. To use Respawn, create a checkpoint:

static Checkpoint checkpoint = new Checkpoint  
{
    TablesToIgnore = new[]
    {
        "sysdiagrams",
        "tblUser",
        "tblObjectType",
    },
    SchemasToExclude = new []
    {
        "RoundhousE"
    }
};

And configure any tables/schemas you want to skip. Then, just call "Reset" at the beginning of your test (or in a setup method) to reset your local test database:

checkpoint.Reset("MyConnectionStringName");  

I support SQL Server (any version this millennium), SQL Server Compact Edition, and PostgreSQL, but the schema provider is pluggable (and since no one really does ANSI schema views the same way, it has to be).

Enjoy!


Domain Command Patterns - Handlers

Mon, 05/01/2017 - 23:45

In the last post, we looked at validation patterns in domain command handlers in response to a question, "Command objects should [always/never] have return values". This question makes an assumption - that we have command objects!

In this post, I want to look at a few of our options for handling domain commands.

Request to Domain

When I look at command handling, I'm really talking about the actual "meat" of the request handling. The part that mutates state. In very small applications, or very simple ones, I can put this "mutation" directly in the request handling (i.e., the controller action or event handler for stateful UIs).

But for most of the systems I build, it's too much to shove this all in the edge of my application. This raises the question - where should this logic go? We can look at a number of design patterns (including the Command Object pattern). Ultimately, I have a block of code that mutates state, and I need to decide where to put it.

Static Helper/Manager/Service Functions

A very simple option would be to create some static class to host mutation functions:

public static class SomethingManager {  
    public static void DoSomething(SomethingRequest request,
        MyDbContext db) {
        // Domain logic here
    }
}

If our method needed to work with any other objects to do its work, these would all be passed in as method arguments. We wouldn't use static service location, as we do have some standards. But with this approach, we can use all sorts of functional tricks at our disposal to build richness around this simple pattern.

How you break up these functions into individual separate classes is up to you. You might start off with a single static class per project, static class per domain object, per functional area, or per request. The general idea is that although C# doesn't support the full functional richness of F#, static functions provide a reasonable alternative.

The advantage of this approach is that it's completely obvious exactly what the logic is. The return type above is "void", but as we saw with the validation options, it could be some sort of return object as well.

DDD Service Classes

Slightly different from the static class is the DDD Service Pattern. The big difference is that the service class is instance-oriented and often uses dependency injection. The other difference is that, in the wild, I typically see service classes oriented around an entity or aggregate:

public class SomethingService : ISomethingService {
    private readonly MyDbContext _db;

    public SomethingService(MyDbContext db) {
        _db = db;
    }

    public void DoSomething(SomethingRequest request) {
        // Domain logic here
    }
}

Services in the DDD world should be designed around a coordination activity. After all, the original definition was that services coordinate between aggregates, or between aggregates and external services. But that's not what I typically see; instead, I see Java Spring-style DDD services where we have an entity Foo, and then:

  • FooController
  • FooService
  • FooRepository

I would highly discourage these kinds of services, as we've introduced arbitrary layering without much value. If we're doing DDD right, services would be a bit rarer, and therefore not needed for every single command in our system.

Request-Specific Handlers

With both the service and manager options, we typically see multiple requests handled by multiple methods inside the same class. Although there's nothing stopping you from creating a service per request, the request-specific handler achieves this same end goal: a single class and method handling each individual request.

I copied this pattern enough times that I finally extracted the code into a library, MediatR. We create a class to encapsulate handling each individual request:

public class SomethingRequestHandler : IRequestHandler<SomethingRequest> {  
    public void Handle(SomethingRequest request) {
    }
}

There are variants for handling a request: sync/async, return value/void and combinations thereof.
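For example, the async, value-returning shape looks roughly like this (the interface below is a stand-in mirroring MediatR's pattern so the sketch is self-contained; exact names and signatures vary across MediatR versions):

```csharp
using System.Threading;
using System.Threading.Tasks;

// Stand-in for the library's handler interface, included for self-containment.
public interface IRequestHandler<TRequest, TResponse>
{
    Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken);
}

public class SomethingRequest
{
    public string Value { get; set; }
}

public class SomethingResult
{
    public bool Succeeded { get; set; }
}

public class SomethingRequestHandler : IRequestHandler<SomethingRequest, SomethingResult>
{
    public Task<SomethingResult> Handle(SomethingRequest request, CancellationToken cancellationToken)
    {
        // Domain logic here; the handler returns a result instead of void
        return Task.FromResult(new SomethingResult { Succeeded = request.Value != null });
    }
}
```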

This tends to be my default choice for handling domain commands, as it encourages me to isolate the logic for each request from any other request.

But sometimes the logic in my handler gets complicated, and I want to push that behavior down.

Domain Aggregate Functions

Finally, we can push our behavior down directly into our aggregates:

public class SomethingAggregate {  
    public void DoSomething(SomethingRequest request) {
    }
}

Or, if we don't want to couple our aggregates to the external request objects, we can destructure our request object into individual values:

public class SomethingAggregate {  
    public void DoSomething(string value1, int value2, decimal value3) {
    }
}

In my systems, I tend to start with simple, procedural code inside a handler. When that code exhibits code smells, I push the behavior down into my domain objects. Of course, we could do that by default instead, reserving procedural code for the CRUD areas of the application.

This certainly isn't an exhaustive list of domain command patterns, but it's 99% of what I typically see. I can mix multiple choices here as well - a handler for the logic to load/save, and a domain function for the actual "business logic".
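That mix might look something like this (illustrative types only; the in-memory dictionary stands in for a real DbContext or repository):

```csharp
using System.Collections.Generic;

public class SomethingAggregate
{
    public string Value { get; private set; }

    // The "business logic" lives on the aggregate
    public void DoSomething(string value)
    {
        Value = value;
    }
}

public class SomethingRequestHandler
{
    private readonly Dictionary<int, SomethingAggregate> _store;

    public SomethingRequestHandler(Dictionary<int, SomethingAggregate> store)
    {
        _store = store;
    }

    // The handler only loads, delegates, and saves
    public void Handle(int id, string value)
    {
        var aggregate = _store[id];      // load
        aggregate.DoSomething(value);    // delegate the business logic
        // save would go here (e.g., db.SaveChanges())
    }
}
```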

I'm ignoring the actual Command Object pattern; while I find it can fit well with UI-level commands, it doesn't fit well with domain-level commands.

We can mix our validation choices too, and have field validation done by a framework, domain validation done by our aggregates, and use domain aggregate functions that return "result" objects.

So which way is "best"? I can't really say; a lot of this is a judgement call for your team. But with several options on the table, we can at least make an informed decision.
