Jimmy Bogard

The Barley Architect

Domain Command Patterns - Validation

Tue, 04/18/2017 - 21:36

I don't normally like to debate domain modeling patterns (your project won't succeed or fail because of what you pick), but I do still like to have a catalog of patterns available to me. And one question that comes up often is "how should I model commands?":

https://twitter.com/Mystagogue/status/853045473115987969

In general, apps I build follow CQRS, where I split my application architecture into distinct commands and queries. However, no two applications are identical in terms of how they've applied CQRS. There always seem to be some variations here and there.

My applications also tend to have explicit objects for external "requests", which are the types bound to the HTTP request variables. This might be a form POST, or it might be a JSON POST, but in either case, there's a request object.

The real question is - how does that request object finally affect my domain model?

Request to Domain

Before I get into different patterns, I like to make sure I understand the problem I'm trying to solve. In the above picture, from the external request perspective, I need a few questions answered:

  • Was my request accepted or rejected?
  • If rejected, why?
  • If accepted, what happened?

In real life, there are no fire-and-forget requests; you want some sort of acknowledgement. I'll keep this in mind when looking at my options.

Validation Types

First up is to consider validation. I tend to look at validation on at least a couple of different levels:

  • Request validation
  • Domain validation

Think of request validation as "have I filled out the form correctly?". These rules translate easily to client-side validation. If it were 100 years ago, this would be a desk clerk just making sure you've filled in all the boxes appropriately. This sort of validation can be returned to the client immediately and does not require any domain-specific knowledge.
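
As a rough sketch of what that can look like (the request type, property names, and rules here are my own illustration, not from the original post), request validation can often be expressed as declarative rules on the request object itself:

using System.ComponentModel.DataAnnotations;

// Illustrative request object - the rules only check the request against
// itself ("have I filled out the form correctly?"), never the system state.
public class RegisterMemberRequest
{
    [Required]
    [StringLength(100)]
    public string FirstName { get; set; }

    [Required]
    [EmailAddress]
    public string Email { get; set; }
}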

The next level up is domain validation, or as I've often seen it called, "business rule validation". This is more of a system state validation: "can I apply this change to my system based on the current state of my system?" I might be checking the state of a single entity, a group of entities, an entire collection of entities, or the entire system. The key here is that I'm not checking the request against itself, but against the system state.
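
To make the contrast concrete, here's a hypothetical domain rule (the Order entity and the cancellation rule are purely my own illustration): the request may be perfectly well-formed, yet the current state of the system decides whether the change is allowed:

public enum OrderStatus { Pending, Shipped, Cancelled }

// Hypothetical entity - whether a cancellation is allowed depends on the
// current state of the system, not on the shape of the incoming request.
public class Order
{
    public OrderStatus Status { get; private set; }

    public bool CanCancel => Status == OrderStatus.Pending;
}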

While you can mix request validation and domain validation together, it's not always pretty. Validation frameworks don't mix the two together well, and these days I generally recommend against using validation frameworks for domain validation. I've done it a lot in the past and the results...just aren't great.

As a side note, I avoid as much as possible any kind of validation that changes the state of the system and THEN validates. My validation should take place before I attempt to change state, not after. This means no validation attributes on entities, for example.

Validation concerns

Next, I need to concern myself with how validation errors bubble up. For request validation, that's rather simple: I can immediately return 400 Bad Request with a descriptive body of what exactly is off with the request. Typically, request validation happens in the UI layer of my application, built in to the MVC framework I'm using. Request validation doesn't really affect the design of my domain validation.
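
In ASP.NET MVC terms, that typically looks something like the following (the controller and action names are mine, reusing the illustrative RegisterMemberRequest from above):

using Microsoft.AspNetCore.Mvc;

public class MembersController : Controller
{
    [HttpPost]
    public IActionResult Register(RegisterMemberRequest request)
    {
        // The framework populates ModelState during model binding;
        // request validation failures come back immediately as a 400.
        if (!ModelState.IsValid)
            return BadRequest(ModelState);

        // ...only now do we hand off to the domain for domain validation...
        return Ok();
    }
}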

Domain Validation

Now that we've split our validation concerns into request validation and domain validation, I need to decide how I want to validate the domain side, and how that information will bubble up. Remember - it's important to know not only that my request has failed, but why it failed.

On the domain side, understanding the design of the "why" is important. Can I have one reason, or multiple reasons, for failure? Does the reason need to include contextual data? Do I need to connect a failure reason to a particular input, or is the contextual data in the reason enough?

Next, how are the failures surfaced? When I pass the request (or command) to the domain, how does it tell me this command is not valid? Does it just return back a failure, or does it use some indirect means, like an exception?

public void DoSomething(SomethingRequest request) {  
    if (stateInvalid) {
        throw new DomainValidationException(reason);
    }
    entity.DoSomething();
}

or

public bool DoSomething(SomethingRequest request) {  
    if (stateInvalid) {
        return false;
    }
    entity.DoSomething();
    return true;
}

In either case, I have some method that is responsible for effecting the change. Where this method lives is something we can look at in the next post, but it's somewhere. I've gotten past the request-level validation and now need domain-level validation: can I apply this change based on the current state of the system? There are two ways I can surface this back out - directly via a return value, or indirectly via an exception.

Exceptional domain validation

At first glance, it might seem that using exceptions is a bad choice for surfacing validation failures. Exceptions should be exceptional, not part of normal operation. But exceptions would let me adhere to the CQS principle, where methods either perform an action or return data, but not both.

Personally, I'm not that hung up on CQS for these outer portions of my application; it's more of an OOP concern. Maybe if I were trying to follow OOP to the letter it would be important, but I'm far more concerned with clean code than OOP.

If I expect the exceptional case to be frequent, that is, the user frequently tries to do something that my domain validation disallows, then this wouldn't be a good choice. I shouldn't use exceptions just to get around the CQS guideline.

However, I do try to design my UX so that the user cannot get themselves in an invalid state. Even validations - my UX should guide the user so that they don't put in invalid data. The HTML5 placeholder attribute or explanatory text helps there.

But what about domain state? This is a bit more complex - but ideally, if a user isn't allowed to perform a state change, for whatever reason, then they are not presented with an option to do so! This can be communicated either with a disabled link/button, or simply removing the button/link altogether. In the case of REST, we just wouldn't return links and forms that were not valid state transitions.
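
As a sketch of what that can mean in code (the model shape and link-building here are my own illustration, not from the post), the representation only carries the links whose state transitions are currently valid:

using System.Collections.Generic;

public class Link
{
    public string Rel { get; set; }
    public string Href { get; set; }
}

public class OrderModel
{
    public int Id { get; set; }
    public string Status { get; set; }
    public List<Link> Links { get; } = new List<Link>();

    // Only advertise transitions the current state actually allows - an
    // order that can't be cancelled simply has no "cancel" link to follow.
    public static OrderModel From(int id, string status, bool canCancel)
    {
        var model = new OrderModel { Id = id, Status = status };
        if (canCancel)
            model.Links.Add(new Link { Rel = "cancel", Href = $"/orders/{id}/cancel" });
        return model;
    }
}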

If we're up-front designing our UX so that the user can't try to get themselves into a bad state, then exceptions would truly be exceptional, and then I believe it's OK to use them.
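
Concretely, the exception route could look something like this. The exception type and the controller-side mapping are my own sketch (reusing the illustrative `service` and `SomethingRequest` from the other samples), not code from the post:

using System;

public class DomainValidationException : Exception
{
    public DomainValidationException(string reason) : base(reason) { }
}

// At the outer edge, the UI layer translates the (rare) exception into a response:
public IActionResult DoSomething(SomethingRequest request)
{
    try
    {
        // Throws DomainValidationException when the system state disallows the change.
        service.DoSomething(request);
        return Ok();
    }
    catch (DomainValidationException ex)
    {
        return BadRequest(ex.Message);
    }
}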

Returning success/failure

If we don't want to use exceptions, but directly return the success/failure of our operation, then at this point we need to decide:

  • Can I have one or multiple reasons for failure?
  • Do I need contextual information in my message?
  • Do I need to correlate my message to input fields?

I don't really have a go-to answer for any of these; it really depends on the nature of the application. But if I just need a single reason, then I can have a very simple CommandResult:

public class CommandResult  
{
   private CommandResult() { }

   private CommandResult(string failureReason)
   {
       FailureReason = failureReason;
   }

   public string FailureReason { get; }
   public bool IsSuccess => string.IsNullOrEmpty(FailureReason);

   public static CommandResult Success { get; } = new CommandResult();

   public static CommandResult Fail(string reason)
   {
       return new CommandResult(reason);
   }

   public static implicit operator bool(CommandResult result)
   {
       return result.IsSuccess;
   }
}

In the above example, we just allow a single failure reason. And for simplicity's sake, an implicit operator to bool so that we can do things like:

public IActionResult DoSomething(SomethingRequest request) {  
    CommandResult result = service.DoSomething(request);
    return result ? Ok() : BadRequest(result.FailureReason);
}

We can of course make our CommandResult as complex as we need it to be to represent the result of our command, but I like to start simple.
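
For example, if I needed multiple reasons and wanted to correlate each one back to an input field, one possible shape (again just an illustration of the questions above, not code from the post) might be:

using System.Collections.Generic;

public class ValidationFailure
{
    public ValidationFailure(string propertyName, string reason)
    {
        PropertyName = propertyName;
        Reason = reason;
    }

    // Correlates the failure back to a specific input field, when needed.
    public string PropertyName { get; }
    public string Reason { get; }
}

public class CommandResult
{
    private readonly List<ValidationFailure> _failures = new List<ValidationFailure>();

    public IReadOnlyList<ValidationFailure> Failures => _failures;
    public bool IsSuccess => _failures.Count == 0;

    public static CommandResult Success { get; } = new CommandResult();

    public static CommandResult Fail(params ValidationFailure[] failures)
    {
        var result = new CommandResult();
        result._failures.AddRange(failures);
        return result;
    }
}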

Between these two options, which should you use? I've gone back and forth between the two and they both have benefits and drawbacks. At some point it becomes what your team is more comfortable with and what best fits their preferences.

With request and command validation covered, let's turn next to handling the command itself inside our domain.


Swagger, the REST Kryptonite

Tue, 04/04/2017 - 23:08

Swagger, a tool to help design, build, document, and consume RESTful APIs, is ironically kryptonite for building actual RESTful APIs. The battle over the term "REST" is lost; "RESTful" now simply means "an API over HTTP", and 99% of the time it's referring to "RPC over HTTP".

In a post covering the problems with Swagger, the author outlines some familiar issues I've seen with it (and its progenitors such as apiary.io):

  • Using YAML as the new XSD
  • Does not support Hypermedia (!!!!)
  • URI-centric
  • YAML-generation from code

Some of these are well-known issues, but the biggest one for me is the lack of hypermedia support. Those that know REST understand that REST includes a hypertext constraint. No hypermedia - you're not REST.

And that's OK for plenty of situations. I've blogged and given talks in the past about when REST is appropriate. I've shipped actual REST APIs as well as plenty of plain Web APIs. Each has its place, and I still stick to each name simply because it's valuable to distinguish between APIs with hypermedia and APIs without.

When not to use REST

In my client applications, I rarely actually need REST. If my server has only one client, and that client is developed and deployed in lockstep with the server, there's no value in the decoupling that REST brings. Instead, I embrace the client/server coupling and use HTTP merely as the transport for client/server RPC. And that's perfectly fine for a wide variety of scenarios:

  • Single Page Applications (SPAs)
  • JS-heavy applications (but not full-blown SPAs)
  • Hybrid mobile applications
  • Native mobile applications where you force updates based on server

When you have a client and server that you're able to upgrade at the same time, hypermedia can hold you back. When I build clients alongside the server - and with ASP.NET Core, these both live in the exact same project - I can take advantage of this coupling and embrace that knowledge of the server. I even go so far as compiling my templates/views for Angular/Ember on the server side through Razor to get super-intelligent components that know exactly the shape of my DTOs.

In those cases, you're perfectly fine using RPC-over-HTTP, and Swagger.

When to use REST

When you have a client and server that deploy independently of each other, the coupling risk of RPC greatly increases. And in those cases, I start to look at REST as a means of decoupling my client and my server. The hypermedia constraint of REST goes a long way of helping to decouple, to the point where my clients can react to the existence of links, new form elements, labels, translations and more.

REST clients are more difficult to build, but it's a coupling tradeoff. My server and client might be deployed independently in situations like:

  • I don't control server API deployment
  • I don't control client consumer deployment
  • Mobile applications where I can't control upgrades
  • Microservice communication

Since Swagger doesn't support REST, and in fact encourages RPC-over-HTTP APIs, I wouldn't touch it for cases where my client's and server's deployments aren't lockstep.

REST and microservices

This decoupling is especially important for (micro)services, where often you'll see HTTP APIs exposed as a means of exposing service capabilities. Whether or not it's a good idea to expose temporal coupling this way is another question altogether.

If you expose RPC HTTP APIs, you're encouraging a new level of coupling with your microservice, leading down the same monolith path as before but now with 100-10K times more latency.

So if you decide to expose an HTTP API from your microservice for other services to consume, strongly consider REST - then at least you'll only have temporal coupling to worry about, and not the other forms of coupling that come along with RPC.

Documenting REST APIs

One of the big issues I have with Swagger documentation is that it's essentially no different from API documentation for libraries: Java/Ruby/.NET-style documentation with a list of classes, a list of methods, and a list of parameters. When I've had to consume an API that only had Swagger documentation, I was lost. Where do I start? How do I achieve a workflow of activities when I'm only given API endpoints?

My only savior was that I knew the web app also consumed the API, so I could reverse engineer the correct sequence of API calls by following the workflow of the app.

The ironic part was that the web application included links and forms - providing me a guided user experience and workflow for accomplishing a task. I looked at an item, saw links to related actions, followed them, clicked buttons, submitted forms and so on. The Swagger-based "REST" API was missing all of that, and the docs didn't help.

Instead, I would have preferred a markdown document describing the overall workflows, with the responses just including links and forms that I could follow myself. I didn't need a list of API calls; I needed a user experience applied to the API.

Swagger, the tool for building RPC-over-HTTP APIs

Swagger has a rich ecosystem and support for a variety of platforms. If I were building a new SPA, I'd take a look at Swagger, especially for its ability to spit out TypeScript models, clients and the like.

However, if I'm building a protocol that demands decoupling with REST, Swagger would lock me in to a highly coupled RPC-over-HTTP API that would cripple my ability to deliver down the road.
