
Feed aggregator

Refactoring Towards Resilience: Async Workflow Options

Jimmy Bogard - Fri, 02/17/2017 - 23:44

Other posts in this series:

In the last post, we looked at coupling options for the 3rd-party resources we use as part of "button-click" place order, and whether we truly needed that coupling. As a reminder, coupling is neither good nor bad; it's the side effects of coupling on the business that we need to evaluate as desirable or undesirable, based on tradeoffs. We concluded that in terms of what we needed to couple:

  • Stripe: minimize checkout fallout rate; process offline
  • Sendgrid: Send offline
  • RabbitMQ: Send offline

Basically, we determined that none of our actions needs to happen right at button click. This doesn't hold true for every checkout page, but we can make that assumption for this example.

Sidenote - in the real-life version of this, we opted for Stripe - undo, SendGrid - ignore, RabbitMQ - ignore, with offline manual re-sending based on alerts.

With this in mind, we can design a process that manages the side effects of an order placement separate from placing the order itself. This is going to make our process more complicated, but distributed systems tend to be more complicated once we decide we don't want to blissfully ignore failures.

Starting the workflow

Now that we've decided we can process our three resources out-of-band, the next question becomes "how do I signal to that back-end processing to do its work?" This largely depends on what your backend includes, whether you're on-prem or in the cloud, and so on. Azure, for example, offers a number of options for "async processing", including:

  • Azure WebJobs
  • Azure Service Bus
  • Azure Scheduler

In my situation, we weren't deploying to Azure so that wasn't an option for us. For on-prem, we can look at:

From these three, I'm inclined towards Hangfire as it's easy to integrate into my Web API/MVC/ASP.NET Core app. A single background job executing, say, once a minute, can check for any pending messages and send them along:

// Hangfire needs an expression it can serialize, so the actual work lives in a
// method and the recurring job just invokes it on the cron schedule.
RecurringJob.AddOrUpdate("send-outbox-messages", () => SendOutboxMessages(), "*/1 * * * *");

public static void SendOutboxMessages() {
    using (var db = new CartContext()) {
        var unsent = db.OutboxMessages.ToList();
        foreach (var msg in unsent) {
            Bus.Send(msg);
            db.OutboxMessages.Remove(msg);
        }
        db.SaveChanges();
    }
}

Not too complicated, and this will catch any unsent messages from our API that we tried to send after the DB transaction. Once a minute should be frequent enough to pick up unsent messages without the end user feeling like their emails are missing.

Now that we've got a way to kick off our workflow, let's look at our workflow options themselves.

Workflow options

There's still some ordering I need to enforce on my external resource operations, as I don't want emails to be sent without payment success. Additionally, because of the resilience options we saw earlier, I don't really want to couple each operation together. Because of this, I really want to break my workflow into multiple steps:

In our case, we can look at three major workflows: Routing Slip, Saga, and Process Manager. The Process Manager pattern can be further broken down into more detailed patterns: "Choreography" and "Orchestration" from the Microservices book, or, as I detailed them a few years back, "Controller" and "Observer".

With these options in mind, let's look at each in turn to see if they would be appropriate to use for our workflow.

Routing Slip

Routing slip is an interesting pattern that allows each individual step in the process to be decoupled from the overall process flow. With a routing slip, our process would look like:

We start by creating a message that includes a routing slip, along with a mechanism to forward it along:

Bus.Route(msg, new [] {"Stripe", "SendGrid", "RabbitMQ"});

Where "RabbitMQ" is really just and endpoint at this point to publish a "OrderComplete" message.

From the business perspective, does this flow make sense? Going back to our coordination options for Stripe, under what conditions should we flow to the next step? Always? Only on successful payment? How do we handle failures?

The downside of the Routing Slip pattern is that it's quite difficult to introduce logic into our workflow: handling failures, retries, and so on. We've used it successfully in the past, and I even built an extension to NServiceBus for it, but it tends to fall down in our scenario, especially around Stripe, where I might need to issue a refund. Additionally, it's not entirely clear whether we should publish our "Order Complete" message when SendGrid is down. Right now, it doesn't look good.

Saga

In the saga pattern, we have a series of actions and compensations, the canonical example being a travel booking. I'm booking together a flight, car, and hotel, and only if I can book all 3 do I call my travel booking "complete":

(Image source: http://vasters.com/archive/Sagas.html)
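To illustrate the action/compensation pairing, here's a rough sketch of the travel-booking example; the service names and APIs are hypothetical, not from the referenced article:

// Each successful step registers a compensating "undo"; if a later step fails,
// the compensations run in reverse order and the whole booking is rolled back.
public async Task BookTrip(TripRequest request) {
    var compensations = new Stack<Func<Task>>();
    try {
        var flight = await flightService.BookAsync(request);
        compensations.Push(() => flightService.CancelAsync(flight));

        var car = await carService.BookAsync(request);
        compensations.Push(() => carService.CancelAsync(car));

        var hotel = await hotelService.BookAsync(request);
        compensations.Push(() => hotelService.CancelAsync(hotel));
    } catch {
        while (compensations.Count > 0)
            await compensations.Pop()();   // undo everything booked so far
        throw;
    }
}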

Does a Saga make sense in my case? From our coordination examination, we found that only the Stripe resource had a capability of "Undo". I can't "Undo" a SendGrid call, nor can I "Undo" a message published.

For this reason, a Saga doesn't make much sense. Additionally, I don't really need to couple my process together like this. Sagas are great when I have an overall business transaction that I need to decompose into smaller, compensate-friendly transactions. That's clearly not what I have here.

Process Manager - Orchestration/Controller

Our third option is a process manager that acts as a controller, orchestrating a process/workflow from a central point:

Now, this still doesn't make much sense because we're coupling together several of our operations, making an assumption that our actions need to be coordinated in the first place.

So perhaps we should take a step back from our process and examine what actually needs to be coordinated with what!

Process Examination

So far we've looked at process patterns and tried to bolt them onto our steps. Let's flip that, and go back to our original flow. We said our process had 4 main parts:

  1. Save order
  2. Process payment
  3. Email customer
  4. Notify downstream

From a coupling perspective, we said that "Process Payment" must happen only after I save the order. We also said that "Email customer" must happen only if "Process Payment" was successful. Additionally, our "notify downstream" step must only happen if our order successfully processed payment.

Taking a step back, isn't the "email customer" step just a form of notifying downstream systems that the order was created? Can we simply make the email sender another consumer of the OrderCreatedEvent? I think so!
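As a sketch of what that might look like with an NServiceBus-style handler (the handler and email method names are assumptions for illustration):

// Choreography: the email sender is just another subscriber of OrderCreatedEvent,
// with no knowledge of the rest of the workflow.
public class SendConfirmationEmailHandler : IHandleMessages<OrderCreatedEvent> {
    private readonly SendGridService sendGridService;

    public SendConfirmationEmailHandler(SendGridService sendGridService) {
        this.sendGridService = sendGridService;
    }

    public async Task Handle(OrderCreatedEvent message, IMessageHandlerContext context) {
        // A failure here only affects the email; retries are handled by the bus,
        // and other subscribers of the event are unaffected.
        await sendGridService.SendOrderConfirmationEmailAsync(message.Id);
    }
}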

But we still have the issue of payment failure, so our process manager can handle those cases as well. And since we've already made payments asynchronous, we need some way to signal to the help team that an order is in a failed payment state.

With that in mind, our process will be a little of both, orchestration AND choreography:

We treat the email as just another subscriber of our event, and our process manager now is only really concerned about completing the order.

In our final post, we'll look at implementing our process manager using NServiceBus.

Categories: Blogs

New Case Study: Kantar Retail sees major gains in Delivery, Finance, and Human Resources with SAFe

Agile Product Owner - Fri, 02/17/2017 - 16:46

“Our time to market is impressive for an enterprise solution. It’s a competitive advantage in the market that we can make major product changes every two months.”

Cédric Guyot, CEO, Virtual Reality at Kantar Retail

How do you deliver faster, retain top talent, and carve out a competitive advantage—all while spending less? For Kantar Retail Virtual Reality (KRVR), our latest case study, the answer was in deploying SAFe.

Working with clients such as Walmart, Target, and Unilever, KRVR’s virtual reality solutions enable realistic consumer research using virtual stores. When the company set out to develop a new VR solution in 2013, team members at the small company worried that a more formal approach could stifle ideas, but SAFe provided the framework the company needed while preserving creativity.

KRVR began its SAFe implementation with just six team members, and has now grown to 30, covering all aspects of software delivery: QA, Scrum teams, and UI/UX. They didn’t leave it at that. Today, SAFe is also fully implemented at the Team and Program levels.

Practicing SAFe, KRVR brought the latest version of its product to market. Cloud-based Kantar Retail VR Infinity™ puts VR technology directly in the hands of users, and connects teams and customers to understand issues and opportunities quickly. What’s really valuable in this story is that Kantar Retail took the time to measure results across three spectrums: Delivery, Finance, and Human Resources. Their story is one of the better examples that highlights the across-the-board effect that SAFe can have on an enterprise. As you can see here, practicing SAFe made a big difference for Kantar Retail:

Delivery
• Delivery of major releases down from 6 to 2 months
• Time to market decreased from 9 to 3 months
• Reduced time to respond to client feedback from 3 months to 1 month
• Greater predictability, which enhances client satisfaction

Finance
• 27.5% decrease in cost per epic

Human Resources
• Attrition rate decreased from 41% to 28%
• Team productivity increased 36%–43% due to clear job responsibilities and processes
• Easier talent acquisition and retention due to openness and transparency

Given those metrics, top management is fully behind SAFe. SAFe not only elevates internal team satisfaction and hiring; the sales team now brings the company’s time to market into conversations with prospects.

“We’ve adopted an enterprise framework for agility, the SAFe framework. We’ve been more consistent. We’ve been able to articulate a roadmap to the business and to our clients and deliver in time and in full, which is a really positive milestone.”

Eric Radermacher, Product Manager, Virtual Reality at Kantar Retail

Check out the company’s full case study here.

Many thanks to those who helped share KRVR’s SAFe story: Cedric Guyot, CEO; Dmytro Vavriv, PhD, Delivery Manager; Paul Gregory, CTO; Dmytro Tsybulskyi, Release Train Engineer; Eric Radermacher, Product Manager; and Timofey Yevgrashyn, SPC and Agile coach.

Stay SAFe!
—Dean

Categories: Blogs

Why don’t monitoring tools monitor changes?

Xebia Blog - Fri, 02/17/2017 - 13:38

Changes in applications or IT infrastructure can lead to application downtime. This not only hits your revenue, it also has a negative impact on your reputation. Everybody in IT understands the importance of having the right monitoring solutions in place. From an infrastructure – to a business perspective, we rely on monitoring tools to get […]

The post Why don’t monitoring tools monitor changes? appeared first on Xebia Blog.

Categories: Companies

How Agile Creates and Manages WIP Limits

Johanna Rothman - Thu, 02/16/2017 - 18:13

As I’m writing the agile project management book, I’m explaining how agile creates and manages WIP (Work in Progress) Limits.

Iteration-based agile manages WIP by estimating what you can do in an iteration. You might count points. Or you can use my preference, which is to count the (small) stories.

If you use flow-based approaches, you use kanban. You set WIP limits for the columns on your board.

In this image, there’s a limit of eight items in the Ready column, three in Dev and unit test, two in System test. The interesting question is how did this team decide on these limits?

This is a large-ish team. They have eight people aside from the PO: six developers and two testers. They decided to use a reasonable approximation for deciding on WIP limits:

  1. Take the number of people who work on a given column. That’s the numerator. Here, for the Dev and unit test column, it’s 6.
  2. Divide that number by 2. That gives you 3 as the WIP.

This team happens to have a policy of “No one works alone on the code,” so their approximation works well. You might have a product that requires a front-end, middleware, and back-end developer for each feature. You would have a WIP limit of 2 on the Dev and unit test column because you need three people for each feature.

Now, there are only two testers on this team. How did they get to a WIP limit of 2?

The testers do not work together. They each work independently. That means they can each work on a story. They can’t work on more than two stories at a time because they each take one story. This team agreed to work on stories until the story is done. There is no “Stuck” or “Waiting” column.

Every so often, the testers need help from the developers to complete a story. That’s because the developers didn’t realize something, or implemented something not quite right. In that case, the testers walk over to the developers and negotiate when the developer is available to help. Often, it’s right away. Yes, the developers stop what they are doing, because finishing something they thought was done is more important than starting or completing something new.

If you need a Stuck or Waiting column, you might add WIP limits to that column also. Why? Because you don’t want that column to turn into a purgatory/limbo column, where partly finished stories go and never emerge. You might call it Urgent, although I tend to reserve Urgent for support issues.

If you use iteration-based agile, and you have unfinished work at the end of the iteration, consider using a kanban board so you can see where the work is piling up. You might have a problem with “Everyone takes their own story.” (See Board Tyranny in Iterations and Flow.)

If you have read Visualize Your Work So You Can Say No, consider adding WIP limits to your board. You might have noticed I say I don’t use WIP limits on my paper board because the paper, the size of my board, limits my work in progress.

Categories: Blogs

Revisiting the Business Value of Agile Transformation, Part 1

BigVisible Solutions :: An Agile Company - Thu, 02/16/2017 - 18:00
Carrying the Business Case to the Rest of the Enterprise

Since “The Business Value of Agile Transformation” was first published five years ago, the fact that Agile generates far more business value than was previously possible using traditional methods has become generally recognized — at least in IT. However, it is still the case that the financial benefits that businesses actually obtain from their investment in Agile fall far short of what could be achieved.

One reason for this suboptimal performance is that, even as Agile speeds up software development in IT, upstream and downstream business activities remain mired in outdated business practices. If more business value is not delivered to the end-customer, more business value will not be received in exchange by the business. The solution is to apply Agile methods to eliminate bottlenecks that slow cycle time wherever they occur across the broader value stream. In some cases the worst bottlenecks are not in software development but upstream in product management or downstream in staging to production.

Unfortunately, if the scope of an Agile initiative is limited to a subset of the value stream, the opportunity to globally optimize the value stream is compromised. Although the business case for Agile is better understood in IT than ever before, it is still not well understood in other parts of the business (e.g., product marketing or portfolio management). Furthermore the jargon of IT frequently creates the impression in other parts of the business that they have nothing in common with IT. Consequently what might be accepted as a perfectly valid business case for IT may not seem applicable to others. This makes it all that more difficult to develop an agreement to collaboratively optimize the value stream between the IT and non-IT aspects of the business.

There is a straightforward solution to this dilemma. Lose the IT jargon and domain specific references and speak the lingua franca that is understood across the entire enterprise: money and risk. With this in mind, we will now revisit six business benefits expressed in financial and risk-reduction terms that are obtained through Agile methods and are available to all segments of the business.

  1. Reduced failed project risk
  2. Reduced over-budget and late projects
  3. Reduced waste
  4. Improved return through early and frequent releases
  5. Reduced write-off risk
  6. Higher-quality software with fewer defects

 

Benefit 1: Reduced Failed Project Risk

The purpose of project (alternatively: initiative, program, etc.) management is to reduce risk of failure (or if you are a glass-half-full kind of person, increase likelihood of success). Traditionally we begin with a charter that documents what should be delivered, when it should be delivered, and how much it should cost. The second step is to create a project plan that describes in detail the best approach for delivering the desired outcome that optimally balances cost and risk.

Project risk is managed first and foremost by avoiding deviation from the plan and getting quickly back on track when unplanned deviations occur. When, for whatever reason, a change to a specification or some other element of the plan is desired, a formal change process is used to make absolutely clear that the change is authorized and properly incorporated into the project plan. The change process is a tool that project managers use to control risk. Other risk controls include time and budget buffers added to increase the likelihood that the project will end as planned. There are many, many controls designed to reduce different kinds of risks. Although controls reduce local risks, they increase project cost and complexity. Good project managers introduce the controls they need to achieve an acceptable level of risk and no more.

This classical approach to project management works extremely well when it’s very clear what the desired outcome is and how to achieve it. For example, building a spec house from a blueprint using standard materials and experienced tradespeople. This approach to project management fits hand in glove with classical financial governance. Funds are approved if and only if a charter and plan exist that make crystal clear how funds will be spent and the value of the delivered outcome.

However, what happens if you must begin construction without a clear idea of what it is that you are supposed to build and therefore you also are not sure what materials or skills you will need or when you will need them? How can you predict in advance how much it will cost or what it will be worth?

More and more this is the plight that every aspect of the business faces. Increasing business uncertainty means initiatives must begin without clear objectives or the knowledge needed to predict outcomes. Important characteristics of the solution cannot be known in advance and will only emerge as the project unfolds. Unexpected exigencies demand frequent, radical changes to project plans.

Traditional risk controls were designed to keep project plans from changing. Yet we now see that, for projects to succeed, plans must continually adapt to changing conditions. Adding more and more controls won’t reduce fundamental uncertainty, and the increased cost and complexity of additional controls means that, like the heads of the hydra, for every risk you control, new project risks pop up elsewhere in the project. Ironically enough, when conditions are uncertain, traditional risk management often leads to project failure rather than project success. And in today’s business environment when are conditions not uncertain?

No matter if you are in IT, product marketing or some other part of business, Agile mitigates failed project risk by:

  • Reducing risk under conditions of high uncertainty
    • Project Risk Strategy
      The Agile organization can implement a project risk strategy that recognizes that changes to project plans and specifications don’t always increase project risk and often reduce it.
    • Change as Constant
      The organization can reduce the cost of change by allowing, encouraging and anticipating modifications to the specs and plans throughout the project lifecycle.
  • Delivering incremental value
    • Even Failures Yield Value
      Incrementally delivering the outcome over the course of the project allows for value to be received even if the project is ended prematurely. It also allows outcomes to be tested in production to verify their value and improve subsequent efforts.
    • MMF
      By focusing on the minimal marketable feature (MMF), the Agile organization can deliver something of value as early as possible.
    • Highest-Value Features First
      The organization can also focus on delivering the highest-value features first.

Next up is how Agile reduces the incidence of projects delivered late and/or over-budget.

Want to Skip to the Head of the Class?

Download the white paper that started it all. In “The Business Value of Agile Transformation”, SolutionsIQ CEO John Rudd discusses six business benefits that can be measured using traditional financial and production metrics and that are available to any Agile enterprise.

 

The post Revisiting the Business Value of Agile Transformation, Part 1 appeared first on SolutionsIQ.

Categories: Companies

It’s Not All Business – My YouTube Gaming Channel

Derick Bailey - new ThoughtStream - Thu, 02/16/2017 - 14:30

Most of what I post these days is directly related to the “business” of software development – either code, concepts, or just flat-out business stuff with WatchMeCode and other related ventures.

But I’m not all business, all the time.


In fact, I keep a rather high level of play time in my life and being self-employed just makes that easier as I can do pretty much what I want, when I’m at home during the week.

Not the least of which is playing video games.

Last November, it occurred to me that I could record and stream my game playing to YouTube directly from my PS4.

Shortly after, I realized I could download the videos from YouTube, edit them, add voice-over, and upload them again.

Then I found this great little game recording device: the Elgato HD60 S, and it was Christmas… sooo…

All said and done, I’ve been uploading videos to a dedicated YouTube gaming channel for a couple of months now. I’m not doing one-a-day like a lot of the big names are, but I’m trying to do at least one a week. And I’m having a ton of fun doing it!

If you’re interested in gaming, at all, and want to see what I’ve been playing lately, check out the channel.

Code-Ninja Gaming

I’ve been playing and recording a lot of Battlefield 1, but have recently been adding in Titanfall 2 and I just started For Honor. There’s plenty of other games that I play and want to record, as well. And I’ve recently started live-streaming again (now that I have a basic setup that is working with the HD60S).

It’s all just a matter of making time to record and edit… which can be difficult when I spend days and nights working on products, marketing and sales pages, and doing everything I can to be productive.

But having a hobby is important. And I’m having fun with this one!

The post It’s Not All Business – My YouTube Gaming Channel appeared first on DerickBailey.com.

Categories: Blogs

Eating The Dog Food… In Public

Sonar - Thu, 02/16/2017 - 10:55

At SonarSource, we’ve always eaten our own dog food, but that hasn’t always been visible outside the company. I talked about how dogfooding works at SonarSource a couple years ago. Today, the process is much the same, but the visibility is quite different.

When I wrote about this in 2015, we used a private SonarQube server named “Dory” for dogfooding. Every project in the company was analyzed there, and it was Dory’s standards we were held to. Today, that’s still the case, but the server’s no longer private, and it’s no longer named “Dory”.

Today, we use next.sonarqube.com (nee Dory) for dogfooding, and it’s open to the public. That means you can follow along as, for instance, we run new rule implementations against our own code bases before releasing them to you. We also have a set of example projects we run new rules against before they even make it to Next, but seeing a potentially questionable issue raised against someone else’s code hits a different emotional note than seeing it raised against your own.

Of course, that’s the point of dogfooding: that we feel your pain. As an example, take the problem of new issues raised in the leak period on old code. Since we deploy new code analyzer snapshots on Next as often as daily, it means we’re always introducing new rules or improved implementations that find issues they didn’t find before. And that means that we’re always raising new issues on old code. Since we enforce the requirement to have a passing quality gate to release, this causes us the same problem you face when you do a “simple” code analyzer upgrade and suddenly see new issues on old code. Because we do feel that pain, SonarQube 6.3 includes changes to the algorithm that sets issue creation date so that issues from new rules that are raised on old code won’t be raised in the leak period.

Obviously, we’re not just testing rules on Next; we’re also testing changes to SonarQube itself. About once a day, a new version of SonarQube itself is deployed there. In fact, it happens so often, we added a notification block to our wallboard to keep up with it:

By running the latest milestone on our internal instance, each UI change is put through its paces pretty thoroughly. That’s because we all use Next, and no one in this crowd is meek or bashful.

Always running the latest milestone also means that if you decide to look over our shoulders at Next, you’ll get a sneak peek at where the next version is headed. Just don’t be surprised if details change from day to day. Because around here, change is the only constant.

Categories: Open Source

AgileEE Agile Eastern Europe, Kiev, Ukraine, April 7-8 2017

Scrum Expert - Thu, 02/16/2017 - 09:20
The AgileEE Agile Eastern Europe conference is a two-day event dedicated to promoting Agile software development and Scrum project management in Ukraine and the Eastern European countries. It features Agile experts from all over the world, with well-known industry professionals from the US, Canada and Western Europe. In the agenda of the AgileEE Agile Eastern Europe conference you can find topics like “Test-Driven Development effectiveness – beyond anecdotal evidence”, “Focused Agile Coaching: co-create, capture and share your coaching vision”, “Achieving agility in strategy execution”, “Paint out the story point. Agile estimations and metrics in 90 minutes”, “Why do you scale: because you really need or because you don’t know how to organize without scaling?”, “Impact Mapping – creating software that matters”, “Retrospective Doctor: making Retrospectives better & more fun”, “Program/Portfolio Management in the Fields and the Tools to Organize It”, “Agile for Distributed and Remote Teams: Lessons Learned”, “Better planning with #NoEstimates”. Web site: http://agileee.org/ Location for the AgileEE Agile Eastern Europe conference: Ramada Hotel, 103, Stolichnoe Shosse, Kiev, Ukraine
Categories: Communities

AgileIndy, Indianapolis, May 12 2017

Scrum Expert - Thu, 02/16/2017 - 09:00
The AgileIndy Conference is a one-day event that focuses on bringing Agile and Scrum thought leaders and practitioners from around the USA to Indianapolis for a great learning and networking experience. In the agenda of the AgileIndy conference you can find topics like “Agile Cross-Pollination: Growing Agile Adoption at Farm Credit Mid-America”, “Cultivating Agile Requirements”, “Secrets of Agile Estimation: Myths, Math, and Methods”, “Case Study: We Don’t Know Anything About Agile, but Let’s Give it a Try!”, “Framework-Driven Product Management”, “The Show Must Go On: Agile Leadership Lessons Learned from a Life in the Theatre”, “Coaching for Success – Practical Solutions for Building a High-Performance Organization”, “Emotional Intelligence for Agile Teams”. Web site: http://agileindy.org/conference/ Location for the AgileIndy conference: JW Marriott Downtown Indianapolis, 10 S West St, Indianapolis, Indiana 46204
Categories: Communities

Boosting PMO's with Lean Thinking

Leading Answers - Mike Griffiths - Thu, 02/16/2017 - 03:11
Lean Thinking, described and popularized in the book “Lean Thinking” by James Womack and Daniel Jones, is summarized as: “focusing on delivering the most value from a customer perspective, while reducing waste and fully utilizing the skills and knowledge of... Mike Griffiths
Categories: Blogs

Sometimes Docker Is A Pain… Like The Failed Image Builds

Derick Bailey - new ThoughtStream - Wed, 02/15/2017 - 18:10

Working with Docker is generally a good experience for me. I’ve got enough of the tools down, I know most of the command line parameters that I need, and I can look up what I don’t have memorized yet.

But that doesn’t mean Docker is completely painless.

For example, I was recently recording a WatchMeCode episode and I wanted to suggest a simple way to help reduce the need to remember so many command-line options. Well, it didn’t go quite the way I wanted.

If something as “simple” as which command-line options are needed can be a huge source of problems, imagine how many issues can come up when editing a Dockerfile, configuring an image and generally trying to make your Docker image builds work.

The list is endless, and I’ve run into just about every single problem you can imagine.

  • missing “EXPOSE” instructions for TCP/IP ports
  • forgetting to “RUN” a “mkdir” to create the folder for the app
  • telling the Dockerfile to set the “USER” before you have the right folder permissions in place
  • leaving out critical environment variables, such as “NODE_ENV=production”
  • and more!

These problems are the bane of a developer’s life in Docker. They are things we are not accustomed to dealing with. We write code, after all, not server configuration and deployment automation. But here we are, in this new world where we have become a critical part of the infrastructure and deployment process.

And while we truly are better off for this – we have a more consistent development and deployment experience, with far fewer (i.e. zero) “Works On My Machine” problems – we do have a few new frustrations to deal with.

Announcing the Debugging Docker Images webinar

Stop the endless cycles of debugging failed Docker image builds.

No more tweaking a Dockerfile – hoping to get it right – building, testing, watching it fail again, and then repeating the process… over, and over, and over, and over again.

Join me and the other 50+ developers that have registered, so far, for a live webinar on February 27th, 2017 at 20:00 UTC (2PM CST).

I’ll show you the tools and techniques that I use to cut image build cycles down to a single “docker build” in most cases.


Learn About the Webinar and Register to Attend

You’re guaranteed to see some techniques that will help you reduce the debugging cycles for your failed Docker image builds.

The post Sometimes Docker Is A Pain… Like The Failed Image Builds appeared first on DerickBailey.com.

Categories: Blogs

Scaling Scrum with Visual Studio Team Services

TV Agile - Wed, 02/15/2017 - 14:58
Watch and learn how to use Visual Studio Team Services to work with many teams on the same product. This presentation explains how to create a Nexus of 3-9 teams working on a single product with separate team backlogs and separate sprint backlogs while being able to visualise the total amount of work underway. Video […]
Categories: Blogs

Refactoring Towards Resilience: Evaluating Coupling

Jimmy Bogard - Tue, 02/14/2017 - 23:25

Other posts in this series:

So far, we've been looking at our options on how to coordinate various services, using Hohpe as our guide:

  • Ignore
  • Retry
  • Undo
  • Coordinate

These options, valid as they are, make an assumption that we need to coordinate our actions at a single point in time. One thing we haven't looked at is breaking the coupling of our actions, which greatly widens our ability to deal with failures. The types of coupling I encounter in distributed systems include (but are not limited to):

  • Behavioral
  • Temporal
  • Platform
  • Location
  • Process

In our code:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Of the coupling types we see here, the biggest offender is Temporal coupling. As part of placing the order for the customer's cart, we also tie together several other actions at the same time. But do we really need to? Let's look at the three external services we interact with and see if we really need to have these actions happen immediately.

Stripe Temporal Coupling

First up is our call to Stripe. This is a bit of a difficult decision - when the customer places their order, are we expected to process their payment immediately?

This is a tough question, and one that really needs to be answered by the business. When I worked on the cart/checkout team of a Fortune 50 company, we never charged the customer immediately. In fact, we did very little validation beyond basic required fields. Why? Because if anything failed validation, it increased the chance that the customer would abandon the checkout process (we called this the fallout rate). For our team, it made far more sense to process payments offline, and if anything went wrong, we'd just call the customer.

We don't necessarily have to have a black-and-white choice here, either. We could try the payment, and if it fails, mark the order as needing manual processing:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    try {
        var payment = await stripeService.PostPaymentAsync(order);
    } catch (Exception e) {
        Logger.Exception(e, $"Payment failed for order {order.Id}");
        order.MarkAsPaymentFailed();
    }
    if (!order.PaymentFailed) {
        await sendGridService.SendPaymentSuccessEmailAsync(order);
    }
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

There may also be business reasons why we can't process payment immediately. With orders that ship physical goods, we don't charge the customer until we've procured the product and it's ready to ship. Otherwise we might have to deal with refunds if we can't procure the product.

There are also valid business reasons why we'd want to process payments immediately, especially if what you're purchasing is digital (like a software license) or a finite resource, like movie tickets. It's still not a hard-and-fast rule; we can always build business rules around the boundaries (treat them as reservations, and confirm when payment is complete).
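As a rough sketch of that reservation idea (the entity and field names are hypothetical):

// Hold the finite resource when the order is placed, confirm it only once payment
// settles, and let unconfirmed holds expire so inventory frees itself back up.
public class SeatReservation {
    public Guid OrderId { get; set; }
    public DateTime ExpiresAtUtc { get; set; }
    public bool Confirmed { get; set; }
}

public void ConfirmReservation(Guid orderId) {
    var reservation = dbContext.SeatReservations.Single(r => r.OrderId == orderId);
    if (DateTime.UtcNow > reservation.ExpiresAtUtc)
        throw new InvalidOperationException("Reservation expired before payment completed");

    reservation.Confirmed = true;   // payment settled in time
    dbContext.SaveChanges();
}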

Regardless of which direction we go, it's imperative we involve the business in our discussions. We don't have to make things technical, but each option involves a tradeoff that directly affects the business. For our purposes, let's assume we want to process payments offline, and just record the information (naturally doing whatever we need to secure data at rest).

SendGrid Temporal Coupling

Our question now is, when we place an order, do we need to send the confirmation email immediately? Or sometime later?

From the user's perspective, email is already an asynchronous messaging system, so there's already an expectation that the email won't arrive synchronously. We do expect the email to arrive "soon", but typically, there's some sort of delay. How much delay can we handle? That again depends on the transaction, but within a minute or two is my own personal expectation. I've had situations where we intentionally delay the email, so as not to inundate the customer with emails.

We also need to consider what the email needs to be in response to. Does the email get sent as a result of successfully placing an order? Or posting the payment? If it's for posting the payment, we might be able to use Stripe Webhooks to send emails on successful payments. In our case, however, we really want to send the email on successful order placement, not order payment.

Again, this is a business decision about exactly when our email goes out (and how many, for what trigger). The wording of the message depends on the condition, as we might have a message for "thank you for your order" and "there was a problem with your payment".

But regardless, we can decouple our email from our button click.

RabbitMQ Coupling

RabbitMQ is a more difficult question to answer. I generally assume that my broker is up. Just the fact that I'm using messaging here means that I'm temporally decoupled from recipients of the message. And since I'm using an event, I'm behaviorally decoupled from consumers.

However, not all is well and good in our world, because if my database transaction fails, I can't un-send my message. In an on-premise world with high availability, I might opt for 2PC and coordinate, but we've already seen that RabbitMQ doesn't support 2PC. And if I ever go to the cloud, there are all sorts of reasons why I wouldn't want to coordinate in the cloud.

If we can't coordinate, what then? It turns out there's already a well-established pattern for this - the outbox pattern.

In this pattern, instead of sending our messages immediately, we simply record our messages in the same database as our business data, in an "outbox" table:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    dbContext.SaveMessage(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Internally, we'll serialize our message into a simple outbox table:

public class Message {  
    public Guid Id { get; set; }
    public string Destination { get; set; }
    public byte[] Body { get; set; }
}
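A minimal sketch of how SaveMessage might populate that table, assuming a method added to our DbContext and JSON serialization via Newtonsoft.Json (the details are assumptions, not from the post):

// On the CartContext: add the serialized event to the outbox DbSet so it commits
// in the same database transaction as the business data.
public void SaveMessage<T>(T message) {
    Messages.Add(new Message {
        Id = Guid.NewGuid(),
        Destination = typeof(T).FullName,   // or a queue/exchange name
        Body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(message))
    });
}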

We'll serialize our message and store in our outbox, along with the destination. From there, we'll create some offline process that polls our table, sends our message, and deletes the original.

while (true) {  
    var unsentMessages = await dbContext.Messages.ToListAsync();
    var tasks = new List<Task>();
    foreach (var msg in unsentMessages) {
        tasks.Add(bus.SendAsync(msg)
           .ContinueWith(t => dbContext.Messages.Remove(msg)));
    }
    await Task.WhenAll(tasks.ToArray());
    // Persist the removals so successfully sent messages aren't re-sent on the next pass
    await dbContext.SaveChangesAsync();
}

With an outbox in place, we'd still want to de-duplicate our messages, or at the very least, ensure our handlers are idempotent. And if we're using NServiceBus, we can quite simply turn on Outbox as a feature.
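A hedged sketch of what consumer-side de-duplication could look like (the ProcessedMessages table and handler signature are hypothetical):

// Record each processed message id and skip duplicates, so a redelivery from the
// outbox sender (which may send the same message more than once) is harmless.
public async Task Handle(OrderCreatedEvent evt, Guid messageId) {
    if (await dbContext.ProcessedMessages.AnyAsync(m => m.Id == messageId))
        return;   // already handled

    await NotifyDownstreamSystems(evt);

    dbContext.ProcessedMessages.Add(new ProcessedMessage { Id = messageId });
    await dbContext.SaveChangesAsync();
}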

The outbox pattern lets us nearly mimic the 2PC coordination of messages and our database, and since this message is a critical one to send, warrants serious consideration of this approach.

With all these options considered, we're now able to design a solution that properly decouples our different distributed resources, still satisfying the business goals at hand. Our next post - workflow options!

Categories: Blogs

Invert Time Management; Schedule Energy

Agile Complexification Inverter - Tue, 02/14/2017 - 18:42
One cannot manage time. Why we talk as if this were possible might just come down to a billion-dollar self-help industry. Or we could invert the way we talk and think…

Scheduling Your Energy, Not Your Time by Scott Adams. Yes, that Scott Adams!
In that short article, Scott gives you his secret to success - it's basically free. Now, you could go out and buy a book like one of these to get other advice about your time usage. Or you could start by taking his (free) advice... the decision is yours, but it's past time to make it.


  • The Time Of Your Life | RPM Life Management System ($395) by Tony Robbins
  • 100 Time Savers (2016 Edition) [obviously time-sensitive information]
  • Tell Your Time: How to Manage Your Schedule So You Can Live Free by Amy Lynn Andrews


See Also:
I'm Dysfunctional, You're Dysfunctional by Wendy Kaminer. "The book is a strong critique of the self-help movement, and focuses criticism on other books on the subject matter, including topics of codependency and twelve-step programs. The author addresses the social implications of a society engaged in these types of solutions to their problems, and argues that they foster passivity, social isolation, and attitudes contrary to democracy."



Categories: Blogs

Docker Recipes eBook Update: Editor, New Content, and More

Derick Bailey - new ThoughtStream - Tue, 02/14/2017 - 14:30

It’s been a few weeks since the pre-sale of the Docker Recipes for Node.js Development ebook ended, and I haven’t spoken about it much but that doesn’t mean it’s been dormant!

Quite the opposite. In fact, I’m a bit overwhelmed by how quickly things are moving right now.


So I wanted to share an update on what’s been going on, what’s happening next, etc.

The Pre-Sale Numbers

A lot of people have been asking how the pre-sale went.

I had a goal of hitting 100 sales by the end of January, originally. That goal was smashed EASILY in the first week of the pre-sale, which prompted a bonus early recipe to be included in the book!

But the success of the book didn’t stop there.

All said and done, I saw a total of 265 sales of the ebook before the pre-sale period ended!

That number far exceeded my expectations for how well this pre-sale would do.

Thank you to everyone that bought into the pre-sale version! Your trust and willingness to support this book with your wallet is why I do this, and has given me what I need to ensure this book lives up to your standards.

Technical Editing

Probably the most important thing to happen since the pre-sale ended – and as a direct result of how well the pre-sale went… I’ve hired a technical editor!

This is something that I have wanted to do for every single eBook I’ve written, and I am extremely happy to have done so for this book.

I don’t want to call out any names yet (because I haven’t asked permissions, yet), but I can assure you that this editor has the chops that are needed for this book.

They are not only a great editor – having tackled some eBooks that I am more than familiar with, personally – they are also a Docker expert, eBook author and speaker!

I couldn’t have asked for a better match for an editor on this ebook, and I’m happy to say that they are already in the manuscript, tearing things apart and helping me put it back together.

The First Feedback Cycle

As a part of the ebook pre-sale, those that bought will be included in all feedback cycles for the ebook’s recipes.

The first round survey went out shortly after the pre-sale period ended, and I’ve received a TON of feedback already.

There are a wide variety of opinions, experience levels and ideas coming out of the feedback, including the general sentiment that the “Debugging with VS Code” recipe is a fan favorite, so far.

A number of people suggested the 2 debugging recipes may be better off as a single recipe, as well. This is something I had considered, and wasn’t sure of. But I’ll be leaning heavily on my editor to help me make that decision (and others).

I’m also seeing a lot of questions and concerns from Windows-based developers working with Docker. This is a bit of a surprise for me, honestly, but it’s some of the most common feedback I’ve seen so far, and it has me thinking about how I can best address it. There are a few cases where I can add Windows-specific recipes, and some other options for addressing the general concerns even before the recipes begin in the book.

New Recipes Coming Soon

With all of this feedback, and with some additional research that I’ve done on my own, I have a fair plan on the next 3 or 4 recipes (at least) to write for the book.

I’m hoping to have one more update to the book within February, but it may be early March, depending on scheduling for my editor and for myself.

Owners of the book will be getting updates sent out via email, as soon as they are available.

More Docker On The Way!

In the meantime, I’ve got more Docker content coming out – starting with the Debugging Docker Images webinar that I’ll be hosting on February 27th.

I’m expecting the Q&A session at the end of this to help drive some of the content for the book as well. Any and all questions, comments and feedback I get around Docker – from any source and any angle – will fuel the writing of this book.

The post Docker Recipes eBook Update: Editor, New Content, and More appeared first on DerickBailey.com.

Categories: Blogs

Azure Functions imperative bindings

Xebia Blog - Tue, 02/14/2017 - 14:09

The standard input and output bindings in Azure Functions are written in a declarative pattern using the function.json. When defining inputs and outputs declaratively, you do not have the option to change some of the binding properties, like the name, or to produce multiple outputs from one input. An imperative binding can do this for you. In […]
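As a minimal sketch of the idea (not taken from the linked post), an imperative output binding via IBinder lets the blob path be computed at runtime instead of being fixed in function.json:

public static async Task Run(string queueMessage, IBinder binder, TraceWriter log)
{
    // The output path is computed from the input, which a declarative binding can't do.
    var path = $"orders/{queueMessage}.txt";
    using (var writer = await binder.BindAsync<TextWriter>(new BlobAttribute(path)))
    {
        await writer.WriteAsync($"Processed {queueMessage}");
    }
}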

The post Azure Functions imperative bindings appeared first on Xebia Blog.

Categories: Companies

Our top 3 posts of 2016

Growing Agile - Tue, 02/14/2017 - 13:51
Categories: Companies

Created an open source VSTS build & release task for Azure Web App Virtual File System

Xebia Blog - Tue, 02/14/2017 - 09:15

I’ve created a new VSTS Build & Release task to help you interact with the (VFS) Virtual File System API (Part of KUDU API of your Azure Web App). Currently this task can only be used to delete specific files or directories from the web app during your build or release workflow. It will be […]

The post Created an open source VSTS build & release task for Azure Web App Virtual File System appeared first on Xebia Blog.

Categories: Companies

Article 5 in SAFe Implementation Roadmap series: Identify Value Streams and ARTs

Agile Product Owner - Tue, 02/14/2017 - 01:32

Perhaps you’ve worked your way through the first five ‘critical moves’ in the SAFe Implementation Roadmap, and the big moment has arrived. You are now ready to actually implement SAFe. That means it’s time to Identify Value Streams and Agile Release Trains (ARTs), which is the topic of our latest guidance article in the Roadmap series.

If you think of value streams and ARTs as the organizational backbone of a SAFe transformation, you will understand their importance to this journey. Attempting to shortcut or breeze through this step would be the same as putting your foot on the brake at the same time you are trying to accelerate. But get this one right, and you’ll be well on your way to a successful transformation. This is a not-so-subtle hint to strongly encourage you to read this article, especially if you are engaging with SAFe for the first time.

This article covers the key activities involved in identifying value streams and ARTs. They include:

  • Identifying operational value streams
  • Identifying the systems that support the operational value stream
  • Identifying the people in the development value stream
  • Identifying ARTs

To assist you in this effort, the article provides two examples—one from healthcare, and one from financial services—that illustrate how specific elements of value flow to the customer.

Read the full article here.

As always, we welcome your thoughts so if you’d like to provide some feedback on this new series of articles, you’re invited to leave your comments here.

Stay SAFe!
—Dean and the Framework team

Categories: Blogs

The Importance of Right-Sized Epics in SAFe

NetObjectives - Mon, 02/13/2017 - 19:58
The Goal is to Realize Value Quickly. The Scaled Agile Framework® (SAFe) makes very clear the importance of building incrementally and prioritizing according to value (using WSJF, the “weighted shortest job first” formula). The goal is to realize the greatest value over the shortest period of time. This is one of the primary objectives in Lean Thinking and it requires careful thought. It begins...

Categories: Companies
