
Feed aggregator

Boosting PMO's with Lean Thinking

Leading Answers - Mike Griffiths - Thu, 02/16/2017 - 03:11
Lean Thinking, described and popularized in the book “Lean Thinking” by James Womack and Daniel Jones, is summarized as: “focusing on delivering the most value from a customer perspective, while reducing waste and fully utilizing the skills and knowledge of... Mike Griffiths
Categories: Blogs

Cycle Time and Lead Time

Agile Complexification Inverter - Wed, 02/15/2017 - 22:51
Our organization is starting to talk about measuring Cycle Time and Lead Time on our software engineering stories.  It's just an observation, but while few people seem to understand these measurement concepts, everyone is talking about them.  This is a bad omen...  I wish I could help illustrate these terms, because I doubt the measurements will be very accurate if the community doesn't understand when to start the clock and, just as important, when to stop it.

[For the nature of the confusion around these terms, compare and contrast these:  Agile Alliance Glossary; Six Sigma; KanbanTool.com; Lean Glossary.]

The team I'm working with had a toy basketball goal over their Scrum board...  like many cheap toys, the rim broke.  Someone bought a superior mini goal - a nice, heavy, quarter-inch plastic board with a spring-loaded rim, not a cheap toy.  The team used "Command Strips" to mount it, but they didn't hold for long.

The team convinced me there was a correlation between the basketball points on their charts and the team's sprint burndown chart.  Not cause and effect, but correlation; have you ever stopped to think about what that really means?  Could it mean that something in the environment, beyond your ability to measure, is an actual cause of the effect you desire?

I asked the head person at the site for advice: how could we get the goal mounted in our area?  He suggested that we didn't need permission, that the walls of the building were not national treasures - we should just mount it... maybe try some Command Strips.  Yes, great minds...  but doesn't the fear of getting fired for putting holes in the walls scare one away from doing the right thing?  How hard would it be to explain to the Texas Workforce Commission when they ask why you were fired?

The leader understood that if I asked the building facilities manager, I might get denied - but if he asked for a favor... it would get done.  That very day, Mike had the facilities manager looking at the board and the wall (a 15-20 minute conversation).  Are you starting the clock?  It's Dec 7th; lead time starts when Mike agreed to the team's request.

The team was excited; it looked like their desire was going to be granted.  Productivity would flourish again.

Over the next few days I would see various people looking up at the wall and down at the basketball goal on the floor.  There were about four of these meetings, each very short and not always with the same people.  Team members would come up to me afterwards and ask...  "are we still getting the goal?"... "when are they going to bring a drill?"...  "what's taking so long?"

Running the calendar forward a bit... Today the facilities guy showed up with a ladder and drill.  It took about 20 minutes.  Basketball goal mounted (Dec 13th) - which clock did you stop?  All of the clocks stop when the customer (team) has their product (basketball goal) in production (a game commences).

I choose to think of lead time as the time it takes an agreed-upon product or service order to be delivered.  In this example, that starts when Mike, the dude, agreed to help the team get their goal mounted.

In this situation I want to think of cycle time as the time that people worked to produce the product (the mounted goal) - others might call this process time (see Lean Glossary).  So I estimated the time each meeting on the court looking at the unmounted goal took, plus the actual time to mount the goal (100 minutes).  Technically, cycle time is per unit of product - since in the software world we typically measure per story, and each story is somewhat unique, it's not uncommon to drop the per-unit aspect of cycle time.

Lead time:  Dec 13th minus Dec 7th = 5 work days
Cycle time:  hash marks //// (4), one for each meeting at the board to discuss mounting techniques (assume 20 minutes each), plus about 20 minutes with the ladder and drill; total 100 minutes

Lead Time 5 days; Cycle Time 100 minutes
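
As a back-of-the-envelope sketch (C#, using the dates and rough estimates from this story; the year is assumed), the two clocks measure different things - lead time spans the calendar from agreement to delivery, while cycle time sums only the slices of actual work:

using System;

class FlowMetrics {
    static void Main() {
        // Lead time: clock starts when Mike agreed (Dec 7)
        // and stops when the goal is mounted and in use (Dec 13)
        var agreed = new DateTime(2016, 12, 7);
        var delivered = new DateTime(2016, 12, 13);
        var leadTime = delivered - agreed; // 6 calendar days, 5 work days

        // Cycle time: only the time spent working the request -
        // four ~20-minute meetings plus ~20 minutes with ladder and drill
        var workSlices = new[] { 20, 20, 20, 20, 20 };
        var cycleMinutes = 0;
        foreach (var slice in workSlices) cycleMinutes += slice;

        Console.WriteLine($"Lead time: {leadTime.TotalDays} days");   // 6
        Console.WriteLine($"Cycle time: {cycleMinutes} minutes");     // 100
    }
}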
This led to a conversation on the court - under the new goal - with a few team members about what we could do with these measurements.  If one's job were to go around and install basketball goals for every team in the building, a cycle time of 100 minutes with a lead time of 5 days might make the customers a bit unhappy.   Yet for a one-off, unusual, once-a-year sort of request, that ratio of 100 minutes to 5 days was not such a bad response time.  The customers were very happy in the end, although waiting for 5 days did make them a bit edgy.

But now what would happen if we measured our software development cycle time and lead time - would our (business) customers be happy?  Do we produce a once-a-year product? (Well, yes - we've yet to do a release.) Do our lead times have similar ratios to cycle time, with very little value-add time (process time)?

Pondering...

Well, it's January 5th and this example came up in a Scrum Masters' Forum meeting.  After telling the tale, we still did not agree on when to start and stop the two watches for Lead Time and Cycle Time.  Maybe this is much harder than I thought.  Turns out I'm in the minority of opinions - I'm doing it wrong!

Could you help me figure out why my view point is wrong?  Comment below, please.

LeanKit just published an article on this topic - it's very good, but it might also misinterpret cycle time; I see no 'per unit' in their definition of cycle time.  The Lead Time and Cycle Time Debate: When Does the Clock Start? by Tommy Norman.


See Also:

LeanKit published this excellent explanation of their choices in calculating cycle time within their tool:  Kanban Calculations: How to Calculate Cycle Time by Daniel Vacanti.
LeanKit Lead Time Metrics: Why Weekends Matter
Elon Musk turns a tweet into reality in 6 days by Loic Le Meur
The ROI of Multiple Small Releases
https://en.wikipedia.org/wiki/Frank_Bunker_Gilbreth_Sr.
https://en.wikipedia.org/wiki/Cheaper_by_the_Dozen


The Hummingbird Effect: How Galileo Invented Timekeeping and Forever Changed Modern Life
by Maria Popova.  How the invisible hand of the clock powered the Industrial Revolution and sparked the Information Age.
Categories: Blogs

Sometimes Docker Is A Pain… Like The Failed Image Builds

Derick Bailey - new ThoughtStream - Wed, 02/15/2017 - 18:10

Working with Docker is generally a good experience for me. I’ve got enough of the tools down, I know most of the command line parameters that I need, and I can look up what I don’t have memorized yet.

But that doesn’t mean Docker is completely painless.

For example, I was recently recording a WatchMeCode episode and I wanted to suggest a simple way to help reduce the need to remember so many command-line options. Well, it didn’t go quite the way I wanted.

If something as “simple” as which command-line options are needed can be a huge source of problems, imagine how many issues can come up when editing a Dockerfile, configuring an image and generally trying to make your Docker image builds work.

The list is endless, and I’ve run into just about every single problem you can imagine (see the Dockerfile sketch after this list).

  • missing “EXPOSE” instructions for TCP/IP ports
  • forgetting to “RUN” a “mkdir” to create the folder for the app
  • telling the Dockerfile to set the “USER” before you have the right folder permissions in place
  • leaving out critical environment variables, such as “NODE_ENV=production”
  • and more!
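
As a rough illustration (my sketch, not from the post), here is a minimal Node.js Dockerfile that avoids those four mistakes; the base image, paths, and port are hypothetical:

FROM node:6

# Create the app folder (and fix ownership) before switching users
RUN mkdir -p /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app

# Set the USER only after folder permissions are in place
USER node

# Don't leave out critical environment variables
ENV NODE_ENV=production

COPY package.json .
RUN npm install --production
COPY . .

# EXPOSE the TCP/IP port the app listens on
EXPOSE 3000

CMD ["node", "index.js"]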

These problems are the bane of a developer’s life in Docker. They are things we are not accustomed to dealing with. We write code, after all, not server configuration and deployment automation. But here we are, in this new world where we have become a critical part of the infrastructure and deployment process.

And while we truly are better off for this – we have a more consistent development and deployment experience, with far fewer (i.e. zero) “Works On My Machine” problems – we do have a few new frustrations to deal with.

Announcing the Debugging Docker Images webinar

Stop the endless cycles of debugging failed Docker image builds.

No more tweaking a Dockerfile – hoping to get it right – building, testing, watching it fail again, and then repeating the process… over, and over, and over, and over again.

Join me and the other 50+ developers that have registered, so far, for a live webinar on February 27th, 2017 at 20:00 UTC (2PM CST).

I’ll show you the tools and techniques that I use, to cut image build cycles down to a single “docker build” in most cases.


Learn About the Webinar and Register to Attend

You’re guaranteed to see some techniques that will help you reduce the debugging cycles for your failed Docker image builds.

The post Sometimes Docker Is A Pain… Like The Failed Image Builds appeared first on DerickBailey.com.

Categories: Blogs

Scaling Scrum with Visual Studio Team Services

TV Agile - Wed, 02/15/2017 - 14:58
Watch and learn how to use Visual Studio Team Services to work with many teams on the same product. This presentation explains how to create a Nexus of 3-9 teams working on a single product with separate team backlogs and separate sprint backlogs while being able to visualise the total amount of work underway. Video […]
Categories: Blogs

Refactoring Towards Resilience: Evaluating Coupling

Jimmy Bogard - Tue, 02/14/2017 - 23:25

Other posts in this series:

So far, we've been looking at our options on how to coordinate various services, using Hohpe as our guide:

  • Ignore
  • Retry
  • Undo
  • Coordinate

These options, valid as they are, make an assumption that we need to coordinate our actions at a single point in time. One thing we haven't looked at is breaking the coupling of our actions, which greatly widens our ability to deal with failures. The types of coupling I encounter in distributed systems (though not only there) include:

  • Behavioral
  • Temporal
  • Platform
  • Location
  • Process

In our code:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Of the coupling types we see here, the biggest offender is Temporal coupling. As part of placing the order for the customer's cart, we also tie together several other actions at the same time. But do we really need to? Let's look at the three external services we interact with and see if we really need to have these actions happen immediately.

Stripe Temporal Coupling

First up is our call to Stripe. This is a bit of a difficult decision - when the customer places their order, are we expected to process their payment immediately?

This is a tough question, and one that really needs to be answered by the business. When I worked on the cart/checkout team of a Fortune 50 company, we never charged the customer immediately. In fact, we did very little validation beyond basic required fields. Why? Because if anything failed validation, it increased the chance that the customer would abandon the checkout process (we called this the fallout rate). For our team, it made far more sense to process payments offline, and if anything went wrong, we'd just call the customer.

We don't necessarily have to have a black-and-white choice here, either. We could try the payment, and if it fails, mark the order as needing manual processing:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    try {
        var payment = await stripeService.PostPaymentAsync(order);
    } catch (Exception e) {
        Logger.Exception(e, $"Payment failed for order {order.Id}");
        order.MarkAsPaymentFailed();
    }
    if (!order.PaymentFailed) {
        await sendGridService.SendPaymentSuccessEmailAsync(order);
    }
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

There may also be business reasons why we can't process payment immediately. With orders that ship physical goods, we don't charge the customer until we've procured the product and it's ready to ship. Otherwise we might have to deal with refunds if we can't procure the product.

There are also valid business reasons why we'd want to process payments immediately, especially if what you're purchasing is digital (like a software license) or a finite resource, like movie tickets. It's still not a hard-and-fast rule; we can always build business rules around the boundaries (treat them as reservations, and confirm when payment is complete).

Regardless of which direction we go, it's imperative we involve the business in our discussions. We don't have to make things technical, but each option involves a tradeoff that directly affects the business. For our purposes, let's assume we want to process payments offline, and just record the information (naturally doing whatever we need to secure data at rest).

SendGrid Temporal Coupling

Our question now is, when we place an order, do we need to send the confirmation email immediately? Or sometime later?

From the user's perspective, email is already an asynchronous messaging system, so there's already an expectation that the email won't arrive synchronously. We do expect the email to arrive "soon", but typically, there's some sort of delay. How much delay can we handle? That again depends on the transaction, but within a minute or two is my own personal expectation. I've had situations where we intentionally delay the email, as to not inundate the customer with emails.

We also need to consider what the email needs to be in response to. Does the email get sent as a result of successfully placing an order? Or posting the payment? If it's for posting the payment, we might be able to use Stripe Webhooks to send emails on successful payments. In our case, however, we really want to send the email on successful order placement, not order payment.

Again, this is a business decision about exactly when our email goes out (and how many, for what trigger). The wording of the message depends on the condition, as we might have a message for "thank you for your order" and "there was a problem with your payment".

But regardless, we can decouple our email from our button click.

RabbitMQ Coupling

RabbitMQ is a bit of a more difficult question to answer. Typically, I assume that my broker is up. Just the fact that I'm using messaging here means that I'm temporally decoupled from recipients of the message. And since I'm using an event, I'm behaviorally decoupled from consumers.

However, not all is well and good in our world, because if my database transaction fails, I can't un-send my message. In an on-premise world with high availability, I might opt for 2PC and coordinate, but we've already seen that RabbitMQ doesn't support 2PC. And if I ever go to the cloud, there are all sorts of reasons why I wouldn't want to coordinate in the cloud.

If we can't coordinate, what then? It turns out there's already a well-established pattern for this - the outbox pattern.

In this pattern, instead of sending our messages immediately, we simply record our messages in the same database as our business data, in an "outbox" table:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    dbContext.SaveMessage(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

Internally, we'll serialize our message into a simple outbox table:

public class Message {  
    public Guid Id { get; set; }
    public string Destination { get; set; }
    public byte[] Body { get; set; }
}

We'll serialize our message and store it in our outbox, along with the destination. From there, we'll create some offline process that polls our table, sends our messages, and deletes the originals.

while (true) {  
    // Poll the outbox for messages that haven't been dispatched yet
    var unsentMessages = await dbContext.Messages.ToListAsync();
    foreach (var msg in unsentMessages) {
        await bus.SendAsync(msg);
        // Only remove the message once the send has succeeded,
        // so a failed send is retried on the next pass
        dbContext.Messages.Remove(msg);
    }
    await dbContext.SaveChangesAsync();
    // Breathe between polls instead of spinning
    await Task.Delay(TimeSpan.FromSeconds(1));
}

With an outbox in place, we'd still want to de-duplicate our messages, or at the very least, ensure our handlers are idempotent. And if we're using NServiceBus, we can quite simply turn on Outbox as a feature.
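
As a minimal sketch of consumer-side de-duplication (the ProcessedMessages table and handler shape here are hypothetical, not a specific framework's API):

public async Task Handle(OrderCreatedEvent message) {  
    // The dispatcher above may send the same event more than once,
    // so skip ids we've already processed
    if (await dbContext.ProcessedMessages.AnyAsync(m => m.Id == message.Id))
        return;

    await HandleOrderCreated(message);

    // Record the id in the same transaction as the handler's own work
    dbContext.ProcessedMessages.Add(new ProcessedMessage { Id = message.Id });
    await dbContext.SaveChangesAsync();
}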

The outbox pattern lets us nearly mimic the 2PC coordination of messages and our database, and since this message is a critical one to send, it warrants serious consideration of this approach.

With all these options considered, we're now able to design a solution that properly decouples our different distributed resources, still satisfying the business goals at hand. Our next post - workflow options!

Categories: Blogs

Invert Time Management; Schedule Energy

Agile Complexification Inverter - Tue, 02/14/2017 - 18:42
One cannot manage time. That we talk as if this were possible might just explain a billion-dollar self-help industry. Or we could invert the way we talk and think…

Scheduling Your Energy, Not Your Time by Scott Adams. Yes, that Scott Adams!
In that short article Scott gives you his secret to success - it's basically free.  Now you could go out and buy a book like one of these to get other advice about your time usage.  Or you could start by taking his (free) advice ... the decision is yours; but it's past time to make it.


The Time Of Your Life | RPM Life Management System ($395) by Tony Robbins
100 Time Savers (2016 Edition) [obviously time-sensitive information]
Tell Your Time: How to Manage Your Schedule So You Can Live Free by Amy Lynn Andrews


See Also:
I'm Dysfunctional, You're Dysfunctional by Wendy Kaminer.    "The book is a strong critique of the self-help movement, and focuses criticism on other books on the subject matter, including topics of codependency and twelve-step programs. The author addresses the social implications of a society engaged in these types of solutions to their problems, and argues that they foster passivity, social isolation, and attitudes contrary to democracy."



Categories: Blogs

Docker Recipes eBook Update: Editor, New Content, and More

Derick Bailey - new ThoughtStream - Tue, 02/14/2017 - 14:30

It’s been a few weeks since the pre-sale of the Docker Recipes for Node.js Development ebook ended, and while I haven’t spoken about it much, that doesn’t mean it’s been dormant!

Quite the opposite. In fact, I’m a bit overwhelmed by how quickly things are moving right now.

Buried by work

So I wanted to share an update on what’s been going on, what’s happening next, etc.

The Pre-Sale Numbers

A lot of people have been asking how the pre-sale went.

I had a goal of hitting 100 sales by the end of January, originally. That goal was smashed EASILY in the first week of the pre-sale, which prompted a bonus early recipe to be included in the book!

But the success of the book didn’t stop there.

All said and done, I saw a total of 265 sales of the ebook before the pre-sale period ended!

That number far exceeded my expectations for how well this pre-sale would do.

Thank you to everyone that bought into the pre-sale version! Your trust and willingness to support this book with your wallet is why I do this, and has given me what I need to ensure this book lives up to your standards.

Technical Editing

Probably the most important thing to happen since the pre-sale ended – and a direct result of how well the pre-sale went…  I’ve hired a technical editor!

This is something that I have wanted to do for every single eBook I’ve written, and I am extremely happy to have done so for this book.

I don’t want to call out any names yet (because I haven’t asked permission, yet), but I can assure you that this editor has the chops that are needed for this book.

They are not only a great editor – having tackled some eBooks that I am more than familiar with, personally – they are also a Docker expert, eBook author and speaker!

I couldn’t have asked for a better match for an editor on this ebook, and I’m happy to say that they are already in the manuscript, tearing things apart and helping me put it back together.

The First Feedback Cycle

As a part of the ebook pre-sale, those that bought will be included in all feedback cycles for the ebook’s recipes.

The first round survey went out shortly after the pre-sale period ended, and I’ve received a TON of feedback already.

There are a wide variety of opinions, experience levels and ideas coming out of the feedback, including the general sentiment that the “Debugging with VS Code” recipe is a fan favorite, so far.

A number of people also suggested that the 2 debugging recipes may be better off as a single recipe. This is something I had considered, and wasn’t sure of. But I’ll be leaning heavily on my editor to help me make that decision (and others).

I’m also seeing a lot of questions and concerns from Windows-based developers working with Docker. This is a bit of a surprise for me, honestly, but it’s some of the most common feedback I’ve seen so far, and it has me thinking about how I can best address it. There are a few cases where I can add Windows-specific recipes, and some other options for helping to alleviate the general concerns even before the recipes begin in the book.

New Recipes Coming Soon

With all of this feedback, and with some additional research that I’ve done on my own, I have a fair plan on the next 3 or 4 recipes (at least) to write for the book.

I’m hoping to have one more update to the book within February, but it may be early March, depending on scheduling for my editor and for myself.

Owners of the book will be getting updates sent out via email, as soon as they are available.

More Docker On The Way!

In the meantime, I’ve got more Docker content coming out – starting with the Debugging Docker Images webinar that I’ll be hosting on February 27th.

I’m expecting the Q&A session at the end of this to help drive some of the content for the book as well. Any and all questions, comments and feedback I get around Docker – from any source and any angle – will fuel the writing of this book.

The post Docker Recipes eBook Update: Editor, New Content, and More appeared first on DerickBailey.com.

Categories: Blogs

Azure Functions imperative bindings

Xebia Blog - Tue, 02/14/2017 - 14:09
The standard input and output bindings in Azure Functions are written in a declarative pattern using the function.json. When defining inputs and outputs declaratively, you do not have the option to change some of the binding's properties, like the name, or to produce multiple outputs from one input. An imperative binding can do this for you. In
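
As a rough sketch of what that looks like (my illustration, not code from the post) using the Microsoft.Azure.WebJobs IBinder, where the output blob path - hypothetical here - is computed at runtime from the input:

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ImperativeBindingSketch {  
    public static async Task Run(string input, IBinder binder) {
        // Build the output binding at runtime instead of declaring it in function.json
        var attribute = new BlobAttribute($"output/{input}.txt", FileAccess.Write);
        using (var writer = await binder.BindAsync<TextWriter>(attribute)) {
            await writer.WriteAsync(input);
        }
    }
}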
Categories: Companies

Our top 3 posts of 2016

Growing Agile - Tue, 02/14/2017 - 13:51
Categories: Companies

Created an open source VSTS build & release task for Azure Web App Virtual File System

Xebia Blog - Tue, 02/14/2017 - 09:15
I’ve created a new VSTS Build & Release task to help you interact with the Virtual File System (VFS) API (part of the Kudu API of your Azure Web App). Currently this task can only be used to delete specific files or directories from the web app during your build or release workflow. It will be
Categories: Companies

Article 5 in SAFe Implementation Roadmap series: Identify Value Streams and ARTs

Agile Product Owner - Tue, 02/14/2017 - 01:32

Perhaps you’ve worked your way through the first five ‘critical moves’ in the SAFe Implementation Roadmap, and the big moment has arrived. You are now ready to actually implement SAFe. That means it’s time to Identify Value Streams and Agile Release Trains (ARTs), which is the topic of our latest guidance article in the Roadmap series.

If you think of value streams and ARTs as the organizational backbone of a SAFe transformation, you will understand their importance to this journey. Attempting to shortcut or breeze through this step would be the same as putting your foot on the brake at the same time you are trying to accelerate. But get this one right, and you’ll be well on your way to a successful transformation. This is a not-so-subtle hint to strongly encourage you to read this article, especially if you are engaging with SAFe for the first time.

This article covers the key activities involved in identifying value streams and ARTs. They include:

  • Identifying operational value streams
  • Identifying the systems that support the operational value stream
  • Identifying the people in the development value stream
  • Identifying ARTs

To assist you in this effort, the article provides two examples—one from healthcare, and one from financial services—that illustrate how specific elements of value flow to the customer.

Read the full article here.

As always, we welcome your thoughts, so if you’d like to provide some feedback on this new series of articles, you’re invited to leave your comments here.

Stay SAFe!
—Dean and the Framework team

Categories: Blogs

The Importance of Right-Sized Epics in SAFe

NetObjectives - Mon, 02/13/2017 - 19:58
The Goal is to Realize Value Quickly

The Scaled Agile Framework® (SAFe) makes very clear the importance of building incrementally and prioritizing according to value (using WSJF, the “weighted shortest job first” formula). The goal is to realize the greatest value over the shortest period of time. This is one of the primary objectives in Lean-Thinking and it requires careful thought. It begins...
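
For reference, the WSJF formula SAFe uses is (standard SAFe guidance, summarized here rather than quoted from the excerpt):

WSJF = Cost of Delay / Job Duration (job size)
Cost of Delay = User-Business Value + Time Criticality + Risk Reduction & Opportunity Enablement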

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Installing a VSTS Build agent on Mac OSX

Xebia Blog - Mon, 02/13/2017 - 17:36
Today I needed to install the VSTS Build Agent on a Mac Mini in the office so we can build the iPhone app. The great part of the new build infrastructure in VSTS is that it is cross-platform and we CAN do that. Just download the agent and run the command provided in the interface.
Categories: Companies

Make Them Grow

Scrum Expert - Mon, 02/13/2017 - 17:16
There might exist some lonely standalone software developers who create software without any other person involved, but my guess is that there are not many of them. Communication is an essential skill in software development, testing and project management… and life. As feedback is a key communication tool, I was therefore very interested when I stumbled upon this book about feedback written by an Agile coach.

“Make Them Grow by Giving Feedback People Apply” is not a long book to read, but it deals in a very pragmatic way with the important topic of feedback, both about positive and negative behaviors. As being Agile seems the “way to go” for most software development teams, the values of courage, trust and honesty will all be supported by putting into practice the advice contained in this book. After reading it, I think that it goes way beyond the software development part of our life. You can give it also to your partners and teenagers. You will have a good communication moment by just starting a conversation about it. Which brings us back to Agile, where conversations are one of the main tools.

Reference: Make Them Grow by Giving Feedback People Apply, Michal Nowostawski, CreateSpace, ISBN-13: 978-1539744153

Quotes: “Think about the results you can achieve by giving people a simple feedback on their contribution to the development of the company. It is relatively easy to do, but if you want to do it even more effectively, read this book to the end.” [...]
Categories: Communities

We Need a Map

Agile Tools - Mon, 02/13/2017 - 05:57


“I have an existential map. It has ‘You are here’ written all over it.”

-Steven Wright

I happened to be in a cabin one evening after a long day of hunting. The wood stove was blazing and we were all chilling after dinner. The guides were off to one side hanging out together sharing their experiences from the day. It was fun to watch them as they described where they had been and what they had seen. The dialog would go something like this:

“We were down trail X off of the old logging road Y near the fork in the road and we didn’t see anything”
“You mean down past road Z near the bridge, right?”
“No, no, no. We were down logging road Y.”

Around and around they went.

At this point in the conversation they usually resort to hand gestures to further supplement the conversation. This goes on for a while and pretty soon it’s hard to tell whether you are looking at guides trying to tell each other where they were, or perhaps you are looking at a pair of fighter pilots describing their latest dogfight. There are hands waving wildly in the air. It’s Top Gun in the hunting cabin. Voices are raised, and expressions are animated.

And still they can’t seem to simply tell each other where they were. I’m watching all of this and I’m thinking, “These guys need a map.”

I’d laugh, but I see it all the time in software teams. If I’m honest, I catch myself doing it all the time. It seems that even with all the software that we have today, visualizing what we do and sharing it with each other is still a significant problem for us. How often do you find teams working together and trying to describe something – hands waving in the air and all. I guess we’re all future fighter pilots.

Like I said, I think sometimes what we really need is a map. I challenge you to go take a look at the walls in a second-grade classroom. You’ll see nothing but maps. Maps of the US. Maps of the parts of a sentence. Maps of numbers. Everywhere there are maps of the knowledge that second graders consider important. What you see on those walls is the cartography of the eight-year-old mind.

Now go back to your office and look at the walls. What do you see? I’m betting that the walls are completely bare. Maybe you have some of those crappy motivational posters. If you are really lucky there is a map to the fire escape. There are no maps in the office of the typical knowledge worker. Why is that?

All too often we are like the guides in the cabin. We’re struggling to communicate simple concepts with each other – playing Top Gun. Maybe it’s time for a map.


Filed under: Agile, Teams Tagged: cartography, communication, knowledge work, maps, Software, Teams
Categories: Blogs

ReactJS/Material-UI: Cannot resolve module ‘material-ui/lib/’

Mark Needham - Mon, 02/13/2017 - 00:43

I’ve been playing around with ReactJS and the Material-UI library over the weekend and ran into this error while trying to follow one of the examples from the demo application:

ERROR in ./src/app/modules/Foo.js
Module not found: Error: Cannot resolve module 'material-ui/lib/Subheader' in /Users/markneedham/neo/reactjs-test/src/app/modules
 @ ./src/app/modules/Foo.js 13:17-53
webpack: Failed to compile.

This was the component code:

import React from 'react'
import Subheader from 'material-ui/lib/Subheader'

export default React.createClass({
  render() {
    return <Subheader>Some Text</Subheader>
  }
})

which is then rendered like this:

import { render } from 'react-dom'
import Foo from './modules/Foo'
render(<Foo />, document.getElementById("app"))

I came across this post on Stack Overflow which seemed to describe a similar issue and led me to realise that I was actually on the wrong version of the documentation. I’m using version 0.16.7 but the demo I copied from is for version 0.15.0-alpha.1!

This is the component code that we actually want:

import React from 'react'
import Subheader from 'material-ui/Subheader'

export default React.createClass({
  render() {
    return <Subheader>Some Text</Subheader>
  }
})

And that’s all I had to change. There are several other components that you’ll see the same error for and it looks like the change was made between the 0.14.x and 0.15.x series of the library.

The post ReactJS/Material-UI: Cannot resolve module ‘material-ui/lib/’ appeared first on Mark Needham.

Categories: Blogs

Force uninstall Visual Studio 2017 Release candidates

Xebia Blog - Sun, 02/12/2017 - 12:56
If you, like me, are stuck trying to upgrade Visual Studio 2017, then you may only get unblocked by removing everything and starting afresh. Since Visual Studio 2017 is still in Release Candidate and not final, this is something we may have to deal with from time to time. But when the "uninstall" button in
Categories: Companies

Add :8080 to your TFS 2017 bindings after upgrading to SSL

Xebia Blog - Sun, 02/12/2017 - 12:40
Because TFS 2017 allows authentication with Personal Access Tokens (PAT), it's recommended to upgrade to SSL if you were still on port 80. The installer will even help with the configuration and can add a redirect from port :80 to :443. It doesn't add a redirect from port :8080 though, so your users may
Categories: Companies

Refactoring Towards Resilience: Evaluating RabbitMQ Options

Jimmy Bogard - Fri, 02/10/2017 - 19:50

Other posts in this series:

In the last post, we looked at dealing with an API in SendGrid that basically only allows at-most-once calls. We can't undo anything, and we can't retry anything. We're going to find some similar issues with RabbitMQ (although it's not much different than other messaging systems).

RabbitMQ, like all queuing systems I can think of, offers a wide variety of reliability modes. In general, I try to make my message handlers idempotent, as it enables so many more options upstream. I also don't really trust anyone sending me messages, so anything I can do to ensure MY system stays consistent, despite what I might get sent, is in my best interest.

Looking back at our original code:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    return RedirectToAction("Success");
}

We can see that if anything fails after the "bus.Publish" line, we don't really know what happened to our message. Did it get sent? Did it not? It's hard to tell, but going to our picture of our transaction model:

Transaction flow

And, as a reminder, the options we have to consider:

Coordination Options

Let's take a look at our options dealing with failures.

Ignore

Similar to our SendGrid solution, we could just ignore any failures with connecting to our broker:

public async Task<ActionResult> ProcessPayment(CartModel model) {  
    var customer = await dbContext.Customers.FindAsync(model.CustomerId);
    var order = await CreateOrder(customer, model);
    var payment = await stripeService.PostPaymentAsync(order);
    await sendGridService.SendPaymentSuccessEmailAsync(order);
    try {
        await bus.Publish(new OrderCreatedEvent { Id = order.Id });
    } catch (Exception e) {
        Logger.Exception(e, $"Failed to send order created event for order {order.Id}");
    }
    return RedirectToAction("Success");
}

This approach would shield us from connectivity failures with RabbitMQ, but we'd still need some sort of process to detect these failures and retry those sends later on. One way to do this would be simply to flag our orders:

} catch (Exception e) {
    order.NeedsOrderCreatedEventRaised = true;
    Logger.Exception(e, $"Failed to send order created event for order {order.Id}");
}    

It's not a very elegant solution, as I'd have to create flags for every single kind of message I send. Additionally, it ignores the opposite failure: a database transaction that rolls back after my message has already been sent. In that case, my message still goes out, and consumers could get events for things that didn't actually happen! There are other ways to fix this - but for now, let's cover our other options.

Retry

Retries are interesting in RabbitMQ because although it's fairly easy to retry my message on my side, there's no guarantee that consumers can support a message if it came in twice. However, in my applications, I try as much as possible to make my message consumers idempotent. It makes life so much easier, and allows so many more options, if I can retry my message.

Since my original message includes the unique order ID, a natural correlation identifier, consumers can have an easy way of ensuring their operations are idempotent as well.

The mechanics of a retry could be similar to our example above - mark the order as needing the event raised at some later point in time, retry in the same block, or include a resiliency layer on top of sending (sketched below).
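
As a minimal sketch of such a resiliency layer (hand-rolled here; a library such as Polly could fill the same role), retrying the publish with a growing backoff:

public async Task PublishWithRetry<T>(T message, int maxAttempts = 3) {  
    for (var attempt = 1; ; attempt++) {
        try {
            await bus.Publish(message);
            return;
        } catch (Exception e) when (attempt < maxAttempts) {
            Logger.Exception(e, $"Publish attempt {attempt} failed; retrying");
            // Back off a bit longer after each failed attempt
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}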

Undo

RabbitMQ doesn't support any sort of "undo" natively, so if we wanted to do this ourselves, we'd have to have some sort of compensating event published. Perhaps an event, "OrderNotActuallyCreatedJustKiddingAboutBefore"?

Perhaps not.

Coordinate

RabbitMQ does not natively support any sort of two-phase commit, so coordination is out.

Next steps

Now that we've examined all of our options around the various services our application integrates with, I want to evaluate each service in terms of the coupling we have today, and determine if we truly need that level of coupling.

Categories: Blogs

Why I Use a Paper Kanban Board

Johanna Rothman - Fri, 02/10/2017 - 17:37

My most recent post about how to Visualize Your Work So You Can Say No, showing a couple of different kanbans, was quite popular. Several people asked me how I use my personal kanban.

I use paper. Here’s why I don’t use a tool:

  • I am too likely to put too much into a tool: all of this week’s work, next week’s work, next month’s and next year’s work, even though I’m not going to think about anything that far out. Paper helps me contain my To Do list.
  • When I collaborate with others, they want to break down the stories (large as they may be) into tasks. No!! I can’t take tasks. I need to see the value. See my post about From Tasks to Stories with Value.
  • I change my board, depending on what’s going on. I often have a week-based kanban because I retrospect at the end of the week. I often—not always—have a “today” column.

This is what my board looks like this week. It might look like this for a while because I’m trying to finish a book. (I have several more books planned, so yes, I will have a bunch of work “in progress” for the next several months/rest of the year.)

I have several chapters in This Week. I have two chapters in “Today.” That helps me focus on the work I want to finish this week and today. As a technical editor for agileconnection.com and as a shepherd for XP 2017, I have work in “Waiting to Discuss.” I will discuss other people’s writing.

Earlier this week, I had interactions with a potential client, so that work is now in Waiting for Feedback. Sometimes, I have book chapters there, if I need to discuss what the heck goes in there and doesn’t go in a chapter.

I haven’t finished much yet this week. I am quite close on two chapters, which I expect to finish today. My acceptance criterion is “ready for my editor to read.” I do not expect them to be done, as in publishable. I’ll do that after I receive editorial feedback.

Could I do this on an electronic board? Of course.

However, I limit my WIP by staying with paper. I can’t add any more to the paper.

Should I have WIP limits? Maybe. If I worked on a project, I would definitely have WIP limits. However, the fact that I use paper limits what I can add to my board. If I notice I have work building up in any of the Waiting columns, I can ask myself: What can I do to move those puppies to Done before I take something off the Today or To Do columns?

I’ve been using personal kanban inside one-week iterations since I read Benson’s and Barry’s book, Personal Kanban. (See my book review, Book Review: Personal Kanban: Mapping Work | Navigating Life.)

I recommend it for a job search. (See Manage Your Job Search and Personal Kanban for Your Job Hunt.)

You should use whatever you want as a tool. Me, I’m sticking with paper for now. I don’t measure my cycle time or lead time, which are good reasons to use an electronic board. I also don’t measure my cumulative flow, which is another reason to use an electronic board.

I do recommend that until you know what your flow is, you use paper. And, if you realize you need to change your flow, return to paper until you really understand your flow. You don’t need a gazillion columns, which is easy to do in a tool. Use the fewest number of columns that help you achieve control over your work and provide you the information you need.

Question for you: Do you want to see a parking lot board? I have images of these in Manage Your Project Portfolio, but you might want to see my parking lot board. Let me know.

Categories: Blogs
