
Feed aggregator

Middle Management and Lean-Agile: A Conversation with Dr. Tom Grant

NetObjectives - Thu, 07/21/2016 - 17:05
Dr. Tom Grant and Jim Trott discuss the implications of Lean and Agile software development for middle management. The role changes, with different skills and behaviors required. Managers transition from being directive to enabling teams to be self-organizing. There are changes in performance evaluations, metrics, and power...

Categories: Companies

Making Release Frictionless, a Business Decision, Part 1

Johanna Rothman - Thu, 07/21/2016 - 16:40

Would you like to release your product at any time? I like it when releases are a business decision, not a result of blood, sweat, and tears. It’s possible, and it might not be easy for you. Here are some stories that show how I did it, long ago and more recently.

Story 1: Many years ago, I was a developer on a moderately complex system. There were three of us working together. We used RCS (yes, it was in the ’80s or something like that). I hated that system. Maybe it was our installation of it. I don’t know. All I know is that it was too easy to lock each other out, and not be able to do a darn thing. My approach was to make sure I could check in my work in as few files as possible (Single Responsibility Principle, although I didn’t know it at the time), and to work on small chunks.

I checked in every day at least before I went to lunch, once in the middle of the afternoon, and before I left for the day. I did not do test-first development, and I didn’t check my tests in at the time. It took me a while to learn that lesson. I only checked in working code—at least, it worked on my machine.

We built almost every day. (No, we hadn’t learned that lesson either.) We could release at least once a week, closer to twice a week. Not frictionless, but close enough for our needs.

Story 2: I learned some lessons, and a few years later, I had graduated to SCCS. I still didn’t like it. Merging was not possible for us, so we each worked on our own small stuff. I still worked on small chunks and checked in at least three times a day. This time, I was smarter, and checked in my tests as I wrote code. I still wrote code first and tests second. However, I worked in really small chunks (small functions and the tests that went with them) and checked them in as a unit. The only time I didn’t do that is if it was lunch or the end of the day. If I was done with code but not tests, I checked in anyway. (No, I was not perfect.) We all had a policy of checking in all our code every day. That way, someone else could take over if one of us got sick.

Each of us did the same thing. This time, we built whenever we wanted a new system. Often, it was a couple of times a day. We told each other, “Don’t go there. That part’s not done, but it shouldn’t break anything.” We had internal releases at least once a day. We released as a demo once a week to our manager.

After that, I worked at a couple of places with home-grown version control systems that looked a lot like Subversion does now. That was in the late ’80s. I became a project manager and program manager.

Story 3: I was a program manager for a 9-team software program. We’d had trouble in the past getting to the point where we could release. I asked teams to do these things: Work towards a program-wide deliverable (release) every month, and use continuous integration. I said, “I want you to check everything in every day and make sure we always have a working build. I want to be able to see the build work every morning when I arrive.” Seven teams said yes. Two teams said no. I explained to the teams they could work in any way they wanted, as long as they could integrate within 24 hours of seeing everyone else’s code. “No problem, JR. We know what we’re doing.”

Well, those two teams didn’t deliver their parts at the first month milestone. They were pissed when I explained they could not work on any more features until they integrated what they had. Until they had everything working, no new features. (I was pissed, too.)

It took them almost three weeks to integrate their four weeks of work. They finally asked for help and a couple of other guys worked with the teams to untangle their code and check everything in.

I learned the value of continuous integration early. Mostly because I was way too lazy (forgetful?, not smart enough?) to be able to keep the details of an entire system in my head for an entire project. I know people who can. I cannot. I used to think it was one of my failings. I now realize many people only think they can keep all the details. They can’t either.

Here’s the technical part of how I got to frictionless releases:

  1. Make the thing you work on small. If you use stories, make the story a one-day or smaller story. I don’t care if the entire team works on it or one person works on it (well, I do care, and that’s a topic for another post), but being able to finish something of value in one day means you can descend into it. You finish it. You come up for air/more work and descend again. You don’t have to remember a ton of stuff related but not directly a part of this feature.
  2. Use continuous integration. Check in all the time. Now that I write books using Subversion, I check in whenever I have either several paras/one chunk, or it’s been an hour. I check that the book builds and I fix problems right away, when the work is fresh in my mind. It’s one of the ways I can write fast and write well. Our version control systems are much more sophisticated than the ones I used in the early days. I’m not sure I buy automated merge. I prefer to keep the stories small and cohesive. (See this post on curlicue features. Avoid those by managing to implement by feature.)
  3. Check in all the associated data. I check in automated tests and test data when I write code. I check in bibliographic references when I write books. If you need something else with your work product, do it at the time you create. If I was a developer now, I would check in all my unit tests when I check in the code. If I was really smart, I might even check in the tests first, to do TDD. (TDD helps people design, not test.) If I was a tester, I would definitely check in all the automated tests as soon as possible. I could then ask the developers to run those tests to make sure they didn’t make a mistake. I could do the hard-to-define and necessary exploratory testing. (Yes, I did this as a tester.)

Frictionless releases are not just technical. You have to know what done means for a release. That’s why I started using release criteria back in the 70s. I’ll write a part 2 about release criteria.

Categories: Blogs

Agile Aus – Towards an Agile enterprise

Growing Agile - Thu, 07/21/2016 - 14:47
The Agile Australia 2016 conference had some great keynote talks, which is surprising because usually we find keynotes a bit dull (too much sitting and listening). Over the next few months we will blog about the talks we attended at Agile Australia. The first keynote was by Jeff Smith from IBM who shared his thoughts on […]
Categories: Companies

Our Answer To the Alert Storm: Introducing Team View Alerts

Xebia Blog - Thu, 07/21/2016 - 12:39
As a Dev or Ops it’s hard to focus on the things that really matter. Applications, systems, tools and other environments are generating notifications at a frequency and amount greater than you are able to cope with. It's a problem for every Dev and Ops professional. Alerts are used to identify trends, spikes or dips
Categories: Companies

Neo4j: Cypher – Detecting duplicates using relationships

Mark Needham - Wed, 07/20/2016 - 19:32

I’ve been building a graph of computer science papers on and off for a couple of months and now that I’ve got a few thousand loaded in I realised that there are quite a few duplicates.

They’re not duplicates in the sense that there are multiple entries with the same identifier; rather, they have different identifiers but seem to be the same paper!

e.g. there are a couple of papers titled ‘Authentication in the Taos operating system’:

[Screenshots: the two “Authentication in the Taos operating system” entries]

This is the same paper published in two different journals as far as I can tell.

Now in this case it’s quite easy to do a string similarity comparison of the titles of these papers and realise that they’re identical. I’ve previously used the excellent dedupe library to do this, and there’s also an excellent talk from Berlin Buzzwords 2014 where the author uses locality-sensitive hashing to achieve a similar outcome.

However, I was curious whether I could use any of the relationships these papers have to detect duplicates rather than just relying on string matching.

This is what the graph looks like:

[Diagram: the graph model of papers, their references, and their authors]

We’ll start by writing a query to see how many common references the different Taos papers have:

MATCH (r:Resource {id: "168640"})-[:REFERENCES]->(other)
WITH r, COLLECT(other) as myReferences
UNWIND myReferences AS reference
OPTIONAL MATCH path = (other)-[:REFERENCES]->(reference)
WITH other, COUNT(path) AS otherReferences, SIZE(myReferences) AS myReferences
WITH other, 1.0 * otherReferences / myReferences AS similarity WHERE similarity > 0.5
RETURN other.id, other.title, similarity
ORDER BY similarity DESC
│other.id│other.title                                │similarity│
│168640  │Authentication in the Taos operating system│1         │
│174614  │Authentication in the Taos operating system│1         │

This query:

  • picks one of the Taos papers and finds its references
  • finds other papers which reference those same papers
  • calculates a similarity score based on how many common references they have
  • returns papers that have more than 50% of the same references with the most similar ones at the top
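The similarity score here is just the fraction of the starting paper’s references that the candidate paper also cites. As a rough illustration of that arithmetic outside the database (the reference IDs below are made up, not taken from the graph), in Python:

```python
def reference_similarity(my_references, other_references):
    """Fraction of 'my' references that the other paper also cites.

    Mirrors the query's 1.0 * otherReferences / myReferences calculation.
    """
    mine, others = set(my_references), set(other_references)
    if not mine:
        return 0.0
    return len(mine & others) / len(mine)

# Hypothetical papers: the candidate shares five of the six references
taos_paper = ["r1", "r2", "r3", "r4", "r5", "r6"]
candidate = ["r1", "r2", "r3", "r4", "r5", "r99"]
print(reference_similarity(taos_paper, candidate))  # 0.8333333333333334
```

Note that the score is asymmetric: it divides by the size of the starting paper’s reference list, so swapping the arguments can give a different number.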

I tried it with other papers to see how it fared:

Performance of Firefly RPC

│other.id│other.title                                                     │similarity        │
│74859   │Performance of Firefly RPC                                      │1                 │
│77653   │Performance of the Firefly RPC                                  │0.8333333333333334│
│110815  │The X-Kernel: An Architecture for Implementing Network Protocols│0.6666666666666666│
│96281   │Experiences with the Amoeba distributed operating system        │0.6666666666666666│
│74861   │Lightweight remote procedure call                               │0.6666666666666666│
│106985  │The interaction of architecture and operating system design     │0.6666666666666666│
│77650   │Lightweight remote procedure call                               │0.6666666666666666│

Authentication in distributed systems: theory and practice

│other.id│other.title                                               │similarity        │
│121160  │Authentication in distributed systems: theory and practice│1                 │
│138874  │Authentication in distributed systems: theory and practice│0.9090909090909091│

Sadly it’s not as simple as finding 100% matches on references! I expect the later revisions of a paper added more content and therefore additional references.

What if we look for author similarity as well?

MATCH (r:Resource {id: "121160"})-[:REFERENCES]->(other)
WITH r, COLLECT(other) as myReferences
UNWIND myReferences AS reference
OPTIONAL MATCH path = (other)-[:REFERENCES]->(reference)
WITH r, other, COUNT(path) AS otherReferences, SIZE(myReferences) AS myReferences
WITH r, other, 1.0 * otherReferences / myReferences AS referenceSimilarity
WHERE referenceSimilarity > 0.5
MATCH (r)<-[:AUTHORED]-(author)
WITH r, other, referenceSimilarity, COLLECT(author) AS myAuthors
UNWIND myAuthors AS author
OPTIONAL MATCH path = (other)<-[:AUTHORED]-(author)
WITH other, referenceSimilarity, COUNT(path) AS otherAuthors, SIZE(myAuthors) AS myAuthors
WITH other, referenceSimilarity, 1.0 * otherAuthors / myAuthors AS authorSimilarity
WHERE authorSimilarity > 0.5
RETURN other.id, other.title, referenceSimilarity, authorSimilarity
ORDER BY (referenceSimilarity + authorSimilarity) DESC
│other.id│other.title                                               │referenceSimilarity│authorSimilarity│
│121160  │Authentication in distributed systems: theory and practice│1                  │1               │
│138874  │Authentication in distributed systems: theory and practice│0.9090909090909091 │1               │
│other.id│other.title                   │referenceSimilarity│authorSimilarity│
│74859   │Performance of Firefly RPC    │1                  │1               │
│77653   │Performance of the Firefly RPC│0.8333333333333334 │1               │

I’m sure I could find some other papers where neither of these similarities worked well but it’s an interesting start.

I think the next step is to build up a training set of pairs of documents that are and aren’t similar to each other. We could then train a classifier to determine whether two documents are identical.
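Until such a classifier exists, a naive stand-in is a hand-picked threshold over the two scores. A minimal Python sketch (the 1.5 cut-off is an arbitrary assumption, not something learned from labelled data):

```python
def looks_like_duplicate(reference_similarity, author_similarity, threshold=1.5):
    """Flag a candidate pair when the combined similarity clears a cut-off.

    A rule of thumb only; a trained classifier would learn the decision
    boundary from labelled duplicate/non-duplicate pairs instead.
    """
    return reference_similarity + author_similarity >= threshold

# The 'Performance of the Firefly RPC' pair from above clears the bar...
print(looks_like_duplicate(0.8333333333333334, 1.0))  # True
# ...while a paper that merely shares some references does not
print(looks_like_duplicate(0.66, 0.0))  # False
```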

But that’s for another day!

Categories: Blogs

#SIQRoadtrip – Coming to a City Near You!

BigVisible Solutions :: An Agile Company - Wed, 07/20/2016 - 19:00

The thing about being the largest pure-play Agile consultancy in the world is that our coaches, consultants and trainers are all over the place! At times it’s hard to maintain the ties that bind and to fuel the passions that keep our people burning bright. One of the ways that we keep strong connections with the hundreds of Agilists and supporting individuals that make up our community is company gatherings. Gatherings are great for bringing everyone together to mend broken fences and reconnect with like-minded individuals, but this year we’re trying something completely different:


SolutionsIQ Chief Cat-Herder Jeff Leach and his wife Sheri have hit the road in their RV, bringing smiles and good feels to our hardworking consultants, trainers and coaches across the country! Starting in Redmond, Washington, the two drove all the way to the tour’s first city: Chicago, IL! But first they made a pit stop at Mount Rushmore for a selfie.

Wrigley Field was the setting for our first night of fun, where the Chicago Cubs and the New York Mets went at it. Spoiler alert: the Cubs won, 5 – 1! Way to defend your home turf, Cubs! The stadium was packed, giving our SolutionsIQ staff, family and friends a nice, warm reception.


From Chicago, Jeff and Sheri will head over to Bloomington, IL; Washington, D.C., and McLean, VA; Charlotte, NC (where we will check in on our East Coast headquarters); Memphis, TN; and finally Kansas City, MO. While primarily serving to keep our community strong and healthy, the road trip also provides us an opportunity to check in with some of our key clients in the Midwest and East Coast.


Follow all the action with the hashtag #SIQRoadtrip! And while you’re at it, check out our Facebook album for this memorable event.

The post #SIQRoadtrip – Coming to a City Near You! appeared first on SolutionsIQ.

Categories: Companies

What I Did Not Learn At ScrumMaster Training

Leading Agile - Mike Cottmeyer - Wed, 07/20/2016 - 18:38

I was perplexed by the difference between the velocities of two Agile (Scrum) teams. When I examined their metrics, I noticed they produced wildly different outputs. The first, let’s call them Team Alpha, produced excellent work, always completing on time and meeting their targets. The other, Team Beta, was late target after target, and their output was disappointing at best.

Everyone involved was competent, highly motivated and each of them had worked together well on previous projects. There was nothing abnormal about their goals, and the tasks assigned to the two teams were roughly equivalent in complexity.

Yet their velocity was strikingly different.

It was time to take a look and see what was happening and observe each of the teams in action.

A Tale of Two Scrum Teams

I began with Team Alpha and found they were working well together. The team operated smoothly and efficiently, showing respect for one another and following each of the ceremonies as expected. The ScrumMaster expertly handled disruptions and disagreements, working with those involved to effortlessly get the team back on track. They had become a close team over the past six sprints. I also noticed that these team members were not rotated in and out of the group.

On the other hand, when I observed Team Beta, I found a different environment entirely. The members of the team had disagreements, as could be expected, but those were allowed to fester and grow into arguments and heated discussions. In this case, the ScrumMaster was unable to control the group, which quickly led to a low velocity and reduced quality. I also noticed that the team was constantly shifting team members and only three team members had been with the team over the last six sprints. The team was continuously changing.

It didn’t take me long to realize that the Beta ScrumMaster had no formal understanding of how group dynamics work, that is, how to effectively guide a team through its various stages of development. This explained the lack of progress and the inconsistent results. The people on the team had not learned how to work together due to the ScrumMaster’s lack of knowledge about group development.

When I was a young man, I remember learning about group development in the Boy Scouts of America. The BSA teaches the Tuckman model: forming, storming, norming and performing. Because the ScrumMaster of Team Beta did not understand team development in the Tuckman model, the team never got out of the storming phase, which explained their lack of progress and the roughness of their results.

The Tuckman Model
  • Forming: The team acts as individuals and there is a lack of clarity about the team’s purpose and individual roles.
  • Storming: Conflicts arise as people begin to establish their place in the team.
  • Norming: There is a level of consensus and agreement within the team. There is clarity about individual roles. The role of the Scrum lead is important in managing this.
  • Performing: The group has a clear strategy and shared vision. It can operate autonomously and resolve issues positively.

But how could this happen? Doesn’t Scrum training help? Why wouldn’t a highly trained ScrumMaster be able to successfully lead his team through forming and storming and into norming and performing?

As I reflected on my own Scrum training, I realized that the leadership skills required to guide people through these phases so they reach their optimum velocity were not part of my Scrum training at all.

The ScrumMaster for Team Alpha had received leadership training a few years before, and understood how to work with people and get them to operate as functional teams. On the other hand, Team Beta’s ScrumMaster was a developer without any experience or training in management or leadership.

Since ScrumMasters often come from development or other technical backgrounds, they have not tended to emphasize people skills in their training and career paths. While they may be incredible at their technical roles, leading teams can be a challenge for them due to the lack of focus in this area.

Wrapping Up

Thus, if you, as a manager, want your Scrum teams to succeed, make sure your ScrumMasters receive leadership training, coaching and mentoring so they understand how to work with people in groups. They must become comfortable with gently moving the members of the teams through the phases of team development as quickly as possible. Otherwise, you may become discouraged due to the lack of results from your teams.

So what happened with Team Beta? After spending a few hours coaching the ScrumMaster on the finer points of leading a group, his team quickly moved to the norming phase and the velocity started to improve. Over time, Team Beta’s velocity went on to match that of Team Alpha.

The post What I Did Not Learn At ScrumMaster Training appeared first on LeadingAgile.

Categories: Blogs

Integrating AutoMapper with ASP.NET Core DI

Jimmy Bogard - Wed, 07/20/2016 - 18:30

Part of the release of ASP.NET Core is a new DI framework that’s completely integrated with the ASP.NET pipeline. Previous ASP.NET frameworks either had no DI or used service location in various formats to resolve dependencies. One of the nice things about a completely integrated container (not just a means to resolve dependencies, but to register them as well) is that it’s much easier to develop plugins for the framework that bridge your OSS project and the ASP.NET Core app. I already did this with MediatR and HtmlTags, but wanted to walk through how I did this with AutoMapper.

Before I got started, I wanted to understand the pain points of integrating AutoMapper with an application. The biggest one seems to be the Initialize call. Most systems I work with use AutoMapper Profiles to define configuration (instead of one ginormous Initialize block). If you have a lot of these, you don’t want a bunch of AddProfile calls in your Initialize method; you want them to be discovered. So first off, solving the Profile discovery problem.

Next is deciding between the static versus instance way of using AutoMapper. It turns out that most everyone really wants to use the static way of AutoMapper, but this can pose a problem in certain scenarios. If you’re building a resolver, you’re often building one with dependencies on things like a DbContext or ISession, an ORM/data access thingy:

public class LatestMemberResolver : IValueResolver<object, object, User> {
  private readonly AppContext _dbContext;

  public LatestMemberResolver(AppContext dbContext) {
    _dbContext = dbContext;
  }

  public User Resolve(object source, object destination, User destMember, ResolutionContext context) {
    return _dbContext.Users.OrderByDescending(u => u.SignUpDate).FirstOrDefault();
  }
}

With the new DI framework, the DbContext would be a scoped dependency, meaning you’d get one of those per request. But how would AutoMapper know how to resolve the value resolver correctly?

The easiest way is to also scope an IMapper to a request, as its constructor takes a function to build value resolvers, type converters, and member value resolvers:

IMapper mapper 
  = new Mapper(Mapper.Configuration, t => ServiceLocator.Resolve(t));

The caveat is you have to use an IMapper instance, not the Mapper static method. There’s a way to pass in the constructor function to a Mapper.Map call, but you have to pass it in *every single time*, and thus not so useful:

Mapper.Map<User, UserModel>(user, 
  opt => opt.ConstructServicesUsing(t => ServiceLocator.Resolve(t)));

Finally, if you’re using AutoMapper projections, you’d like to stick with the static initialization. Since the projection piece is an extension method, there’s no way to resolve dependencies other than passing them in, or service location. With static initialization, I know exactly where to go to look for AutoMapper configuration. Instance-based, you have to pass in your configuration to every single ProjectTo call.

In short, I want static initialization for configuration, but instance-based usage of mapping. Call Mapper.Initialize, but create mapper instances from the static configuration.

Initializing the container and AutoMapper

Before I worry about configuring the container (the IServiceCollection object), I need to initialize AutoMapper. I’ll assume that you’re using Profiles, and I’ll simply scan through a list of assemblies for anything that is a Profile:

private static void AddAutoMapperClasses(IServiceCollection services, IEnumerable<Assembly> assembliesToScan)
{
    assembliesToScan = assembliesToScan as Assembly[] ?? assembliesToScan.ToArray();

    var allTypes = assembliesToScan.SelectMany(a => a.ExportedTypes).ToArray();

    var profiles = allTypes
        .Where(t => typeof(Profile).GetTypeInfo().IsAssignableFrom(t.GetTypeInfo()))
        .Where(t => !t.GetTypeInfo().IsAbstract);

    Mapper.Initialize(cfg =>
    {
        foreach (var profile in profiles)
        {
            cfg.AddProfile(profile);
        }
    });

The assembly list can come from a list of assemblies or types passed in to mark assemblies, or I can just look at what assemblies are loaded in the current DependencyContext (the thing ASP.NET Core populates with discovered assemblies):

public static void AddAutoMapper(this IServiceCollection services)
{
    services.AddAutoMapper(DependencyContext.Default);
}

public static void AddAutoMapper(this IServiceCollection services, DependencyContext dependencyContext)
{
    AddAutoMapperClasses(services, dependencyContext.RuntimeLibraries
        .SelectMany(lib => lib.GetDefaultAssemblyNames(dependencyContext).Select(Assembly.Load)));
}

Next, I need to add all value resolvers, type converters, and member value resolvers to the container. Not every value resolver etc. might need to be initialized by the container, and if you don’t pass in a constructor function it won’t use a container, but this is a safeguard in case something needs to resolve these AutoMapper service classes:

var openTypes = new[]
{
    typeof(IValueResolver<,,>),
    typeof(IMemberValueResolver<,,,>),
    typeof(ITypeConverter<,>)
};

foreach (var openType in openTypes)
{
    foreach (var type in allTypes
        .Where(t => t.GetTypeInfo().IsClass)
        .Where(t => !t.GetTypeInfo().IsAbstract)
        .Where(t => t.ImplementsGenericInterface(openType)))
    {
        services.AddTransient(type);
    }
}
I loop through every class and see if it implements the open generic interfaces I’m interested in, and if so, register them as transient in the container. The “ImplementsGenericInterface” helper doesn’t exist in the BCL, but it probably should :) .

Finally, I register the mapper configuration and mapper instances in the container:

services.AddSingleton<IConfigurationProvider>(sp => Mapper.Configuration);
services.AddScoped<IMapper>(sp =>
  new Mapper(sp.GetRequiredService<IConfigurationProvider>(), sp.GetService));

While the configuration is static, every IMapper instance is scoped to a request, passing in the constructor function from the service provider. This means that AutoMapper will get the correct scoped instances to build its value resolvers, type converters etc.

With that in place, it’s now trivial to add AutoMapper to an ASP.NET Core application. After I create my Profiles that contain my AutoMapper configuration, I instruct the container to add AutoMapper (now released as a NuGet package from the AutoMapper.Extensions.Microsoft.DependencyInjection package):

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddAutoMapper();
}


And as long as I make sure to add this after the MVC services are registered, it correctly loads up all the found assemblies and initializes AutoMapper. If not, I can always instruct the initialization to look in specific types/assemblies for Profiles. I can then use AutoMapper statically or instance-based in a controller:

public class UserController : Controller {
  private readonly IMapper _mapper;
  private readonly AppContext _dbContext;

  public UserController(IMapper mapper, AppContext dbContext) {
    _mapper = mapper;
    _dbContext = dbContext;
  }

  public IActionResult Index() {
    var users = _dbContext.Users.ProjectTo<UserIndexModel>().ToList();
    return View(users);
  }

  public IActionResult Show(int id) {
    var user = _dbContext.Users.Where(u => u.Id == id).Single();
    var model = _mapper.Map<User, UserIndexModel>(user);
    return View(model);
  }
}

The projections use the static configuration, while the instance-based uses any potential injected services. Just about as simple as it can get!

Other containers

While the new AutoMapper extensions package is specific to ASP.NET Core DI, it’s also how I would initialize and register AutoMapper with any container. Previously, I would lean on DI containers for assembly scanning purposes, finding all Profile classes, but this had the unfortunate side effect that Profiles could themselves have dependencies – a very bad idea! With the pattern above, it should be easy to extend to any other DI container.

Categories: Blogs

Rethinking the To Do list

Kanbanery - Wed, 07/20/2016 - 17:24

The article “Rethinking the To Do list” originally appeared on Kanbanery.

I have been fascinated with personal productivity for most of my adult life. I’ve tried everything, every plan and scheme, from Franklin Planner training back in the 80s to Kanban. But now, I’m going to share my thoughts on that most simple of tools, the old-fashioned To Do list.

Most of us have to do lists, even if we don’t think we do. Some of us just keep them in our heads, but we still have them. I’m not a fan of keeping anything in my head, because my memory is terrible, and that just leads to stress. Is there anything worse than that feeling that there’s something critical that you should be doing right now, but you can’t remember what it is? And someone you love or respect is going to be so disappointed in you for not doing it. That’s why I write down everything. Every single thing. My standard reply when someone asks if I’ll do something is, “did you see me write it down? If not, then don’t count on it.” My wife doesn’t ask me for anything verbally anymore, except maybe a hug. She emails requests. And bless her for it.

But I digress. The to do list. It’s a tool we all use in some fashion, and its primary function is to ensure that things get done. But that’s not what it really does, is it? It’s a source of stress, of worry, of self-flagellation, of despair. And while I think there are far better tools than the to do list, even that simple tool can stand a lot of improvement.

To do lists come in two basic forms: time-limited and unlimited. The time-limited form is the daily to do list: everything we plan to do today. The unlimited form is the running to do list: a potentially huge collection of everything we ever agreed to do or hope to do.

The idea behind a running to do list is to get everything done by forcing you to do things in the order in which you commit to them. So even if your list grows to a hundred items, and you really want to do item number 98 today, if number 3 isn’t done, you’ve got to do that first. It’s a way of fighting procrastination, and over time it forces you to think carefully before committing to anything.

The running to do list alone didn’t work for me, because it grew too fast, soon spanning several pages and including things I might never do and things that didn’t need to be done for weeks or months, and so it became distracting when I was trying to figure out what to do now. The daily to do list used to be my biggest enemy, until I got some brilliant advice from my last boss’s wife, fifteen years ago. She told me that on a good day, she got two or three important things done. Any executive might do dozens of things in a day, but a good day was one in which one or two of those dozens of things was important. So she suggested a to do list with no more than three things on it, as long as those three things were really valuable. That advice changed my life.

I used to have daily to do lists with a dozen or more items, and I’d usually do them in order, from the easiest to do to the easiest to put off. The problem is that every day I’d do 50-70% of the items and then move the “easiest to put off” items to the next day. Some of them never got done, and others just got moved until they became emergencies. Once I shortened my daily to do list to include just two categories of things, my problems were solved. Now, I start my day with a very short list that includes only things that satisfy these two criteria:

1) Something really bad will happen if this isn’t done before I go home


2) If I only did this today, and then went home, it would still be a great day.

Rarely does the list have more than three items in it. Sometimes it has only one. One thing that must be done or I’ll lose a client, or get hit with a lawsuit, or get fined for late tax reporting or one thing that if only it was done, I’d feel like I’d done something really important to advance my goals or build my business. On most days, my whole to do list is done before lunch. I can spend the rest of the day any way I like, whether it’s going for a walk in the woods or getting a head start on the next day or dealing with the usual stuff that comes up at work.

But of course, I’m a Kanban guy, and I use personal kanban with Kanbanery at work. What kind of a Kanbanery employee would I be if I didn’t use and love my own product? One column of my Kanbanery board is my daily to do list, and it has a really tight WIP limit. I collapse all the other columns before it and get to work on that list first. Once it’s done, if I still feel like working, I can always pull tasks from the running to do list, which is the first column of my personal Kanbanery board, or I can take a run along the river. That’s part of the beauty of personal kanban. Done right, it gives you the freedom to choose how to spend your time, safe in the knowledge that you aren’t forgetting to do something that’s terribly important right now.
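To make the mechanics concrete, here is a minimal sketch of a “Today” column with a WIP limit of 3. It is purely illustrative; Kanbanery’s actual model is richer, and all names here are invented for the example.

```csharp
using System;
using System.Collections.Generic;

// A toy "Today" column that enforces a WIP limit of 3,
// mirroring the setup described above.
public class TodayColumn
{
    private const int WipLimit = 3;
    private readonly List<string> _tasks = new List<string>();

    // Pull a task into Today only if the column has room.
    public bool TryPull(string task)
    {
        if (_tasks.Count >= WipLimit) return false; // full: finish something first
        _tasks.Add(task);
        return true;
    }

    public void Complete(string task) => _tasks.Remove(task);

    public static void Main()
    {
        var today = new TodayColumn();
        Console.WriteLine(today.TryPull("File the tax report"));  // True
        Console.WriteLine(today.TryPull("Call the key client"));  // True
        Console.WriteLine(today.TryPull("Draft the proposal"));   // True
        Console.WriteLine(today.TryPull("Reorganize the inbox")); // False: over the limit
    }
}
```

The point of the limit is exactly the one made above: the fourth task can only enter Today after something else is completed.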

I’ll be writing more about the evolution of my personal productivity system, which has been featured in Business Insider and Forbes, but for now, if you’re still using to do lists to plan your day, consider keeping them short. Really short. Because in the two decades since Amy Arden gave me that great advice I’ve consistently found it to be true.

Whether I’m in my role as a husband and father planning my weekend, as a worker planning his day, or as an entrepreneur or executive, there are rarely more than three things I do in a day that really have to be done to make it a good day; anything else is just a distraction. That’s the 80/20 principle at work. Give it a try, and tell me how it goes in the comments below. Even better, set up a free Kanbanery account now, put a WIP limit of 3 on your Today column, and see if it doesn’t make you happier and more effective than you’ve ever been.

The article Rethinking the To Do list comes from the Kanbanery site.

Categories: Companies

ATDD Topics

NetObjectives - Wed, 07/20/2016 - 11:52
I've written about several ATDD topics recently on the ATDD website.  Here's a summary of the topics: Testing Contracts for Services There is a growing use of services and micro-services to develop applications. I’ve had numerous questions about how to test the services and who should be responsible for what testing. I wrote a book called Interface-Oriented Design which covered many of these...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Case Study: ATDD - A Critical Success Factor

NetObjectives - Wed, 07/20/2016 - 09:42
At a financial firm where I taught and coached ATDD and TDD, here are the results as reported by one team: “One of the critical success factors for our project was our adoption of ATDD and TDD, including frequent test collaboration, engaging the business in writing acceptance test criteria, and robust test automation.” “Collaborating on writing tests ensured that the entire team understood all...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Extended Call for GOAT 2016 Speakers

Notes from a Tool User - Mark Levison - Tue, 07/19/2016 - 23:32

Gatineau Ottawa Agile Tour 2016
The Gatineau Ottawa Agile Tour 2016 will take place on November 21st and, once again, anyone with Agile experience is welcome to propose a session for the conference. The deadline for submissions for speakers has been extended to Thursday, September 15th, 2016, to better accommodate those who are travelling or on Summer vacation.

Would you like to present a case study or a report on your experience implementing Lean or Agile within your business or workplace? Have you discovered a workshop so useful that you would like to share it with others? Would you like to take the opportunity to share your knowledge on Lean or Agile? If you answered yes to one of these, we would definitely love to hear what you have to offer!

Submit your session proposal here now!

We encourage you to not only learn more about this upcoming opportunity to participate in the value-packed conference, but to Join/Share/Attend the following pages to lend your support and spread the word.

2016 Gatineau Ottawa Agile Tour Facebook Event

Agile Ottawa Facebook Group

Extended Call for Speakers for the Gatineau Ottawa Agile Tour 2016


Categories: Blogs

Extended Call for Speakers for the Gatineau Ottawa Agile Tour (GOAT) 2016

Agile Ottawa - Tue, 07/19/2016 - 21:44
We have already received some excellent session proposals for GOAT 2016; thank you to everyone who took the time to submit them. Summer has finally arrived, and what a hot one it is! We realized that, even though you wanted to submit a session, we … Continue reading →
Categories: Communities

MediatR Extensions for Microsoft Dependency Injection Released

Jimmy Bogard - Tue, 07/19/2016 - 21:07

To help those building applications using the new Microsoft DI libraries (used in Orleans, ASP.NET Core, etc.), I pushed out a helper package to register all of your MediatR handlers into the container.


To use, just call the AddMediatR method wherever you have your service configuration at startup:

public void ConfigureServices(IServiceCollection services)
{
    // registers IMediator, all handlers, and the delegate factories,
    // scanning the assembly containing Startup for handlers
    services.AddMediatR(typeof(Startup));
}

You can either pass in the assemblies where your handlers are, or you can pass in Type objects from assemblies where those handlers reside. The extension will add the IMediator interface to your services, all handlers, and the correct delegate factories to load up handlers. Then in your controller, you can just use an IMediator dependency:

public class HomeController : Controller
{
    private readonly IMediator _mediator;

    public HomeController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public IActionResult Index()
    {
        var pong = _mediator.Send(new Ping { Value = "Ping" });
        return View(pong);
    }
}
And you’re good to go. Enjoy!
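For readers who want to see the shape of what the package wires up without standing up a full ASP.NET Core app, here is a self-contained sketch of the mediator pattern. It is illustrative only: the real MediatR resolves handlers through the DI container and its delegate factories, not a hand-rolled dictionary, and the Ping/Pong names simply follow the article’s example.

```csharp
using System;
using System.Collections.Generic;

public class Ping { public string Value { get; set; } }
public class Pong { public string Value { get; set; } }

public interface IMediator
{
    TResponse Send<TResponse>(object request);
}

// Toy mediator: a dictionary plays the role of the container's handler registry.
public class SimpleMediator : IMediator
{
    private readonly Dictionary<Type, Func<object, object>> _handlers =
        new Dictionary<Type, Func<object, object>>();

    public void Register<TRequest, TResponse>(Func<TRequest, TResponse> handler)
        => _handlers[typeof(TRequest)] = r => handler((TRequest)r);

    public TResponse Send<TResponse>(object request)
        => (TResponse)_handlers[request.GetType()](request);
}

public static class Demo
{
    public static void Main()
    {
        var mediator = new SimpleMediator();
        mediator.Register<Ping, Pong>(p => new Pong { Value = p.Value + " Pong" });

        var pong = mediator.Send<Pong>(new Ping { Value = "Ping" });
        Console.WriteLine(pong.Value); // prints "Ping Pong"
    }
}
```

The controller never knows which handler runs; it only depends on IMediator, which is what makes the registration extension above all you need at startup.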

Categories: Blogs

Software Craftsmanship as an Act of Courage

BigVisible Solutions :: An Agile Company - Tue, 07/19/2016 - 19:00


“Agile is Dead (Long Live Agility)” – Dave Thomas.

A great quote, if ever I heard one. If you haven’t read or listened to Dave Thomas before, I highly recommend his discussions on the subject of agility, not “Agile”: for example, this video from GOTO 2015.

For my part, I agree with much of Dave’s perspective. Here are some facts:

  • Waterfall, RUP, and Agile were all conceived as engineering processes for software development.
  • All of them are processes (journeys) and none of them guarantee end results.
  • You cannot buy, sell, or give someone agile.
  • You can teach agile concepts, and demonstrate some circumstances in which to apply those concepts.
  • We want software to do complex things, but we want a simple way to describe, develop, and validate.

Dave believes that at the root of the problem is the fact that Agile has been turned into a noun (with a big A), something that can be given to someone or put on top of something else. But previously agile (small a) was just an adjective meaning “able to move quickly and easily”. “Agile software development” therefore was simply a process for developing software “quickly and easily”. Today Agile has come to mean much more than that, though the goal is still to deliver software (and other products) quickly and easily. It isn’t now, nor was it ever designed to be, something (a noun) that you add to an existing development process to “solve” delivery problems. To become agile in the delivery sense (that is, “able to move quickly and easily”), you have to put in the work, just like you would have to in order to become physically agile.

Ok, so how do I know I’m going to get the software I wanted, if there aren’t guarantees? The Agile Transformation Movement does more than any previous methodology to answer that question. However, some of those selling “agile” don’t cover this area enough or, worse, get fixated on a single implementation recommendation (co-location, for example). A broader view has to be considered and addressed.

Practice Your Craft

Software development and IT operations are about creating specific solutions to different problems. Unless all companies, governments, not-for-profits, and individuals use the exact same software for the exact same purposes without exception (an end that I cannot foresee occurring anytime soon), we need to look at the practice of software development and technology operations. This doesn’t mean you should outsource all of your software development and technology operations: often there is a direct conflict between your goals and those of a service provider. It doesn’t mean you have to build everything internally, either; that’s just not practical. It does mean that the quality you want should correspond to the effort you put into getting it. All applications are developed to be used, either by people or by other programs; software is therefore, by definition, a foundation. The question is: is the software you are creating cement or quicksand? High-quality software requires high-quality development by developers who know their craft.


These developers focus on building a rock-solid foundation that users can build upon without fear of shoddy architecture. Today few people take this perspective, assuming that developers wouldn’t knowingly create a buggy product, or that developers who do take this perspective are somehow less productive and therefore less valuable. The unfortunate truth is that, if current software were buildings, many wouldn’t pass code.

Return on Team as a “Guarantee”

“Any book that tells you, ‘This is how to build software,’ is wrong. Because, unless that book was written for your team and your company doing your project at this particular time, it doesn’t know how you should write software.” – Dave Thomas

Software development and software operations are crafts, practiced at varying levels, by a huge number of people of different skills and capabilities. What’s important, however, is that these crafts are best practiced by teams. The best teams are those that grow together with each member improving his or her contribution to their craft in concert with the skills of the rest of their team. Members of high-performance teams cultivate complementary skills so that the team product is greater than any individual member could produce alone. A persistent, agility-focused team grows according to their needs, abilities, and personalities. “One size fits all” won’t work here. As Dave Thomas said in his GOTO 2015 session:

“Any book that tells you, ‘This is how to build software,’ is wrong. Because, unless that book was written for your team and your company doing your project at this particular time, it doesn’t know how you should write software.”

Recently, a colleague of mine, Brent Barton, wrote about a relatively novel concept in his blog post “Using Return on Team to Enhance Business Agility”. Anyone even casually acquainted with the concept of Return on Investment (ROI) should be able to grasp Return on Team without difficulty: persistent teams, by virtue of growing together and in complementary fashion, can estimate their throughput, and better still their outcomes, more accurately than newly formed teams. If the team knows its collective capabilities and those of each individual member, made possible by spending lots of time together delivering value on many different projects, then it can—if so inclined—offer the “guarantee” of quality and speed that businesses are looking for. Either way, changing your teams up every time you have a new project only makes estimating outcomes that much harder.


If you need software built and operated to meet a specific set of needs, you are best served by a team you already know and, if that isn’t possible, a team that is invested in your objectives now and into the future. You don’t want a team that is in the business of “deliver and run”. Teams that are not invested in the operation of your software (whether they are contractors or your own employees) won’t be invested in the security, architecture, and maintenance capabilities of what they deliver. Ideally your software team can’t be logically separated (even if they are physically) from the users and decision makers of the software.

Something else also begins to take form when you invest in persistent teams: they begin to trust that their relationships and partnerships within the team are dependable, and that the longevity of the team isn’t illusory. Persistent teams can form alliances and develop an identity that strengthens the bond between its team members—a luxury that mix-and-match team members don’t have and that their managers wouldn’t want them to develop. If the team is only going to be dissolved at the end of the project, you don’t spend time building the kind of deep relationships that provide a sense of security, self-worth and loyalty to the organization as a whole. These feelings in turn allow teams and team members to deepen their individual and collective skills, to put their heart and soul into their work, and thus build trust with the business team by consistently providing consistent value. In short, when the business invests in persistent teams, the teams invest in the business.


It Ain’t Easy Being a Craftsman

Inherent in all of this is the assumption that business people understand and buy into the value of high-quality software products that stand the test of time and are easier to maintain. If you have that, you’re golden. If you don’t, then we need to have a discussion of a different nature. Assuming you have the right buy-in, you’ll still have to gain the respect and buy-in of your development teams. Traditional developers have been seduced into believing either that they can do no wrong or, paradoxically, that they can do no right. Those who believe themselves infallible have come to see software development as an art and themselves as artists, and to think that anyone who disagrees simply doesn’t understand because they aren’t developers themselves. At the other end of the spectrum are those who feel they are nothing more than a cog in a great machine, replaceable on a whim, and that the software they are creating is nothing more than a statement of individual power for “bosses” on a power trip. As a result, these developers don’t invest any effort in their craft because the whole process is demeaning.

As a fellow developer, I can say that many developers often don’t know what they’re doing (or why), take shortcuts just to meet an arbitrary schedule, or, worse, intentionally skip steps and skimp on quality. Forced to operate with tools not of their choosing, they come to rely on faulty logic and shortcuts to create code that barely sticks together. Forced to produce a result in insufficient time, they skip steps, including taking the time to understand the problem they are attempting to solve. In some cases, they may be putting in the least amount of effort possible, simply to get out of something they didn’t really want to do in the first place.

Being a software craftsman means assuming a personal code of ethics in spite of it all — a code of ethics where defects are a choice, quality isn’t something you can add on later, and life-long learning and continuous improvement of one’s craft are the only way to ensure that you are always doing your best. The Association for Computing Machinery (ACM) has gone so far as to create its own code of ethics. The craftsman never says, “It works on my machine” or “Bugs are QA’s problem.” Another colleague, Jerry Rajamoney, wrote more about software craftsmanship and its history in this blog post.


No team of software craftsmen and craftswomen, no matter how awesome, can do it alone, though. I recommend that you form a tight relationship with an individual or organization that will provide a craftsmanship perspective, if you don’t have one already. This “coach” can help you identify and build teams and, more importantly, grow and maintain those teams over time. Coaches look at the way teams behave, and their perspective will help you find, build, coordinate, and maintain the teams best suited to your circumstances, improving their craft together to your mutual benefit.

I’ll end with a battle cry, or at least a call to action, from one developer to another. It’s taken right from Dave’s presentation:

  • Be courageous.
  • Stand up to fear-mongers.
  • Use the values that you already possess to create practices that will lead to high-quality software.
  • Get feedback, refine and repeat.

Remember that the time to exercise courage is when you’re developing. Any time after that is too late.




The post Software Craftsmanship as an Act of Courage appeared first on SolutionsIQ.

Categories: Companies

Case Study: ATDD Helps Solve Development Issues

NetObjectives - Tue, 07/19/2016 - 14:26
I’ve been teaching Acceptance Test-Driven Development (ATDD) for many years. At the start of every course, I ask the attendees for issues they have with their development processes. After they have experienced the ATDD process, including creation of acceptance tests, I review their issues to see whether they feel that ATDD will help, hurt, or be neutral with respect to those issues. Here’s...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

How Rally Does… Strategy Deployment

AvailAgility - Karl Scotland - Tue, 07/19/2016 - 12:30

This is another post originally published on the Rally Blog which I am reposting here to keep an archived copy. It was part of the same series as the one on annual and quarterly planning, in which we described various aspects of the way the business was run. Again, apart from minor edits to help it make sense as a stand alone piece I have left the content as it was.

Strategy Deployment is sometimes known as Hoshin Kanri, and like many Lean concepts, it originated from Toyota. Hoshin Kanri is a Japanese term whose literal translation can be paraphrased as “compass control.” A more metaphorical interpretation, provided by Pascal Dennis in Getting the Right Things Done, is that of a “ship in a storm going in the right direction.”


Strategy Deployment is about getting everyone involved in the focus, communication, and execution of a shared goal. I described in previous posts how we collaboratively came up with strategies and an initial plan in the form of an X-matrix. The tool that we use for the deployment is the Strategic A3.

Strategic A3s

A3 refers to the size of the paper (approximately 11 x 17 inches) used by a number of different formats to articulate and communicate something in a simple, readable way on a single sheet of paper. Each rock or departmental team uses a Strategic A3 to describe its plan. This forms the basis for its problem-solving approach by capturing all the key hypotheses and results, which helps identify opportunities for improvement.

The different sections of the A3 tell a story about the different stages of the PDSA cycle (Plan, Do, Study, Adjust). I prefer this formulation, from Dr. W. Edwards Deming, to the original PDCA (Plan, Do, Check, Act) of Walter A. Shewhart, because “Study” places more emphasis on learning and gaining knowledge. Similarly, “Adjust” implies feedback and iteration more strongly than does “Act.”

This annual Strategic A3 goes hand-in-hand with a macro, longer-term (three- to five-year) planning A3, and numerous micro, problem-solving A3s.

Anatomy of a Strategic A3

This is what the default template that we use looks like. While it is often good to work on A3s using pencil and paper, for wider sharing across the organisation we’ve found that using a Google document works well too.


Each A3 has a clear topic, and is read in a specific order: down the left-hand side, and then down the right-hand side. This flow aligns with the ORID approach (Objective, Reflective, Interpretive, Decisional), which helps avoid jumping to conclusions too early.

The first section looks at prior performance, gaps, and targets, which give objective data on the current state. Targets are a hypothesis about what we would like to achieve, and performance shows the actual results. Over time, the gap between the two gives an indication of what areas need investigation and problem-solving. The next section gives the reactions to, and reflections on, the objective data. This is where emotions and gut feelings are captured. Then comes interpretation of the data and feelings to give some rationale with which to make a plan.

The three left-hand sections help us look back into the past before we make any decisions about what we should do in the future. Having completed them, we have much better information with which to complete the action plan, adding high-level focus and outcomes for each quarter. The immediate quarter will generally have a higher level of detail and confidence, with each subsequent quarter becoming less granular. Finally, the immediate next steps are captured, and any risks and dependencies are noted so that they can be shared and managed.
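Pieced together from the description above, the layout reads roughly like this. This is a sketch, not the exact Rally template (which appeared as an image in the original post):

```text
+---------------------------------+---------------------------------+
| Topic: <team or rock>                                             |
+---------------------------------+---------------------------------+
| 1. Performance, gaps, targets   | 4. Action plan                  |
|    (objective data)             |    (focus + outcomes/quarter)   |
| 2. Reactions / reflections      | 5. Immediate next steps         |
|    (emotions, gut feelings)     |                                 |
| 3. Interpretation / rationale   | 6. Risks and dependencies       |
+---------------------------------+---------------------------------+
  Read down the left-hand side first, then down the right.
```

The numbering follows the reading order described above: past-facing sections on the left, future-facing decisions on the right.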

Co-creating a Strategic A3

As you can probably imagine from reading the previous posts, completing a Strategic A3 can be a highly collaborative, structured, and facilitated process. One team with which I work closely had recently grown to a point where we would benefit from our own Strategic A3, rather than being part of a larger, international one. To create it we all got together for a day in our Amsterdam office. We felt that this would allow us to align more strongly with the corporate strategy and to communicate more clearly what we were doing and where we needed help.

We began by breaking into small groups of three to four people, mostly aligned around a regional territory. These groups spent some time filling in their own copy of the A3 template. We then reconvened, and each group gave a readout of its discussions, presenting the top three items from each section, which we captured with post-it notes on flip charts. Having gone around each group, I asked everyone to silently theme the post-its in each section until everyone seemed happy with the results. This led to a discussion about each theme and to identifying titles for them. We still had quite a few themes, so we finished by ranking them with dot-voting so that we could be clear on which items were most important.

Our last step was to identify the top three items on the A3 that we wanted to highlight to the wider business. This turned out to be a relatively simple conversation. The collaborative nature of the process meant that everyone had a clear and shared understanding of what was important and where we needed focus.


Corporate Steering

Strategy deployment is not a one-off, top-down exercise. Instead, the Strategic A3 is used as a simple tool that involves everyone in the process. Teams prepare and plan their work, in line with the corporate goals, and each quarter they revisit and revise their A3s as a means of communicating status and progress. As performance numbers become available an A3 will be updated with any changes highlighted, and the updated A3 then becomes a key input into Quarterly Steering.

Categories: Blogs

Free Retrospective Tools for Distributed Scrum Teams

Scrum Expert - Tue, 07/19/2016 - 09:00
Even if Agile approaches favor collocated teams, distributed Scrum teams are more common than we might think, as many Agile software development teams are based on a virtual organization. This article presents some free online tools that can be used to facilitate retrospectives for distributed Scrum teams. You will find here only tools that can be used for free over the long term; we do not list tools that offer only a free trial limited by duration or by number of retrospectives. We also mention only tools that have features specifically dedicated to Scrum retrospectives. There are many other tools that Scrum teams might use, from video conferencing platforms to online whiteboard software, but mentioning all of them would result in a book rather than an article. If you want to add a tool that fits these requirements to this article, just let us know using the contact form. Updates July 19 2016: added Fun Retro. IdeaBoardz: IdeaBoardz is a free online team collaboration tool. It allows teams to collectively gather inputs, reflect, and retrospect. It is especially useful for distributed teams. For Scrum retrospectives, you can create two types of boards: standard or starfish. More board options are available (pros & cons, to-dos) that could also be useful. You can edit the titles of the sections of your board. The interface seems very intuitive, but sometimes I ended up in situations where I didn’t know how to exit gracefully, for instance when I [...]
Categories: Communities

HtmlTags 4.1 Released for ASP.NET 4 and ASP.NET Core

Jimmy Bogard - Mon, 07/18/2016 - 20:20

One of the libraries that I use on most projects (but probably don’t talk about it much) is now updated for the latest ASP.NET Core MVC. In order to do so, I broke out the classic ASP.NET and ASP.NET Core pieces into separate NuGet packages.

Since ASP.NET Core supports DI from the start, it’s quite a bit easier to integrate HtmlTags into your ASP.NET Core application. To enable HtmlTags, you can call AddHtmlTags in the method used to configure services in your startup (typically where you’d have the AddMvc method):

services.AddHtmlTags(reg =>
{
    // illustrative convention: append "?" to every label's text
    // (reg.Labels.Always is an assumed entry point for this chain)
    reg.Labels.Always
       .ModifyWith(er => er.CurrentTag.Text(er.CurrentTag.Text() + "?"));
});

The AddHtmlTags method takes a configuration callback, a params array of HtmlConventionRegistry objects, or an entire HtmlConventionLibrary. The one with the configuration callback includes some sensible defaults, meaning you can pretty much immediately use it in your views.

The HtmlTags.AspNetCore package includes extensions directly for IHtmlHelper, so you can use it in your Razor views quite easily:

@Html.Label(m => m.FirstName)
@Html.Input(m => m.FirstName)
@Html.Tag(m => m.FirstName, "Validator")

@Html.Display(m => m.Title)

Since I’m hooked into the DI pipeline, you can make tag builders that pull in a DbContext and populate a list of radio buttons or drop-down items from a table (for example). And since it’s all object-based, your tag conventions are easily testable, unlike tag helpers, which are solely string-based.
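To illustrate that last point (why an object model is easier to test than string-built HTML), here is a toy tag class. This is not the real HtmlTags API, just the shape of the idea: a test can assert on the object’s state before any markup exists.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A toy, object-based tag (NOT the real HtmlTags library) showing why
// object-based conventions are testable without parsing HTML strings.
public class TinyTag
{
    public string TagName { get; }
    private readonly Dictionary<string, string> _attrs = new Dictionary<string, string>();
    private string _text = "";

    public TinyTag(string name) => TagName = name;

    public TinyTag Attr(string key, string value) { _attrs[key] = value; return this; }
    public TinyTag Text(string text) { _text = text; return this; }

    // Rendering happens last; everything before this is plain object state.
    public override string ToString()
    {
        var attrs = string.Concat(_attrs.Select(a => $" {a.Key}=\"{a.Value}\""));
        return $"<{TagName}{attrs}>{_text}</{TagName}>";
    }

    public static void Main()
    {
        var tag = new TinyTag("label").Attr("for", "FirstName").Text("First Name");
        // A unit test asserts on the object, not the markup:
        Console.WriteLine(tag.TagName == "label"); // True
        Console.WriteLine(tag.ToString());         // <label for="FirstName">First Name</label>
    }
}
```

A string-based helper would force the test to parse or pattern-match the final HTML; the object model lets conventions be verified directly.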


Categories: Blogs

Extended Call for Speakers for the Gatineau Ottawa Agile Tour 2016

Agile Ottawa - Mon, 07/18/2016 - 20:10
We have received some amazing speaker submissions for GOAT 2016, thank you to all who took the time to submit their proposals. Summer is finally here, and what a hot one it is! We realize that although you want to submit … Continue reading →
Categories: Communities
