Blogs

Agile Retrospectives: How Often Do You Do Them?

Ben Linders - Wed, 03/30/2016 - 14:24
Agile teams use retrospectives to reflect on and improve their way of working. Many Scrum teams close every iteration with a retrospective. Teams that use Kanban often hold a weekly retrospective, or do a mini-retrospective as often as needed to learn from a problem and improve in small steps. How often do your teams do agile retrospectives? Continue reading →
Categories: Blogs

Agile and Lean Program Management is Done

Johanna Rothman - Wed, 03/30/2016 - 13:50

I sent my newsletter, Scaling Agile and Lean to Programs, to my subscribers yesterday. (Are you one of them? No? You should be!)

If you are trying to use agile for several projects that together deliver value (a program), you might be wondering what the “right” approach is. You’ve heard of frameworks. Some of them seem to be a bit heavy.

Instead of a framework, consider your context. You and your organization are unique. Do you have hardware to integrate into your product? Do you have agile and non-agile teams who are supposed to deliver? Are you trying to work in iterations and they don’t quite work at the problem-solving level?

You have many choices. In Agile and Lean Program Management: Scaling Collaboration Across the Organization, I offer you options for how to think about and solve these and many other problems. The book is principle-based, not practice-based. That way, if you consider the principles, you'll be in great shape, regardless of what you decide to do.

Please do check out the book. It’s available everywhere fine books are sold. (I love saying that even if it is passive voice!)

Categories: Blogs

Managing Dotfiles With Ansible

Yesterday I posted about managing our local configuration with Ansible, and today I'm going to continue down that path by putting my zsh configuration under configuration management.

Installing ZSH and oh-my-zsh

First up, let's install our preferred shell and customizations. For me this is zsh and oh-my-zsh. Up front, I know this is probably going to be a multi-step process, so I'm going to create a local role to bundle up the tasks and templates we'll be using. I create a new directory named roles and add it to the roles_path in our ansible.cfg. This new directory with our new role will look like the following.
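
Roughly, the layout ends up like this (the file names follow the standard Ansible role conventions, and the role name matches the paths used later in this post):

roles/
  jamescarr.dotfiles/
    defaults/
      main.yaml
    tasks/
      main.yaml
    templates/
      zshrc.j2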

For our initial set of tasks we’ll install zsh via homebrew, configure zsh as the current user’s shell and install oh-my-zsh.
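
Something along these lines in jamescarr.dotfiles/tasks/main.yaml should do it. Treat this as a sketch: the module choices (user for the shell change, git for oh-my-zsh) and the paths are assumptions on my part, so adapt them to your machine.

---
- name: Install zsh
  homebrew:
    name: zsh
    state: latest

- name: Set zsh as the current user's shell
  become: yes
  user:
    name: "{{ ansible_user_id }}"
    shell: /usr/local/bin/zsh

- name: Install oh-my-zsh
  git:
    repo: https://github.com/robbyrussell/oh-my-zsh.git
    dest: "{{ ansible_env.HOME }}/.oh-my-zsh"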

You can see this change in its entirety here. Next up we’ll add a template for our zshrc that we can customize to our liking. To start we’ll grab the zshrc template from the oh-my-zsh git repository and save it to jamescarr.dotfiles/templates/zshrc.j2.

A good first piece of literal text to template out is the zsh theme and the plugins loaded up. We’ll define these with default variables under jamescarr.dotfiles/defaults/main.yaml.
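
As a sketch (the variable names here are my own, so pick whatever you like), jamescarr.dotfiles/defaults/main.yaml could start out like this:

---
zsh_theme: robbyrussell
zsh_plugins:
  - git
  - brew

A template task in tasks/main.yaml then renders the zshrc from those variables; inside zshrc.j2, the literal values become ZSH_THEME="{{ zsh_theme }}" and plugins=({{ zsh_plugins | join(' ') }}):

- name: Template out the zshrc
  template:
    src: zshrc.j2
    dest: "{{ ansible_env.HOME }}/.zshrc"
    backup: yes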

You'll notice here that we're also polite enough to back up the existing zshrc file if it is present. A nice benefit of this approach is that we can now switch up the different pieces of our .zshrc configuration from our playbook by overriding the default variables.

You can find everything up to this point at this referenced commit in jamescarr/ansible-mac-demo.

Up Next

Obviously a topic like zsh customizations is a rather large undertaking deserving of its own post so tomorrow I’ll share some of my personal zsh functions and aliases that I find extremely useful to start off with.

Well, did you find this useful? Did you run into any problems following along? Please let me know in the comments below!

Categories: Blogs

Why Messaging? Why Not Have The Web Server Handle The Request?

Derick Bailey - new ThoughtStream - Wed, 03/30/2016 - 13:30

A reader of my RabbitMQ Patterns email course recently asked a question about using messaging systems within a web application. Specifically, they wanted to know why you would use a request/reply pattern over a messaging server, instead of just handling the request within the HTTP call and the web server itself.

The short answer is, “it depends”.

There are times when the request can and should be handled by the web server, directly. Loading a blog post to view? Yeah, that’s probably a good call for the web server to handle. But there are also times when it doesn’t make sense for the web server to handle the workload, directly.

The following is a short conversation that took place via email, regarding messaging vs HTTP.

Shifting The Workload

Them:

What are the benefits of using a request / response queue over just doing the work as part of http request?

For fire and forget messages I can see the performance benefits…

Is it just being able to shift the work load to another service? Or abstract some complexity of the work?

Me:

That’s the core of it, really – shifting the workload to another service.

If you have an external system that contains the information you need, or can do the work within a few hundred milliseconds, it might make sense to do request/reply.

Scheduled Code Example

Me, continued:

For example, I have a system that uses a node.js module for running tasks at specific times. It’s a bit like a cron job, but it runs entirely in node.

This module lives in a back-end service that runs in its own process. I used a separate process because this service has nothing to do with handling HTTP requests and should not live in the HTTP context or web server, directly.

But, my web app needs to get a list of what’s running next, and display them on a particular page. So how do I do that? What are the options?

Read The Database Directly

I could read the database collection that the scheduler module uses – it’s in the same database that the web app uses. But, that would be a backdoor into the functionality the module provides. I would have to re-create the logic of the scheduler module within my web app, to translate the data into something meaningful for the user.

No, thanks.

A New Schedule Module Instance

I could create a new instance of the scheduler module in my web app. But this would create two sources of the truth on what is running next, and they would be in conflict with each other.

Each instance would assume it owns the data in the database (because the instances don’t know about each other). This would cause problems when one updates the database and the other tries to update it again.

Again: No, thanks.

Request/Reply

The solution, in this case, was to use request/reply to get the list of items from the existing back-end service. That way I have a single source of the truth for the scheduled items, and don’t have to re-create logic from an existing module.
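
To make the shape of that concrete, here is a rough sketch of the web app side using the Node.js amqplib client. This is not the actual code from my system; the queue name and message format are made up for the example.

var amqp = require("amqplib");

// Ask the back-end scheduler service for its list of upcoming items.
function getUpcomingSchedules() {
  return amqp.connect("amqp://localhost").then(function(conn) {
    return conn.createChannel().then(function(ch) {
      // exclusive, server-named queue to receive the reply on
      return ch.assertQueue("", { exclusive: true }).then(function(q) {
        return new Promise(function(resolve) {
          var correlationId = Math.random().toString(36).slice(2);

          ch.consume(q.queue, function(msg) {
            if (msg.properties.correlationId === correlationId) {
              resolve(JSON.parse(msg.content.toString()));
              conn.close();
            }
          }, { noAck: true });

          // the scheduler service consumes from this queue and replies
          ch.sendToQueue("schedule.list-upcoming", Buffer.from("{}"), {
            replyTo: q.queue,
            correlationId: correlationId
          });
        });
      });
    });
  });
}

getUpcomingSchedules().then(function(schedules) {
  console.log("next up:", schedules);
});

The back-end scheduler service consumes from that request queue, builds the list of upcoming items, and publishes the reply to the replyTo queue with the same correlationId.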

Decoupling The Database

Them:

Ah thanks, that helps. Nice, interesting idea. Means your web app isn't so coupled to the database.

Me: 

Exactly!

There’s a saying I heard a long time ago, “a database is not an integration layer.”

I don’t remember where I first heard that, but it continues to ring true after many years.

I could have tried to use the database as an integration layer, but that would have added complexity and potential problems to the system. 

It made more sense for me to use messaging in this scenario, and allow the one source of truth on the upcoming schedules to be that one source of truth.

 

Beyond Reading: Writes With Messaging

There are far more examples of what can (and should) be done with messaging when handling HTTP requests. The list is nearly endless, going so far as to suggest that even database writes should be handled through messages!

In my RabbitMQ For Developers training package, I spoke with Anders Ljusberg about Event Sourcing and messaging systems for database writes.

It was an enlightening conversation, hearing about the need to push a database write to a back-end service, and how this affects the overall architecture of a web application.

Be sure to check out the complete RabbitMQ For Developers course, for this and other interviews, screencasts, ebooks and more!

Categories: Blogs

Scaling Scrum with The Nexus Framework

TV Agile - Wed, 03/30/2016 - 09:57
In this video, SSW Chief Architect Adam Cogan discusses the Nexus Framework, an Agile framework designed for large-scale software development, with Scrum.org team member Steve Porter. The Nexus framework is presented by Scrum.org as a framework that drives to the heart of scaling: cross-team dependencies and integration issues. It is an exoskeleton that […]
Categories: Blogs

Links for 2016-03-29 [del.icio.us]

Zachariah Young - Wed, 03/30/2016 - 09:00
Categories: Blogs

Agile Change or Adoption Always Starts with Why

Notes from a Tool User - Mark Levison - Wed, 03/30/2016 - 07:08

Your organization has decided to become more “Agile.” Why? As we learned in a previous blog post, “Because Our Competitors Are” is not a valid – or sensible – reason.

Before embarking on a change, adoption, or improvement program, you need to know the rationale behind that decision. So… why Agile?

A traditional approach to answering this question might see the executive team going off-site for two to three days and holding a workshop where they decide why they should be Agile, then design an adoption strategy, and then summarize the whole thing in a few sentences to be sent out in a memo.

Typically, large-scale change initiatives have a lot more ceremony, more meetings, and more setup than this. However, there are several key failings, including that they involve only a select few executives in the envisioning and decision-making process, and they attempt to plan for the long haul.

There are dozens of examples in our industry of failed change efforts that have cost billions of dollars and proved that this approach doesn’t work. At Nokia, Stephen Elop issued the famous ‘burning platform’ memo in 2011, and yet two years later the company was sold to Microsoft. In 2013 Avon had to write off $125 million[1] of work that built an enterprise software implementation which drove representatives away. This was change that failed to help the very people it was intended for.

These and other failures involve some combination of the following:

  • Why – The “Why” isn’t understood by most of the victims of change.
  • Strategy – The “Strategy” created by the executive group doesn’t make sense to all of the people doing the work.
  • Ownership – People at the edges of the system (who do most of the work) feel no ownership of the change.
  • Connection – The strategy doesn’t appear connected to the problems that the people at the edges of the system are experiencing.
  • Improvement – The strategy appears to improve the lot of the executives, but not of the doers.
  • Culture – The change doesn’t fit the organization culture.
  • Leadership – Top level is asking for change but doesn’t appear to be involved in making it happen.

To be effective, Agile organizational change needs to… well, involve the Organization! Not just the executives who have made the decree, often without fully understanding what the goals of the change are. This shouldn’t be a quick decision made at a two-day corporate retreat. It needs to be an ongoing effort to figure out the “why” collaboratively and share it effectively, being mindful of some essential ingredients.

We address those ingredients in the next blog post: Agile Change or Adoption – the Steps to Go from “Why” to “How”

[1] Avon’s Failed SAP Implementation A Perfect Example Of The Enterprise IT Revolution – Ben Kepes: http://www.forbes.com/sites/benkepes/2013/12/17/avons-failed-sap-implementation-a-perfect-example-of-enterprise-it-revolution

Image attribution: http://photodune.net/

Categories: Blogs

Managing Your Macbook with Ansible

For a long time I've been a big believer in Infrastructure as Code, and I have always wanted to use configuration management to provision my personal workstation and keep it constantly updated to an expected state. Boxen was one of the first tools I saw in this space, and it even seemed like it might be comfortable since I was using Puppet at the time. However, I never really had a lot of luck with it, and the original aim of Boxen was actually lost on us at Zapier, since we engineered a very nice docker-compose based setup that lets anyone begin running and hacking on Zapier locally, constrained only by the time it takes to download the Docker images for the first time.

That being said, when we began transitioning from Puppet to Ansible last year, I naturally started using it locally to whet my appetite a bit. Here's a brief rundown of how I'm currently using Ansible to manage my laptop, and some notes on where to go next.

Getting Started

There are several guides out there on getting Ansible installed, the most authoritative being the instructions right on Ansible’s website. I won’t repeat those well written steps here.

Once that’s all done let’s run ansible --version and verify we’re running Ansible 2.0.1.0 or above. If you’re visiting from the future then I will have to say that I am really unsure if this blog post will work with 3.0.0 or whatever release is out then. Keep in mind this post is written in 2016. 🙂

First up we’ll create a very simple Ansible playbook that just prints a hello world message to the console to ensure we have everything configured correctly.
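
Something like this minimal main.yml will do; targeting localhost with a local connection is the assumption here, since we're provisioning the machine we're running on:

---
- hosts: localhost
  connection: local
  tasks:
    - name: Say hello
      debug:
        msg: "Hello World"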

Place this in a project directory (I name mine “personal-dev-laptop”) and run ansible-playbook main.yml. If all is configured correctly you’ll see a playbook run that executes a task that prints out “Hello World” to the console.

Homebrew

The most important piece of a provisioning system is package management, and Ansible is no different. Homebrew is the go-to on OS X, and thankfully Ansible has a pretty decent homebrew module for managing the state of different Homebrew packages. Let's dip our toes in by adding a task to ensure macvim is installed and at the latest version.
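
Added under the tasks in our main.yml, a sketch of that task looks like this:

    - name: Ensure macvim is installed and up to date
      homebrew:
        name: macvim
        state: latest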

The nice benefit here is that each time we run Ansible, macvim will automatically get updated to the latest available package. However, if we want to ensure a package is simply installed but don't want to upgrade it on every run, we can set the state to `present`. If, after a while, we've worked with vim and decided that it's just not for us and we'd prefer emacs instead, we could set macvim's state to absent and emacs's state to latest.

Taking It Further

Sure, we can just keep adding our own tasks to install packages, perhaps even using a with_items iterator to work through a big list of them, but sooner or later we're going to be duplicating a lot of work someone else has done. That makes this a good time to introduce third party roles installed via Ansible Galaxy. There are most likely several good roles out there, but my favorite so far is geerlingguy.homebrew. I usually put a requirements yaml file in the root of my project with the role I want to use and the version I want to lock in.
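
Something like this requirements.yaml works, with the version number here being just an example – pin whichever release you actually want:

---
- src: geerlingguy.homebrew
  version: 1.1.0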

Now to install this third party role we’ll run ansible-galaxy install -p vendor -r requirements.yaml. The -p switch will install it to a local directory named vendor so we don’t clobber the global include path and we can add that directory to our project’s .gitignore so that it isn’t stored in git. We also add an ansible.cfg to specify the role path for third party roles we’ll be using.
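
A minimal ansible.cfg for that looks like:

[defaults]
roles_path = vendor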

Now we also update our main.yml to include a few changes. First, we include the new role we just imported, and then we move the packages we want to install into variables that the homebrew role will utilize.
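
A sketch of the updated main.yml; the homebrew_installed_packages and homebrew_cask_apps variable names are the ones the geerlingguy.homebrew role documents, so double-check them against the version you pinned:

---
- hosts: localhost
  connection: local
  vars:
    homebrew_installed_packages:
      - macvim
    homebrew_cask_apps:
      - spotify
      - iterm2
  roles:
    - geerlingguy.homebrew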

This time we'll run with the -K switch, since this role also ensures that homebrew is installed and will require sudo access to do so. Now I know what you're thinking… you're thinking "James is trying to hack my box!" and quickly closing your browser tab. Obviously you should never provide sudo access without giving the source code a look over; the most important pieces are the main task file and the meta file, where there could be dependent roles. After careful inspection we decide all is good and run ansible-playbook -K main.yml. Congratulations, you now have Spotify and iterm2 installed!

One small improvement to make before we move on is to extract the variables that are specific to homebrew into their own var file. While it might seem silly now, sooner or later we might be using many roles that utilize many different variables, and mixing them all together will lead to a lot of confusion. I personally like to name the variable files after the role they're used for, as illustrated below.
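
For example (the file name and layout are just my preference), the homebrew variables could move into vars/homebrew.yaml:

---
# vars/homebrew.yaml
homebrew_installed_packages:
  - macvim
homebrew_cask_apps:
  - spotify
  - iterm2

main.yml then pulls them in with vars_files:

---
- hosts: localhost
  connection: local
  vars_files:
    - vars/homebrew.yaml
  roles:
    - geerlingguy.homebrew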

Managing OSX Settings

You can do a lot of tweaking to how OS X behaves by using the osx_defaults module to manage OS X defaults. There are a lot of opportunities here, but I'll just leave a quick and dirty screensaver-related example below.
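
Here's a sketch of the kind of thing I mean, setting the screensaver's idle timeout; the domain and key are standard OS X preference names, but treat this as an illustration rather than a drop-in:

    - name: Kick in the screensaver after ten minutes of idle time
      osx_defaults:
        domain: com.apple.screensaver
        host: currentHost
        key: idleTime
        type: int
        value: 600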

You could possibly go as far as using this to manage settings for the various applications you have installed, and maybe even set registration keys for those applications. I haven't gotten to that point yet myself, so I'm not covering it here.

Further Reading

Well, I hope this was good for you… it was good for me, and it helped me flesh out some of my current setup. I'm still learning how best to utilize Ansible to manage my development environment, so there's definitely more to learn, and I'll continue to share as time progresses. I'm also aware of a few other projects that aim to make managing development environments with Ansible easier; one I've been looking at is Battleschool.

You can find the completed work for this blog post on github at jamescarr/ansible-mac-demo.

Categories: Blogs

Sprint Day Checklist

Agile Game Development - Tue, 03/29/2016 - 18:05


I'm often asked how other teams organize their sprints. Below is an example calendar and checklist that one team used to review, retrospect, and plan their sprints effectively.
Categories: Blogs

Doing a retrospective when you can’t get the team to meet?

Ben Linders - Tue, 03/29/2016 - 16:12
Over the years I've had some situations where I couldn't get a team to come together in a retrospective to meet and reflect on the sprint. I've seen two different kinds of reasons for this. One was that a team wasn't convinced that agile was suitable for them, so they questioned doing agile retrospectives. The other situation was that the team believed in agile and Scrum and wanted to do a retrospective, but was looking for an alternative solution where they would not have to meet physically – something a dispersed team would also consider when travelling isn't an option. Let's explore how you can recognize these situations and deal with them. Continue reading →
Categories: Blogs

Certified LeSS Practitioner with Craig Larman

Learn more about transforming people, process and culture with the Real Agility Program

In just a few weeks we will be hosting Craig Larman here in Toronto as he facilitates the first-ever-in-Canada Certified Large Scale Scrum Practitioner training!  Large Scale Scrum (LeSS) is about de-scaling.  In simple terms, this is about using Scrum to make the best possible use of the creativity, problem-solving and innovation abilities of large numbers of people, rather than getting them stuck in bureaucracy and management overhead.

Here are the details of this unique learning event:

  • Date and Time: April 11-13 (3 Days), 2016 – 9am to 6pm all three days
  • Location: Courtyard by Marriott Downtown Toronto, 475 Yonge St. Phone: 416-924-0611
  • Price: $3990.00 / person (that’s in Canadian Dollars – super great deal if you are coming from the US!)

Check out the full agenda and register here.

Here are some quotes from previous attendees:

“It was inspiring to discuss Large-Scale Scrum with Craig Larman. The content of the course was top-notch.” – Steve Alexander

“The delivery was outstanding and the supporting material vast and detailed.” – Simone Zecchi

“The best course I have ever been on. Totally blown away.” – Simon Powers

Toronto is a great place to visit (I know many of our Dear Readers are from the United States) – don't hesitate to consider coming in for a weekend as well as the course!

Register now! (Goes to our BERTEIG / World Mindware learning event registration site.)

Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Certified LeSS Practitioner with Craig Larman appeared first on Agile Advice.

Categories: Blogs

Pattern: Testable Screens

Matteo Vaccari - Tue, 03/29/2016 - 13:44

When you are developing a complex application, be it web, mobile or whatever, it’s useful to be able to launch any screen immediately and independently from the rest of the system. By “screen” I mean a web page, an Android activity, a Swing component, or whatever it is called in the UI technology that you are using. For instance, in an ecommerce application, I would like to be able to immediately show the “thank you for your purchase” page, without going through logging in, adding an item to the cart and paying.

The benefits of this simple idea are many:

  1. You can easily demo user stories that are related to that screen
  2. You can quickly test UI changes
  3. You can debug things related to that page
  4. You can spike variations
  5. The design of the screen is cleaner and less expensive to maintain.

Unfortunately, teams are often not able to do this, because screens are tightly coupled to the rest of the application. For instance, in Javascript single-page applications, it would be good to be able to launch a view without having to start a server. Often this is not possible, because the view is tightly coupled to the Ajax code that gets the data from the server, that the view needs to function.

The way out of this problem is to decouple the screen from its data sources. In a web application, I would launch a screen by going to a debug page that allows me to set up some test data, and then launch the page. For instance:

[Screenshot: a debug page with a form, pre-populated with test data, that launches the screen under development]

Note that the form starts pre-populated with default data, so that I can launch the desired screen with a single click.

Making screens decoupled from their data sources does, in my opinion, generally improve the design of the application. Making things more testable has a general positive impact on quality.

Categories: Blogs

How To Get Valuable Time Back: Part 1

Leading Agile - Mike Cottmeyer - Tue, 03/29/2016 - 12:30

Recently, I've been swamped with meetings. I'm not talking Portfolio Planning, Release Planning, or even Sprint Planning meetings. I'm talking a lot of in-the-weeds type meetings. After I walk out of some, I realize I could have been informed of the outcomes and action items, and that would have been good enough. I didn't need to sit through the whole damn thing. There are times when everyone walks out an hour later, looking around and asking how to get that valuable time back. It got me thinking: I need to write about this! Then, as I started writing, I realized that this was either going to be a seriously long long-form blog post or I was going to have to write a few parts. Being the bloggy-blog type, I vote for short form and a series.

The Scenario

You arrive at the office at 8am on a Monday, only to realize you are late for a meeting that someone scheduled on Friday after 5pm. You're not in the office 5 minutes and you're already behind schedule. What the hell!? How does this happen? You look at your calendar. You have back-to-back-to-back meetings all day Monday and Tuesday. When are you supposed to actually do your work? Given the current conditions, you're going to need to catch up on things before or after work. This sucks!

The Problem

You have become a meeting hoarder. That's right. At any moment, A&E is going to show up at the office and start filming an episode about you. In this episode, they follow you around the office. They confront you about your addiction to accepting too many meeting invites. Of course this is ridiculous, but you really do need some practical strategies to deal with this problem and get back on track.

Meetings are supposed to be about the exchange of information. Unfortunately, they are wildly inefficient and offer limited value. For the most part, they are a waste of our time. Nobody wants to listen to you go on and on about how many meetings you have, now that you're becoming a bottleneck in getting things done.

To start, I’m going to bucket meetings into 3 categories.

  1. Non value added but it is necessary.
  2. Non value added but it is NOT necessary.
  3. Value added.

I see very few meetings that offer direct value to the customer. Most meetings are non value added, but we don't have a sufficient method to exchange the information, so we settle for the meeting. It's necessary.

Going forward, assume most meetings don’t add value and you should make them prove their worth to you.

The Solutions

In this post, I'm going to give you a strategy to begin controlling the volume of meeting invitations coming into your calendar. First, stop accepting invites for meetings that are less than a full day away. If someone sends you an invite at 5pm on Monday for a meeting at 9am Tuesday, they are being disrespectful of your time.

Set Limits

You may have a standard eight hour work day but the reality is that only half of that day is likely to be productive.  With that assumption, you should guarantee you have 4 hours of productivity. If you don’t, your day will be taken up with meetings, responding to email, browsing the Internet and related activities.  Block out 4 hours a day on your calendar for actual work. Make the events private.

Tip: Schedule your most important, high-value tasks in the morning, before you get worn out from your current meetings.

Turn On Your Email Auto-Responder

Until you can get your meeting addiction under control, I recommend you begin using your email autoresponder. I actually did this several years back, after reading The 4-Hour Workweek, with very good results. When someone sends you an email or meeting invite, they automatically get an email from you (with the assumption that you have NOT read their invite). This will buy you time to focus on real work and not just respond impulsively to the request.

Let's look at a basic template:

Greetings,

Due to a high workload and too many meeting invites, I am currently checking and responding to email twice daily, at 12:00 P.M. and 4:00 P.M. If you require urgent assistance (please ensure it is urgent) that cannot wait until either 12:00 P.M. or 4:00 P.M., please contact me via phone at 555-876-5309.

All meeting invites will require 24 hour notice. Though I appreciate the invitation, sending me a meeting invite does not mean I will be accepting your invitation.

Thank you for understanding this move to more efficiency and effectiveness. It helps me accomplish more to serve you better.

Sincerely,
[Your name]

Conclusion

I can guarantee this is going to help, at least a little.  The more we can slow down the influx of meetings, the more we can assess the value of them and decide if we really need to accept them or not.  The autoresponder will put people on notice and inform them that your time is valuable but that you’re not being unreasonable.  If this gets you out of 1 meeting, won’t it be worth it?  I know it will do better than that.  Try it and let me know your results.

In my next post, I’ll write about how to triage your meeting requests, so you can begin spending more time doing real work and less going to meetings.

The post How To Get Valuable Time Back: Part 1 appeared first on LeadingAgile.

Categories: Blogs

Bureaucratic tests

Matteo Vaccari - Mon, 03/28/2016 - 18:00

The TDD cycle should be fast! We should be able to repeat the red-green-refactor cycle every few minutes. This means that we should work in very small steps. Kent Beck in fact is always talking about “baby steps.” So we should learn how to make progress towards our goal in very small steps, each one taking us a little bit further. Great! How do we do that?

Example 1: Testing that “it’s an object”

In the quest for “small steps”, I sometimes see recommendations that we write things like these:

it("should be an object", function() {
  assertThat(typeof chat.userController === 'object')
});

which, of course, we can pass by writing

chat.userController = {}

What is the next “baby step”?

it("should be a function", function() {
  assertThat(typeof chat.userController.login === 'function')
});

And, again, it’s very easy to make this pass.

chat.userController = { login: function() {} }

I think these are not the right kind of “baby steps”. These tests give us very little value.

Where is the value in a test? In my view, a test gives you two kinds of value:

  1. Verification value, where I get assurance that the code does what I expect. This is the tester’s perspective.
  2. Design feedback, where I get information on the quality of my design. And this is the programmer's perspective.

I think that in the previous two tests, we didn't get any verification value, as all we were checking was the behaviour of the typeof operator. And we didn't get any design feedback either. We checked that we have an object with a method; this does not mean much, because any problem can be solved with objects and methods. It's a bit like judging a book by checking that it contains written words. What matters is what the words mean. In the case of software, what matters is what the objects do.

Example 2: Testing UI structure

Another example: there are tutorials that suggest that we test an Android’s app UI with tests like this one:

public void testMessageGravity() throws Exception {
  TextView myMessage = 
    (TextView) getActivity().findViewById(R.id.myMessage);
  assertEquals(Gravity.CENTER, myMessage.getGravity());
}

Which, of course, can be made to pass by adding one line to a UI XML file:

<TextView
  android:id="@+id/myMessage"
  android:gravity="center"
/>

What have we learned from this test? Not much, I’m afraid.

Example 3: Testing a listener

This last example is sometimes seen in GUI/MVC code. We are developing a screen of some sort, and we try to make progress towards the goal of “when I click this button, something interesting happens.” So we write something like this:

@Test
public void buttonShouldBeConnectedToAction() {
    assertEquals(1, button.getActionListeners().length);
    assertTrue(button.getActionListeners()[0] 
                 instanceof ActionThatDoesSomething);
}

Once again, this test does not give us much value.

Bureaucracy

The above tests are all examples of what Keith Braithwaite calls “pseudo-TDD”:

  1. Think of a solution
  2. Imagine a bunch of classes and functions that you just know you’ll need to implement (1)
  3. Write some tests that assert the existence of (2)
  4. [… go read Keith’s article for the rest of his thoughts on the subject.]

In all of the above examples, we start by thinking of a line of production code that we want to write. Then we write a test that asserts that that line of code exists. This test does nothing but give us permission to write that line of code: it’s just bureaucracy!

Then we write the line of code, and the test passes. What have we accomplished? A false sense of progress; a false sense of “doing the right thing”. In the end, all we did was waste time.

Sometimes I hear developers claim that they took longer to finish, because they had to write the tests. To me, this is nonsense: I write tests to go faster, not slower. Writing useless tests slows me down. If I feel that testing makes me slower, I should probably reconsider how I write those tests: I’m probably writing bureaucratic tests.

Valuable tests

Bureaucratic tests are about testing a bit of solution (that is, a bit of the implementation of a solution). Valuable tests are about solving a little bit of the problem. Bureaucratic tests are usually testing structure; valuable tests are always about testing behaviour. The right way to do baby steps is to break down the problem into small bits (not the solution). If you want to do useful baby steps, start by writing a list of all the tests that you think you will need.

In Test-Driven Development: by Example, Kent Beck attacks the problem of implementing multi-currency money starting with this to-do list:

$5 + 10 CHF = $10 if rate is 2:1
$5 * 2 = $10

Note that these tests are nothing but small slices of the problem. In the course of developing the solution, many more tests are added to the list.

Now you are probably wondering what I would do instead of the bureaucratic tests that I presented above. In each case, I would start with a simple example of what the software should do. What are the responsibilities of the userController? Start there. For instance:

it("logs in an existing user", function() {
  var user = { nickname: "pippo", password: "s3cr3t" }
  chat.userController.addUser(user)

  expect(chat.userController.login("pippo", "s3cr3t")).toBe(user)
});

In the case of the Android UI, I would probably test it by looking at it; the looks of the UI have no behaviour that I can test with logic. My test passes when the UI “looks OK”, and that I can only test by looking at it (see also Robert Martin’s opinion on when not to TDD). I suppose that some of it can be automated with snapshot testing, which is a variant of the “golden master” technique.

In the case of the GUI button listener, I would not test it directly. I would probably write an end-to-end test that proves that when I click the button, something interesting happens. I would probably also have more focused tests on the behaviour that is being invoked by the listener.

Conclusions

Breaking down a problem into baby steps means that we break in very small pieces the problem to solve, not the solution. Our tests should always speak about bits of the problem; that is, about things that the customer actually asked for. Sometimes we need to start by solving an arbitrarily simplified version of the original problem, like Kent Beck and Bill Wake do in this article I found enlightening; but it’s always about testing the problem, not the solution!

Categories: Blogs

Personal Empowerment All-Stars: Jack Canfield, Ken Blanchard, and Stephen Covey at Microsoft

J.D. Meier's Blog - Mon, 03/28/2016 - 17:44

“You can, you should, and if you’re brave enough to start, you will.”  — Stephen King

One of the best things at Microsoft is the chance to meet extraordinary people.

Jack Canfield, Ken Blanchard, and Stephen Covey are a few that top my list.

They are personal empowerment all-stars.

As I was re-writing my posts on lessons learned from Jack Canfield, Ken Blanchard, and Stephen Covey, I noticed what they share in common.

What do Jack Canfield, Ken Blanchard and Stephen Covey have in common?

Their work has a heavy emphasis on personal-empowerment, positivity, and people.

I thought it would be interesting to write a narrative about lessons learned from each, to supplement my bullet point write ups.

Here we go …

Jack Canfield at Microsoft

Jack Canfield is all about taking full responsibility for everything that happens in your life.  And he starts with self-talk.  He says it’s not what people say or do, it’s what you say to yourself.  For example, it’s not what Jack says to Laura, it’s what Laura says to Laura.

From a personal empowerment standpoint, Jack reminds us that we have control over three responses: 1) what we say or do, 2) our thoughts, 3) the images in our head.  Jack is a big believer in the power of visualization and he reminds us that’s how athletes perform at greater levels — they see things in their minds, to guide what they can do with their bodies.

Jack shares a very simple formula for success.  Jack’s success formula is Event + Response = Outcome.  If you want to change the outcome, then change your response.  It sounds simple, but it’s empowering.

Jack Canfield also reminded us that we are the creative force in our life and to get out of victimism:

“You are not the victim of your circumstances–You are the creative force of your life.”

Grow your circle of influence and make tremendous impact.

Read more at Lessons Learned from Jack Canfield.

Ken Blanchard at Microsoft

Ken Blanchard is really about accentuating the positive.  So much of the world focuses on what’s wrong, but he wants to focus on what’s right, so we can do more of that.

Ken has an incremental model of leadership that starts with you and expands from there: you, your team, your organization.  The idea is that you can’t lead others effectively, if you can’t even lead yourself.

Ken’s model for leadership is really an adaptive model, that’s focused on the greater good, and it starts by helping everybody get an “A.”  Leaders that apply one style to all team members, aren’t very effective.  Ken suggests that leaders apply the right styles depending on what individuals need.  Ken’s 4 leadership styles are:

  1. Directive
  2. Coaching
  3. Supportive
  4. Delegating.

Perhaps the most profound statement that Ken made is that “leadership is love.”  He said that leadership includes “loving your mission”, “loving your customers”, “loving your people”, and “loving yourself — enough to get out of the way so others can be magnificent.”

Read more at Lessons Learned from Ken Blanchard.

Stephen Covey at Microsoft

Stephen Covey was really about personal effectiveness, realizing your potential, and leaving a legacy.

Covey really emphasized a whole-person approach: Body, Mind, Heart, Spirit.  His point was that if you take one of the four parts of your nature away, then you’re treating a person like a “thing” you control and manage.

Covey also emphasized the importance of a personal mission.  It gives meaning to your work and it helps you channel all of your efforts as you live and lead your legacy.  He also suggested writing your personal mission down and visualizing it to imprint it on your subconscious.

The other key to realizing your potential is finding your voice.  Use all of you, your best way, in your unique way, for your best results.  That’s how you differentiate and add value for yourself and others.

And, of course, Stephen Covey reminded us of the 7 Habits of Highly Effective People:

  1. Be proactive.
  2. Begin with the end in mind.
  3. Put first things first.
  4. Think win-win.
  5. Seek first to understand, then to be understood.
  6. Synergize.
  7. Sharpen the saw.

Habits 1, 2, and 3 are the foundation for private victories and integrity.  Habits 4, 5, and 6 are the keys to public victories.

Read more at Lessons Learned from Stephen Covey.

All-in-all, I have to say that while individually each of these personal empowerment all-stars has great wisdom and insight for personal effectiveness, leadership, and success, they are actually “better together.”

Each day in the halls of Microsoft, I find myself reflecting on their one-liner reminders, whether it’s Covey’s “Put first things first,” or Canfield’s “You are the creative force of your life”, or Blanchard’s “None of us is as smart as all of us.”

You Might Also Like

How Tos for Personal Effectiveness at a Glance

Personal Effectiveness at Microsoft

Personal Effectiveness Toolbox

Categories: Blogs

It Just Got Easier to Get Important SAFe Content Updates

Agile Product Owner - Mon, 03/28/2016 - 17:23

Hi Folks,

There is a continuous flow of SAFe information on the net. It comes fast, and in all forms: news, case studies, opinions, articles, videos, and more. We capture what we can on the SAFe blog, so that you can stay up to date.

But there is a lot of information there, so we've now made it easier for you to find what's most important via the addition of a new “SAFe updates” feature on the Framework homepage. You'll see it at the top of the screen, just below the main menu. It allows you to quickly scan what we consider to be essential reading for SAFe practitioners engaged in a SAFe implementation. This is where new content under development, pointers to new guidance articles, and more will appear, and each post is open for comments so that we can get your feedback. This is how we develop SAFe, in full public view. We need your input to keep SAFe safe.

The rest of SAFe news will continue as usual—see how it’s organized in the diagram pictured here—but for those of you who are implementing SAFe, you’ll want to keep an eye on the “SAFe updates” section so you can stay informed about the latest developments. And if you add value via comments, we’ll hope to capture that too.

Stay SAFe!

–Dean

 

Categories: Blogs

Dealing with Dead Letters and Poison Messages in RabbitMQ

Derick Bailey - new ThoughtStream - Mon, 03/28/2016 - 13:30

A question was asked on StackOverflow about handling dead letters in RabbitMQ. The core of the question is centered around how to handle what are known as “poison messages” – messages that are problematic and cannot be processed for some reason.

The person asking wants to know how to deal with these bad messages, when they have been sent to a dead-letter queue. The queue stacks up with messages while the code to handle them is fixed – but what happens when the fix is ready? How do you re-process the messages through the right queue and consumer?

The Original Question

TL;DR: I need to “replay” dead letter messages back into their original queues once I’ve fixed the consumer code that was originally causing the messages to be rejected.

I have configured the Dead Letter Exchange (DLX) for RabbitMQ and am successfully routing rejected messages to a dead letter queue. But now I want to look at the messages in the dead letter queue and try to decide what to do with each of them. Some (many?) of these messages should be replayed (requeued) to their original queues (available in the “x-death” headers) once the offending consumer code has been fixed. But how do I actually go about doing this? Should I write a one-off program that reads messages from the dead letter queue and allows me to specify a target queue to send them to? And what about searching the dead letter queue? What if I know that a message (let’s say which is encoded in JSON) has a certain attribute that I want to search for and replay? For example, I fix a defect which I know will allow message with PacketId: 1234 to successfully process now. I could also write a one-off program for this I suppose.

I certainly can’t be the first one to encounter these problems and I’m wondering if anyone else has already solved them. It seems like there should be some sort of Swiss Army Knife for this sort of thing. I did a pretty extensive search on Google and Stack Overflow but didn’t really come up with much. The closest thing I could find were shovels but that doesn’t really seem like the right tool for the job.

There’s a lot of text in the question, but it can be boiled down to a few simple things – summed up in the question’s own “TL;DR” at the start.

The TL;DR Answer

In the middle of the second paragraph, this person asks,

Should I write a one-off program that reads messages from the dead letter queue and allows me to specify a target queue to send them to?

Generally speaking, yes.

There are other options, depending on the exact scenario. However, there is no built-in poison message or dead-letter handling strategy in RabbitMQ. It provides the mechanism to recognize and isolate poison messages via dead-letter exchanges, but it does not give guidance or solutions for handling the messages once they are in the dead letter queue.
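
For reference, the wiring itself is small. Here is a minimal sketch using the Node.js amqplib client, with made-up exchange and queue names: a work queue that dead-letters into an exchange, a queue bound to that exchange to collect the rejects, and a consumer that rejects poison messages without requeueing them – which is what sends them to the dead letter exchange.

var amqp = require("amqplib");

// placeholder for the real work; throwing here simulates a poison message
function handle(body) {
  if (!body.id) { throw new Error("message is missing its id"); }
}

amqp.connect("amqp://localhost").then(function(conn) {
  return conn.createChannel().then(function(ch) {
    return Promise.all([
      // everything rejected from the work queue gets routed here
      ch.assertExchange("work.dlx", "fanout", { durable: true }),
      ch.assertQueue("work", {
        durable: true,
        arguments: { "x-dead-letter-exchange": "work.dlx" }
      }),
      ch.assertQueue("work.dead-letters", { durable: true }),
      ch.bindQueue("work.dead-letters", "work.dlx", "")
    ]).then(function() {
      return ch.consume("work", function(msg) {
        try {
          handle(JSON.parse(msg.content.toString()));
          ch.ack(msg);
        } catch (err) {
          // requeue = false, so RabbitMQ routes the message to the DLX
          ch.nack(msg, false, false);
        }
      });
    });
  });
});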

One Solution: Automatic Retry

Sometimes a message is dead lettered not because of your code or a problem with your system. For example, an external system may be down or unreachable. In cases like this, an automatic retry of a message can be a useful solution for a dead letter.

The best way to achieve this will vary with your needs, but a common solution is to set up a delay for the retry. Using the delayed message exchange plugin as your dead letter exchange will allow you to put a delay on the message, in the hope that the external service will be back up by the time the retry happens.

But, this would only automate the retries on an interval and you may not have fixed the problem before the retries happen.

The Needs Of The System

In the case of the question being asked, creating an app to handle the dead letters is likely the best way to go, for several reasons.

  • The need to examine messages individually and decide what to do with them
  • The need to find specific messages in a queue – which isn’t possible in RabbitMQ, directly
  • The need to wait for an unknown period of time, before retrying, while things are fixed

Each of these requirements is individually not difficult to meet – but doing them from RabbitMQ, in an automated manner, may be difficult if not impossible. At least, it may not be possible all the time.

Examine Messages Individually

One of the stated needs is to look at messages in the dead letter queue and individually determine what should be done with them. There may be a lot of reasons for this – finding messages that have bad or missing data, identifying duplicates that can be thrown away, sending messages off for analysis to see what can be fixed or be done, etc. 

It’s possible to use a standard consumer for some of this work, provided the examination can be automated. 

For example, say you have previously fixed a bug and are dealing with a lot of messages that need to be re-processed. It may make sense to write a small app that reads the dead letters from the queue and re-publishes them to the original queue, assuming they meet some criteria. Messages that don't meet the criteria could be sent to another queue for manual intervention (which would require its own app to read and work with the messages).

If you can automate the “examine messages individually” process, you should. But this is likely not possible early in a project’s life as the ability to do this typically means problems have already been solved and the solution can be automated. When you’re early in the life of an app, you’ll likely run into problems that don’t have a solution, yet, and you’ll need to manually sort through the messages to deal with them.

Find Message By Some Attribute

RabbitMQ is a true queue – first in, first out. You don't get random access, seek, or search in its queues. You get the next message in the queue, and that's all you'll ever get. Because of this, features like searching messages to look for certain ones while ignoring others are not really possible.

This means you’ll need a database to store the messages, and you’ll need to pull the messages out of the queue so they can be searched and matched correctly. Because you’re pulling the messages out of the queue, you’ll need to ensure you get all the header info from the message so that you can re-publish to the correct queue when the time comes.

But the time to re-publish messages is not always known.

Re-Publish At An Unknown Future Time

It may be ok to leave the messages stuck in the queue while you fix the problem. I’ve done this numerous times and with great success. Fix the problem, republish the code and the queue starts draining again. 

But this only works if you are dealing with a single message type in your queue. 

If you are dead-lettering multiple message types or messages from multiple consumer types, and sending them all to the same dead letter queue, you’re going to need a more intelligent way to handle the messages. 

Having an app that can read the dead letters and re-publish them when a user says to republish, will be necessary in some scenarios.

Not Inventing Anything

In the original question, a statement of this not being a unique problem appears:

I certainly can’t be the first one to encounter these problems and I’m wondering if anyone else has already solved them.

And certainly this person is not the first to have these problems!

There are many solutions to this problem, with only a few possible options listed here. There are even some off-the-shelf software solutions that have strategies and solutions for these problems baked in. I think NServiceBus (or the SaaS version of it) has this built in, and MassTransit has features for this as well.

There are certainly other people that have run into this, and have solved it in a myriad of ways. The availability of these features comes down to the library and language you are using, though.

Because of this, it often comes down to writing code to handle the dead letters. That code may require a database so you can search and select specific messages, get rid of some, etc. 

Whatever your language and library, though, you should automate the solution to a given problem once you have one. Until you do, you'll likely need to build an app to handle your dead letter queue appropriately.

Categories: Blogs

Testing the Agile MBA

Agile Tools - Mon, 03/28/2016 - 07:12

“Research is formalized curiosity. It is poking and prying with a purpose.”

-Zora Neale Hurston

So as I thought about my earlier post on the idea of an Agile MBA, I realized that there is a whole lot that goes into putting together something like that. So before heading down that path, a guy might be well advised to check and see if there is any real interest in the idea before wasting a lot of energy on pursuing it further. So with that thought in mind, I created openagilemba.com.

The idea is simple, taken from the lean startup world. If you have an idea, put it out there and test whether or not there is a market for it. So I’m doing exactly that. Go check it out. I named it the “Open” Agile MBA because I’m not trying to sell anyone anything. What I have in mind is more of an open source MBA model. If we can assemble the resources, then anyone can use them. That’s kind of an exciting idea. It’s not new, there’s a NoPay MBA out there that is really cool and does something similar for a standard MBA.

So I'm starting with small, agile steps: simply put up a web page and ask people if they are interested. If I get a few responses (feedback!), then I pursue it further; if it's just crickets, then perhaps I tweak the idea and try again. I can't wait to find out!


Filed under: Agile, Lean, MBA Tagged: AgileMBA, Lean Startup, Open Source, Testing
Categories: Blogs