
Feed aggregator

Why is it Called an Agile Transformation?

Agile Management Blog - VersionOne - Thu, 09/25/2014 - 13:56

Guest post by Charlie Rudd of SolutionsIQ

We’re not in Kansas anymore.

Most organizational change initiatives are not called transformations. So how come agile change initiatives are? To answer this question, let’s first review some examples of organizational change as well as the impact they have on the people in the organization.

 

[Table: examples of organizational change and their impact on the people in the organization, ordered by increasing magnitude of change]

 

Note in the above table how the magnitude of change deepens as you move from the top to the bottom of the table. At the top of the table, the change has little impact on the day-to-day work of most individuals. However, at the bottom, the impact of change can be quite profound. For example, changing the corporate culture uproots deeply embedded behavior and necessarily changes long-standing beliefs and assumptions.

Working with this principle, we can identify three levels of organization change magnitude:

Superficial org change

This type of org change has little impact on your day-to-day responsibilities and activities. Many corporate re-orgs are actually superficial. For example, your company may hire a new VP, but that doesn’t necessarily affect the project you’re working on, the work you perform on it, or the people you work with.

Significant org change

This type of org change has a greater impact on you. Your job or your work environment is affected, but usually not your role or job description. For example, some work procedures may change, and you may receive training or new software tools as a result. You may even have to join a new team, or someone new may join your team. Perhaps you have to move to an office in a different building. Whatever the case, when your organization undergoes a significant org change, you are definitely affected even if the effect is rather minimal.

Transformational org change

Often, after a transformational org change, you have a new role that you have never performed before. Your job responsibilities change radically. So do your supervisors and their responsibilities. You have to approach your work in an entirely novel way, learning completely new skills, and often a new vocabulary. The unwritten rules that used to guide how you act and what you do no longer apply. Your work world is completely new. It’s surprising. It’s refreshing. It’s strange.

Let’s now consider where agile transformation fits within this org change magnitude framework:

Agile org change

In an agile transformation initiative, some of the change is superficial – some more significant – but when successful, the bulk is transformational. Learning new practices and tools is necessary, but not sufficient, for lasting success. Change must go deeper, in many cases down to the roots of corporate values and what had been unquestioned management principles. It must encompass a new way of thinking about work and your place in this new world. Long-standing assumptions and unwritten rules go out the window as you discover new ways to work together with your colleagues, every day. This is why it’s called an agile transformation:  you, your job, your responsibilities, your management, and your organization are transformed into something new.

So are we in Kansas anymore? In a word:  nope. We’re not in Kansas. Agile transforms your world. It’s surprising. It’s refreshing. It’s strange. Whatever you think agile may be, there’s one thing it’s not:  business as usual.

Categories: Companies

So You Want to Scale Your Agile?

Agile Management Blog - VersionOne - Thu, 09/25/2014 - 13:45

Guest post by AgileBill Krebs of Agile Dimensions

Some teams – such as sustaining engineering teams – are fine with Kanban. This method shows work on a taskboard as it progresses from waiting, to doing, to done, with an eye toward cycle time and focusing on a few things at once.

Some teams are fine with traditional Project Management Body of Knowledge (PMBOK – from the PMI) – as in special client engagements.  This covers key project aspects such as cost, procurement, communications, and may use a Gantt chart to plan jobs of known but varying sizes.  Unlike most of our agile projects, this will assume there is little likelihood of change.

Some teams are fine with Scrum development.  Two-week iterations and sprints give vital visibility into progress, and also windows for change.  Teams learn empirically and use these iteration windows to tune and adjust their product and procedures.  Extreme Programming (XP) plans in a similar fashion, but adds some technical practice to build quality in.

These teams often have nine or fewer people, with three to six months for work.  But what if the project or product you deliver has 40 people?  100 people? 300 people?

Projects like these would require scaling up your agile.  Authors such as Jutta Eckstein, Scott Ambler, Kenny Rubin, and Dean Leffingwell have all described ways to do this.

Using such concepts in practice with dozens of teams has revealed six key areas:

1)  Roll-out – do management and the teams know that agile at scale is different?  Are they committed to doing it, and have they had some training on multi-team agile?   Do you have someone prepared to program-manage a team of teams?

2)  Common language – one team can use whatever terms they wish.  But when a team of teams (aka Scrum of Scrums) forms, it quickly becomes a mess where some say ‘Feature’ and some say ‘Epic.’  Pick a scaled agile methodology and vocabulary, and make that your organization’s common language.

3)  Features – it helps to organize larger projects around the ‘parent’ of a Story.  Let’s call this a ‘Feature’ (like Dean Leffingwell’s Scaled Agile Framework® or SAFe™).  While we still use small Stories, it becomes more important to plan ahead using larger Features so other teams can know when to depend on our output.  This also helps stakeholders gain a picture of our product even as the number of Stories multiplies as the team grows in size.

4)  Align the business – with larger projects, it becomes even more important for business-facing parts of your organization to connect with your scaled agile process. Planning at higher levels such as portfolio (multiple products) and program (multiple teams) becomes more integral to how we work.  But working with our market experts is harder if they do not understand our terminology and approaches to delivering at scale.  Other areas such as accounting and your project management office may need to understand your scaled approach as well.

5)  Kick off each release – spend a little more time before each release to look for dependencies between high-level features.  Let teams know what’s coming, and have them discuss how they will deliver together.

6)  More than a dozen teams – it makes sense to forge four or even ten teams into a group that delivers a product.  But if your product requires more than a dozen teams, it will be big enough to demand another sub-organization with its own program manager.

Agile development has been used in many projects, large and small.  Keeping these six factors in mind will help with your enterprise-strength projects. Taskboards may be sufficient with smaller projects, but larger projects benefit from tools designed to see things at more levels – portfolio, program, and team.

What steps can your project take to work smoothly between teams? 

“AgileBill” Krebs has over 20 years of experience in programming, performance, project management, and training in the IT industry at 5 IBM labs, banking, telecom, and healthcare.  He has used agile since 2001, and taught it to over 2,000 people worldwide. He has presented at agile conferences, IBM Research, and conferences on education. Bill’s certifications and groups include SPC, PMI-ACP, CSP, ICE-AC, and others.  He hosts the Distributed Agile Study Group and is a member of ALN, the Scrum and Agile Alliances, ACM, AERA, PMI, and more. He is pursuing a graduate degree in education technology, while working as an enterprise agile coach in the healthcare IT industry.

Scaled Agile Framework is a registered trademark and SAFe is a trademark of Scaled Agile, Inc.

 

Categories: Companies

WIP and Priorities – how to get fast and focused!

Henrik Kniberg's blog - Thu, 09/25/2014 - 12:23

Many common organizational problems can be traced back to management of Priorities and WIP (work in progress). Doing this well at all levels in an organization can make a huge difference! I’ve experimented quite a lot with this; here are some practical guidelines:

WIP = Work In Progress = stuff that we have started and not yet finished, stuff that takes up our bandwidth, blocks up resources, etc. Even things that are blocked or waiting are WIP.

Visualize and limit WIP
  • WIP is what it is, regardless of priorities (priorities are what we should be focusing on, WIP is what we are actually doing. They sometimes match…)
  • It’s almost always a good idea to visualize WIP, whether it is at a team level or company level or whatever. Invisible WIP is hard to discuss, hard to challenge, and hard to limit. And hence, invisible WIP tends to grow and slow us down.
  • Make WIP in-your-face-Visible! Ideally on the wall, since people will see it (and hence discuss it) without having to actually find and open a document. If it’s not on the wall, it tends to be ignored or forgotten. Analog visualization (stickynotes etc) usually works best, but showing a digital visualization on a screen works too.
  • Use a “noise threshold” to avoid micromanagement. Avoid cluttering the visualization with hundreds of tiny things. For example, the noise threshold for a team-level visualization could be “if it’s less than 2 hours of work, just do it, don’t bother putting up a sticky”. At a department level the noise threshold could be “things that involve more than one team for more than a week”. Items smaller than that are allowed to “fly under the radar” (which has the added bonus of providing an incentive to break work down into small chunks).
  • If there’s lots of noise, aggregate and visualize the noise (so that it too can be discussed/challenged/limited). For example “# of currently open Jira tickets” instead of displaying each individual ticket.
  • The goal is to visualize all significant WIP that is burdening this team or department or project or whatever, and to do it in a way that doesn’t involve managing hundreds of individual notes on a board.
  • It’s almost always a good idea to define WIP limits. The WIP limit is just “how much stuff can we have in progress before we start getting bogged down with multitasking and coordination overhead”. Start high and gradually reduce it, it’s an awesome way to drive out waste! This is one of the key principles in Kanban.
  • Current WIP will sometimes be higher than the WIP limit! That’s OK. It’s an alerting system. Current WIP is simply our current reality, while WIP limit is our desired reality. Visualizing both makes the problem explicit. As long as we’re over the limit, our main focus is finishing things (or canceling them) and we are super-restrictive about starting new things.

Here’s an interesting study that shows how WIP limits dramatically benefit quality and speed: “The Impact of Agile – Quantified”.

Priorities are something different.

Cascading priorities

  • Priorities are guidelines to help us decide what new WIP to take on (when our WIP limits allow it), and what things to reject or postpone. Without clear priorities, we risk unaligned autonomy and suboptimization.
  • Priorities also help us resolve resource conflicts within our current WIP (“Joe is needed for both of these tickets, which should he focus on first?” or “Let’s help Team X first, Team Y’s stuff is lower prio”)
  • Priorities should be limited, too! Something like 1-3 items is usually sufficient. Because if everything is important, nothing is important! And if the list is too long, no one will read it or remember it.
  • Priorities can be fluffy (“our current priorities are 1) repay technical debt, and 2) improve the backoffice UX”.)
  • …. or specific (“our current priorities are 1) Deliver feature X, and 2) Prototype feature Y, and 3) Install the new build tool”)
  • Priorities may correspond directly to the WIP items (“these 3 WIP items are top priority”, or “the WIP items on the board are stack-ranked in priority order”). But they don’t have to be that specific.
  • The test for useful priorities is:
    • 1) “This list of priorities helps us decide what to do today, and what NOT to do today!”
    • 2) “This list of priorities is so short and clear that everyone involved knows it by heart!”.
    • 3) “We all understand why these priorities make sense for the company”
  • Priorities are not exhaustive. We may have lots of things going on that are not directly related to our top 3 priorities. That’s OK, but:
    • Non-priority work should by default not conflict with priority work. There are of course exceptions (“the server just crashed, bring it up again NOW!”), use common sense and talk about it.
  • Guiding principle:
    • 1) Can you contribute to a high-prio item today (directly or indirectly)? If so, do it! Not sure? Ask!
    • 2) If you can’t contribute to a high-prio item, then work on something lower prio, but don’t let it conflict with people working on high prio work.
    • 3) Be explicit about your choice and why.
  • Priorities are cascading (or hierarchical, if you prefer; I just tried to find a less ominous-sounding word)
    • Your team has priorities. So does your department. So does the whole company. Higher level priorities trump lower level priorities, and are essential for alignment. That means:
      • 1) Lower level priorities should, at best, be aligned with higher level priorities (ex: “Department’s priority is Y, Team’s priority is Y”, or “Department’s priority is Y, team’s priority is X which contributes to Y”)
      • 2) Lower level priorities should, at least, not conflict with higher level priorities (ex: “Department’s priority is Y, but that involves mostly other teams, our team can’t really contribute, so we’ll instead prioritize X, and make sure we don’t take time from anyone working on Y”).
  • Avoid individual priorities. That tends to kill teamwork. Better to share priorities at a slightly higher level, such as a team or workstream or project.
  • Priorities change (of course!)
    • It’s useful to have recurring prioritization meetings to reevaluate priorities (every sprint for a team, every 6 weeks for a department, or whatever).
    • … but priorities can also change at any time in between!
    • Higher level priorities shouldn’t change as often, as they have ripple effects on lower level priorities and WIP; frequent changes cause confusion. The frequency of prioritization meetings should correspond roughly to how often we expect priorities to need to change.
  • Long-lived and short-lived priorities can be on the same list!
    • For example “our priorities are 1) customer support, 2) project X, 3) tech debt”. Project X might be short-lived and soon to be replaced by Project Y, while “customer support” might stay top priority for years!
  • When a priority list has more than one item, be clear about what this really means.
    • Ex: If our top priorities are “Project A and Project B”, what does it mean? Is A more important than B? Or are they equally important, but more important than other projects? Should we try to work exclusively on A if possible, or should we balance our time between both A and B?
    • No exact rules needed, just some guiding principles.

If used appropriately, cascading priorities and WIP visualization and WIP limits can really help your organization be focused and fast!

Categories: Blogs

The Secondary Indirective

Business Craftsmanship - Tobias Mayer - Thu, 09/25/2014 - 08:53

This is an alternative to the Prime Directive for opening a retrospective. It isn’t a directive, and it needn’t be primary, just something you may like to reflect on with your fellow workers.

We are emotional and vulnerable beings, subject to a continuous flow of influences from a myriad of sources. Sometimes we perform magnificently, other times we mess up. Mostly we are somewhere between these extremes. In this last period of work everyone did what they did, and likely had reasons for doing so. Accept what is. And now, what can we learn from our past actions and thinking that will inform and guide our future ones?

___

originally posted on AgileAnarchy in 2010

Categories: Blogs

Coping with a Fear of Inaccuracy

Agile Tools - Thu, 09/25/2014 - 07:53

“Even imperfect answers can improve decision making.” – Donald Reinertsen

When I read this in Reinertsen’s book on flow, I realized that I had found the reason that people have so much trouble with story points. It’s a matter of overcoming their fear of inaccuracy. They hold a misguided belief in the accuracy of using hours or days to estimate work on projects. They’re basically afraid of being wrong (aren’t we all?), and that is the source of a lot of resistance to change. Being wrong sucks. I get that. Nevertheless, I’m wrong a lot.

Fortunately, wrong isn’t always boolean (unless you happen to step in front of a swiftly moving bus). There are shades of wrong. You can be just a little wrong, your aim just a little off, and still be headed in the right direction. Or you can be a lot wrong (the bus). That’s where frequently re-examining your decisions can help you catch the stuff that’s a lot wrong and fix it. What about the stuff that’s a little wrong? Don’t sweat it.

In the software world, a little wrong is still pretty useful. There is a tremendous amount of error and missing information. Compared to that, being slightly wrong isn’t so bad. Being slightly wrong gets you started – at least in mostly the right direction. You’re going to fine tune it anyway, so there’s really no need for decision making precision. That will come later, when you know more.

To me, our fear of being wrong stems from an all-or-nothing mindset. We can’t allow ourselves to be even a little wrong for fear of failure. As Reinertsen rightly points out, there is a time and a place for precision in decision making, but it’s not ALL the time.


Filed under: Agile, Process Tagged: error, estimation, Planning, sizing
Categories: Blogs

Analyzing Objective-C: the World of OS X and iOS within your Grasp

Sonar - Thu, 09/25/2014 - 06:50

With version 3.0 of the C / C++ plugin in August 2014, support for the Objective-C language arrived.

Support for Objective-C in SonarQube was eagerly awaited by the community, and has been in our dreams and plans for more than a year. You might wonder – why did it take us so long? And why now, when Apple has announced Swift? Why as a part of the existing plugin? I’ll try to shed light on those questions.

A year ago, there were only two developers in SonarSource’s language team, Dinesh Bolkensteyn and me. We’re both heavy hitters, but with more than a dozen language plugins, we weren’t able to give most of them, including C / C++, as much time as we wanted. Also we had technological troubles with analysis of C / C++. As you may know, source code in C / C++ is hard to parse, because… well, it’s a long story, which deserves a separate blog entry, so just take my word for it, it’s hard. And we didn’t want to provide a quick-win solution by locking ourselves and our users in to third-party tools, which wouldn’t play well in the long-term for the same reasons that third-party tools were a problem in other languages.

Today all that has changed. There are now seven developers on the language team (and room for more), with two dedicated to C / C++. We’ve spent the year not only on the growth of the team, but also on massive improvements to the entire C / C++ technology stack, while preserving its ease of use. At the same time, we’ve delivered eight new releases, with valuable new rules in each release. Since March, we’ve released about once a month, and plan to keep it up.

With solid new technical foundations in place, we were able to dream again about new horizons. One of them was Objective-C. It’s a strict superset of C in terms of syntax, so the work we had done improving the plugin also prepared us to cover Objective-C. Of course, with the announcement of Swift, actually covering Objective-C may not make sense to some, but there’s a lot of Objective-C code already out there, and as history has shown, old programming languages never die.

That’s why we decided to extend the existing plugin to cover Objective-C, and rebrand the plugin “C / C++ / Objective-C”, which is exactly what you see in the SonarQube Update Center. Still, to better target the needs of different audiences, we decided to have two separate licences: one for C / C++ and one for Objective-C.

And of course this means that out of the box, you get more than 100 Objective-C rules starting from the first version, as well as a build-wrapper to simplify analysis configuration. However, during implementation we also realized how different Objective-C is from C, and for that reason we plan to add new rules specifically targeting Objective-C in upcoming releases.

So don’t wait any longer – put your software through quality analysis!

Categories: Open Source

Scrum Alliance, Scrum.org and Scrum Inc. Announce Collaboration

Learn more about our Scrum and Agile training sessions on WorldMindware.com

My heartfelt congratulations on this important and historic event!  Scrum is one, again!

From the official announcement issued by Scrum Alliance:

SCRUM ORGANIZATIONS ANNOUNCE OFFICIAL
COLLABORATIVE ADOPTION OF SCRUM GUIDE

Scrum Alliance, Scrum.org, and Scrum Inc. announce the release and joint endorsement of a new community website, ScrumGuides.org. The new website is the official source of “The Scrum Guide, The Definitive Guide to Scrum: The Rules of the Game.”

 

Dr. Jeff Sutherland and Ken Schwaber created Scrum and authored “The Scrum Guide” to ensure Scrum remains true to its core principles and values.

 

“The Scrum Guide is the canonical definition of Scrum. Ken and I have worked closely together for decades to keep it simple, clear, and, in the true spirit of Scrum, to include only what is absolutely necessary,” says Sutherland, CEO of Scrum Inc. “Scrum is a powerful tool to radically increase productivity. Every implementation of Scrum is different, as teams and organizations apply it within their context, but the fundamental framework always remains the same. For Scrum Alliance, Scrum.org, and Scrum Inc. to come together to recognize the central place the Scrum Guide holds will provide clarity to the hundreds of thousands of Scrum practitioners across the planet.”

 

The explosive growth of people and organizations using Scrum in recent years has led to some market confusion as to the precise definition of Scrum. The preeminent certifying bodies, Scrum Alliance and Scrum.org, coming together in support of a common definition of Scrum is a win for Scrum practitioners around the world.

 

“The pieces of Scrum are carefully fit to each other to yield the best possible results. This has taken years for Jeff and myself to achieve. Watch for new versions as we continue to refine,” said Ken Schwaber, founder of Scrum.org.

 

“It’s time for convergence in the Scrum community,” said Scrum.org’s operations chief David Starr. “Giving this clear explanation of Scrum clarifies the framework for the entire industry. We are pleased to support a shared and unambiguous source of truth defined by Scrum’s creators.”

 

Carol McEwan, Scrum Alliance Managing Director, said, “This makes the most sense for the Scrum community. The Scrum Guide is based on the principles on which Scrum was founded. It offers Scrum practitioners worldwide a common standard and understanding of the foundations of Scrum. This collaboration adds real value and can only benefit everyone practicing, or considering practicing, Scrum.”

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

More Agile Testing by Lisa Crispin and Janet Gregory available on October 10th, 2014

TestDriven.com - Thu, 09/25/2014 - 02:05
http://www.amazon.com/More-Agile-Testing-Addison-Wesley-Signature/dp/0321967054
Categories: Communities

Growing Agile: A Coach’s Guide to Agile Testing

TestDriven.com - Thu, 09/25/2014 - 00:03
https://leanpub.com/AgileTesting/read
Categories: Communities

The ART of the All-Hands Release Planning Meeting

Agile Product Owner - Wed, 09/24/2014 - 23:36

Hi Folks,

SAFe is powered by agile.

“The most efficient and effective method of conveying information to and within a development team is face-to-face conversation”.

Check out this blog post from Down Under by SPC Em Campbell-Pretty to “feel the power” behind face-to-face Release Planning with SAFe:

http://www.prettyagile.com/2014/09/SAFe-ART-PSI-release-planning.html

A cool little video vignette is included for no additional charge.

Thanks Em!

Categories: Blogs

Neo4j: LOAD CSV – Column is null

Mark Needham - Wed, 09/24/2014 - 22:21

One problem I’ve seen a few people have recently when using Neo4j’s LOAD CSV function is dealing with CSV files that have dodgy hidden characters at the beginning of the header line.

For example, consider an import of this CSV file:

$ cat ~/Downloads/dodgy.csv
userId,movieId
1,2

We might start by checking which columns it has:

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line;
+----------------------------------+
| line                             |
+----------------------------------+
| {userId -> "1", movieId -> "2"} |
+----------------------------------+
1 row

Looks good so far, but what if we try to return just ‘userId’?

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line.userId;
+-------------+
| line.userId |
+-------------+
| <null>      |
+-------------+
1 row

Hmmm it’s null…what about ‘movieId’?

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line.movieId;
+--------------+
| line.movieId |
+--------------+
| "2"          |
+--------------+
1 row

That works fine, so immediately we can suspect there are hidden characters at the beginning of the first line of the file.

The easiest way to check whether this is the case is to open the file in a hex editor – I quite like Hex Fiend for the Mac.

If we look at dodgy.csv we’ll see the following:

[Screenshot: hex editor view of dodgy.csv showing hidden bytes before the “userId” header]

Let’s delete the highlighted characters and try our cypher query again:

$ load csv with headers from "file:/Users/markneedham/Downloads/dodgy.csv" as line return line.userId;
+-------------+
| line.userId |
+-------------+
| "1"         |
+-------------+
1 row

All is well again, but something to keep in mind if you see a LOAD CSV near you behaving badly.
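If you’d rather check from a script than a hex editor, here is a minimal sketch (mine, not from the post) that assumes the hidden characters are a UTF-8 byte order mark, which is a common culprit; the file paths and the helper name are illustrative:

import codecs

def strip_bom(src_path, dest_path):
    # Read the raw bytes so nothing gets decoded or altered on the way in.
    with open(src_path, "rb") as src:
        data = src.read()
    if data.startswith(codecs.BOM_UTF8):
        # The first three bytes are EF BB BF; drop them and keep the rest.
        print("UTF-8 BOM found at the start of the file - removing it")
        data = data[len(codecs.BOM_UTF8):]
    else:
        print("No UTF-8 BOM found; inspect the first bytes in a hex editor")
    with open(dest_path, "wb") as dest:
        dest.write(data)

strip_bom("dodgy.csv", "clean.csv")

Running LOAD CSV against the cleaned copy should then return ‘userId’ as expected, assuming the BOM really was the problem.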

Categories: Blogs

Reframing to Reduce Risk

Leading Agile - Mike Cottmeyer - Wed, 09/24/2014 - 14:53

From what I have been able to decipher in my career, businesses exist to make money. The way they make money is by offering goods and services to people willing to pay for them. Each business has its own idea of the best way to deliver these goods and services, which it believes in some way sets it apart. Most businesses I have come across have settled on an approach that allows them to work on individual, separately managed projects, each resulting in an increment of business value being delivered. Something similar to this:

  • Organize around projects
  • Present the work that needs to be done in “clearly defined” requirements
  • Staff the project appropriately according to the initial understanding of the scope and demand of the project
  • Work hard on this project until it is completed and delivered to the market

In my experience, a project manager might identify some risks to delivering on time and on budget early in the project’s inception phase, and might review them at the end of the project, but not much attention is paid to those risks during the life of the project.

I understand that the following might not be a revolutionary, brand-new concept, but I wanted to explore the intricacies of this idea with you while I continue to work through it myself.

Staying with a “Known Loser”

By not reassessing risk throughout the life of the project, we miss out on valuable information that could help us determine whether the work can be completed on time (or at all, for that matter) before the project is actually released.

We will continue to sink money into the project until it is “finished” even if everyone understands that there is no way to recoup our money through the sales of the eventual end result. I believe a big reason for staying with a “known loser” is that we become emotionally invested in these projects and stop looking at them objectively.

What if we were to stop looking at projects as work that needs to be done, and reframe them into buckets of risk that need to be removed?

Shifting Focus

By shifting the focus to reducing risk, we can have a different conversation about the work we are evaluating. If we are performing a lot of work without reducing our risk, we are simply pushing out the inevitable delay of a project, thus causing us to spend even more money. However, if we focus on removing the risk associated with the funded work, we can determine whether it is still worth pursuing or whether the money earmarked for this project would be better spent somewhere else.

I have recently begun to realize that the project progress drivers I used to focus on and track are actually not very helpful to measuring the likelihood of a project being completed on time.

Buckets of Risk

I believe that reframing our projects as a reduction in our exposed risk will allow us to focus our conversations on how valuable this investment is, as opposed to how much money we might possibly make. If we continue this theory as we get more granular with the work, and identify specific “buckets” of risk that are threatening to derail our progress, we can more accurately predict whether we will be able to deliver our work on time. The buckets I have used to date are Technical, Business, Verification & Validation, Organization and Dependencies. Let’s look at some of the questions I’ve asked for each of these risk drivers, and then you can decide whether these drivers will help you track the progress of your project.

Technical:

  • Do we have the technology in house already?
  • Is this going to impact our current architecture?
  • Do we have the appropriate skill set for the needed technology?

Business:

  • Do we have clarity of scope?
  • Do we understand the target market?
  • Can we deliver on the intended business value?

Verification & Validation

  • Do we know how we are going to test this work?
  • Do we have sufficient information to be able to know when we should address this risk?

Organization

  • Are the teams that need to work on this risk stable and fully staffed?
  • Do we have the appropriate environments available to properly develop and test our work?

Dependency

  • Is there anything that must be done outside of this team in order to deliver this work?
  • If there is work required by others, do we know their lead times?

Answering these questions will provide us the information necessary to truly understand our progress and make key decisions around the work we are evaluating.

The post Reframing to Reduce Risk appeared first on LeadingAgile.

Categories: Blogs

Tackle your organization impediments with 59 minute sprints!

Scrum Breakfast - Wed, 09/24/2014 - 12:48
As a comment on "How much do you let a new team self organize?", IanJ asked:

Why are 59-Minute Scrums cool and what are they good for?

The 59-Minute Scrum has been a popular training exercise for years in the Scrum community. The team gets a problem to solve, and goes through a "simulated" sprint to solve the problem. You get to experience real Scrum in a safe environment.

Fifty-nine minute sprints are an excellent learning tool because so much happens in such a short time. Everything that goes wrong in a real sprint can happen in a 59-minute sprint.  They are great for problem solving!

In my classes, I usually simulate a 3-day sprint, using 5 minutes for each half of the Sprint Planning and the Sprint Review, 12 minutes for each day, and 4 minutes for each daily scrum. If you add it up, it comes to 59 minutes. Each Scrum Team has a Scrum Master, a Development Team and a Task Board. Depending on the context, there may be one Product Owner for all the teams in the room, or each team gets its own PO. Note the 59 minutes does not include the Retrospective, which I handle separately in my courses. I usually don't worry too much about the length of the Sprint Review either.

I have found that if a team does three of these over three days, by the end of the third iteration, they are really good at the basics of Scrum and ready to do it "in real life."

What is the difference between this being a Scrum simulation and a real-life "micro Sprint"? Well, not much, except for the importance of the results you are producing. If you plan to throw away the results, it's a simulation. If you plan to use them, it's a micro-sprint.

What would be an example of using micro-sprints? One area is to address organizational impediments. Imagine you have assembled in one room not just the Development Team, but also their management, important stakeholders, the operations group and other interested parties for the product. Ask them, "What are the biggest impediments to success of our project?" Have them write them on stickies and post them on the wall. Like in a retrospective, have their owners present them. The others can ask clarifying questions.

Finally, you do some sort of dot-voting to identify the top issues. Exactly how you do this depends on many things, most importantly the number of people in the room. One way is to have each table create a short list of two or three items, merge the lists to get about 10 to 15 items, then have everyone dot-vote this short list to prioritize the problems.

Now you are ready to use Scrum 59's to tackle your challenges. What could you do to solve the issues identified? Form teams around the top priority cards, usually one team per card and one team per table (did I mention island seating, with about 6 to 8 people per group?). Whoever wrote the top priority cards becomes Product Owner for those cards.

Iterate once or twice to come up with possible solutions for those impediments.

I have seen awesome results with this approach! Usually the people who are most surprised are the managers in the room, because they have never seen their staff working so creatively and with such energy. And the teams come up with good ideas that the managers never would have thought of. Oh, and they get really good at the mechanics of Scrum and ready to take on the challenges of their project using Scrum to help them.

Prepare to be amazed! And don't forget you can start implementing those solutions right away!

Categories: Blogs

Spotify Engineering Culture (part 2)

Henrik Kniberg's blog - Wed, 09/24/2014 - 11:59

Here’s part 2 of the short animated video describing Spotify’s engineering culture (also posted on Spotify’s blog). Check out part 1 first if you haven’t already seen it!

This is a journey in progress, not a journey completed, so the video is somewhere between “How Things Are Today” and “How We Want Things To Be”.

Here’s the whole drawing:
[Drawing: Spotify Engineering Culture, part 2]

(Tools used: Art Rage, Wacom Intuos 5 drawing tablet, and ScreenFlow)

Categories: Blogs

On the road to high performance teams

Scrum Breakfast - Wed, 09/24/2014 - 11:49
I have been thinking about continuing education for Scrum Masters.

The objective of a Scrum Master is to create a high performance team, which is in turn part of a high performance organization. So both team and Scrum Master must develop their skills moving forward. Just facilitating the Scrum meetings won't get you there.

The Scrum Alliance has defined the Certified Scrum Professional (CSP) program. This is the journeyman-level Scrum certification (think Apprentice -- Journeyman -- Master ). This certification is not achieved by passing a test, but rather by demonstrating a commitment to Scrum by doing Scrum and learning about Scrum.

How do you get the continuing education needed to achieve journeyman status? My answer is the Scrum Breakfast Club. The Scrum Breakfast Club is an inexpensive, recurring open-space format for solving problems related to Scrum and Scrum Projects (and learning advanced Scrum as you do). You bring your problems and find solutions, with me and with other people who face similar challenges. I also provide an opportunity for one-on-one coaching during this time.

Each Scrum Breakfast Club workshop earns you four Scrum Education Units. (If you are familiar with the PMI, these are like PDUs, and can also be used as PDUs).

From a career point of view, if you take a CSM Certified Scrum Master course, follow it up with a CSPO Certified Scrum Product Owner course 6 months later, and participate regularly in the Breakfast Club, after 2 years, you will have accumulated enough Scrum Education Units to qualify as a Certified Scrum Professional. And you have had plenty of opportunities to address the actual issues in your organization.

Here is a description of the Scrum Breakfast Club. How does this fit into your plans?
Categories: Blogs

How Gilt Scales Agile

Scrum Expert - Wed, 09/24/2014 - 09:16
As Agile project management is being widely adopted, the question of whether and how it can scale is a main topic of discussion. In this blog post, Gilt explains how it scales Agile with teams, ingredients, initiatives and KPIs. The basis of the Gilt process is the initiative, which is a project that should be started in the coming 3 to 12 months. This is the “portfolio” vision for the company, but no roadmap is maintained. Gilt uses key performance indicators (KPIs) to measure initiatives. Gilt is organized in teams ...
Categories: Communities

How to Slow Down Your Team (and Deliver Faster)

Agile Tools - Wed, 09/24/2014 - 08:39

Is your team in need of a little improvement? Are they getting a little stale? Are you looking for a way to bring their performance “to the next level?” Well, maybe you should slow it down.

Oh I know, those other consultants will tell you that they can speed up your team. It’s the siren song of the wishful manager, “Speed my team up: faster!” But I’m here to tell you they’ve got it all wrong.

Let me ask you this:

When you get a flat tire in your car, do you speed up? No! If there is a burning smell coming from the oven, do you heat it up? No! So if you see your teams start to slow down, why on earth would you try to make them go faster?

Let’s face it, when teams slow down there is usually a damn good reason. So rather than speeding up, perhaps it’s time to slow down, pull over, and take a look under the hood.

How do you slow down? Nobody really teaches that. Everybody is so focused on speeding up they seem to have forgotten how to slow down. Here are my top 10 ways to slow down your team (and hopefully address your problem):

  1. Apply a WIP Limit
  2. One-piece flow
  3. A more rigorous definition of done
  4. Pair programming
  5. Promiscuous programming
  6. Continuous Integration
  7. Continuous Delivery
  8. Acceptance Test Driven Development
  9. Spend more time on impediments
  10. Hack the org

Taking time to implement any one of these things is almost guaranteed to slow you down. That’s a good thing, because your team probably needs to pull over to the side of the metaphorical road and repair a few things.


Filed under: Agile Tagged: improvement, slowing down
Categories: Blogs

October Agile Ottawa Event – Estimation

Agile Ottawa - Wed, 09/24/2014 - 02:17
October meetup has been posted for Agile Ottawa… Mark Levison (@mlevison) of Agile Pain Relief will be our main presenter and is tackling the tricky topic of “Estimation”. Agile 101 session will come to us thanks to Bill Bourne (@abbourne) … Continue reading →
Categories: Communities

Agile Quick Links #23

Notes from a Tool User - Mark Levison - Tue, 09/23/2014 - 17:04

Some interesting reading for the Agile community:

Categories: Blogs

The value proposition of Hypermedia

Jimmy Bogard - Tue, 09/23/2014 - 16:13

REST is a well-defined architectural style, and despite many misuses of the term towards general Web APIs, can be a very powerful tool. One of the constraints of a REST architecture is HATEOAS, which describes the use of Hypermedia as a means of navigating resources and manipulating state.

It’s not a particularly difficult concept to understand, but it’s quite a bit more difficult to choose and implement a hypermedia strategy. The obvious example of hypermedia is HTML, but even it has its limitations.

But first, when is REST, and in particular, hypermedia important?

For the vast majority of Web APIs, hypermedia is not only inappropriate, but complete overkill. Hypermedia, as part of a self-descriptive message, includes descriptions of:

  • Who I am
  • What you can do with me
  • How you can manipulate my state
  • What resources are related to me
  • How those resources are related to me
  • How to get to resources related to me

In a typical web application, the client (HTML + JavaScript + CSS) is developed and deployed at the same time as the server (HTTP endpoints). Because of this acceptable coupling, the client can “know” all the ways to navigate relationships, manipulate state and so on. There’s no downside to this coupling, since the entire app is built and deployed together, and the same application that serves the HTTP endpoints also serves up the client.


For clients whose logic and behavior are served by the same endpoint as the original server, there’s little to no value in hypermedia. In fact, it adds a lot of work, both in the server API, where your messages now need to be self-descriptive, and in the client, where you need to build behavior around interpreting self-descriptive messages.

Disjointed client/server deployments

Where hypermedia really shines is in cases where clients and servers are developed and deployed separately. If client releases aren’t in line with server releases, we need to decouple our communication. One option is to simply build a well-defined protocol, and not break it.

That works well in cases where you can define your API very well, and commit to not breaking future clients. This is the approach the Azure Web API takes. It also works well when your API is not meant to be immediately consumed by human interaction – machines are rather lousy at understanding and following links, relations and so on. Search crawlers can click links well, but when it comes to manipulating state through forms, they don’t work so well (or work too well, and we build CAPTCHAs).

No, hypermedia shines in cases where the API is built for immediate human interaction, and clients are built and served completely decoupled from the server. A couple of cases could be:


Deployment to an app store can take days to weeks, and even then you’re not guaranteed to have all your clients at the same app version.


Or perhaps it’s the actual API server that’s deployed to your customers, and you consume their APIs at different versions.


These are the cases where hypermedia shines. But to do so, you need to build generic components on the client app to interpret self-describing messages. Consider Collection+JSON:

{ "collection" :
  {
    "version" : "1.0",
    "href" : "http://example.org/friends/",
    
    "links" : [
      {"rel" : "feed", "href" : "http://example.org/friends/rss"},
      {"rel" : "queries", "href" : "http://example.org/friends/?queries"},
      {"rel" : "template", "href" : "http://example.org/friends/?template"}
    ],
    
    "items" : [
      {
        "href" : "http://example.org/friends/jdoe",
        "data" : [
          {"name" : "full-name", "value" : "J. Doe", "prompt" : "Full Name"},
          {"name" : "email", "value" : "jdoe@example.org", "prompt" : "Email"}
        ],
        "links" : [
          {"rel" : "blog", "href" : "http://examples.org/blogs/jdoe", "prompt" : "Blog"},
          {"rel" : "avatar", "href" : "http://examples.org/images/jdoe", "prompt" : "Avatar", "render" : "image"}
        ]
      }
    ]
  } 
}

Interpreting this, I can build a list of links for this item, and build the text output and labels. Want to change the label shown to the end user? Just change the “prompt” value, and your text label is changed. Want to support internationalization? Easy, just handle this on the server side. Want to provide additional links? Just add new links in the “links” array, and your client can automatically build them out.
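To make that concrete, here is a rough sketch (mine, not from the post) of a generic client component that walks a Collection+JSON payload like the one above and builds its labels and links purely from the self-descriptive message; the function and variable names are illustrative:

import json

def render_collection(payload: str) -> None:
    """Generic renderer: relies only on the message, not on API-specific knowledge."""
    collection = json.loads(payload)["collection"]
    for item in collection.get("items", []):
        print("Item:", item["href"])
        # Field labels come from the server-supplied "prompt", so renaming or
        # internationalizing a field requires no client change.
        for field in item.get("data", []):
            print(" ", field.get("prompt", field["name"]), "=", field["value"])
        # Any link the server adds to the "links" array shows up automatically.
        for link in item.get("links", []):
            print("  link:", link.get("prompt", link["rel"]), "->", link["href"])

Fed the example document above, a client built this way picks up a new field or link the moment the server starts sending it, which is the same property the master-detail client described below relies on.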

In one recent application, we built a client API that automatically followed first-level item collection links and displayed the results as a “master-detail” view. A newer version of the API that added a new child collection didn’t require any change to the client – the new table automatically showed up because we made the generic client controls hypermedia-aware.

This did require an investment in our clients, but it was a small price to pay to allow clients to react to the server API, instead of having their implementation coupled to an understanding of the API that could be out-of-date, or just wrong.

The rich hypermedia formats are quite numerous now:

The real challenge is building clients that can interpret these formats. In my experience, we don’t really need a generic solution for interaction, but rather individual components (links, forms, etc.). The client still needs to have some “understanding” of the server, but that understanding can come in the form of metadata rather than hard-coded knowledge of raw JSON.

Ultimately, hypermedia matters, but in far fewer places than are today incorrectly labeled with a “RESTful API”. It is not entirely vaporware or astronaut architecture, either. It’s somewhere in the middle, and like many nascent architectures (SOA, Microservices, Reactive), it will take a few iterations to nail down the appropriate scenarios, patterns and practices.


Categories: Blogs
