
Using “Status” in Agile Coaching & Training


Recently, after attending a Scrum Alliance webinar on “Best Practices in Coaching,” I was reminded of my experiences teaching Acting students at university, and how I used changing status to help them achieve their best.

Status refers to the position or rank of someone within a particular group or community. I believe it was Canadian Keith Johnstone who introduced the idea of “playing status” to theatre improv teams. It is used to create relationships between characters onstage, and to change those relationships to move a story forward.

Status can be indicated through position, posture, facial expression, voice and clothing. It is a fascinating tool for any trainer or coach to use.

At the beginning of a semester with new students, I would invite them to sit on the stage floor in a circle with me. I would welcome them, discuss my expectations of their learning, and tell them what they could expect from me. We’d go over the course syllabus and I’d answer questions. I purposefully put myself in an equal status to them, as a way of earning their trust, because the process of acting* requires huge amounts of trust. I also wanted to establish a degree of respect in them for the stage by all of us being in a “humble” position on the stage floor.

However, when I would introduce a new exercise to them that required them to go beyond their comfort zones, I would deliver instructions from a standing position while they were seated. By elevating my status, I conveyed the importance of the exercise, and it was a signal that it was not something they could opt out of. In this way, I could help them to exercise their creativity to a greater extent.

Another way I encouraged my students to take risks was to take risks myself. Sometimes I would illustrate an acting exercise by doing it myself first. For those few minutes I became a colleague with my students, one of them, equal in status. If I could “make a fool of myself” (which is how it may look to an outsider), then they could too.

I had one student who had great potential, but who took on the role of class clown and would not give it up. He fought against going deeper and getting real. One day in an exercise where they had to “own” a line of dialogue, I had him in a chair onstage, while I and the rest of the students were seated. He had to repeat the line of text until it resonated with him and became real. After some minutes, nothing was changing in him. In desperation, I had him turn his chair around so his back was to us. I then indicated to the other students to quietly leave the room. He could hear something happening but was confused about it. He was not able to turn around and look.

When I allowed him to turn around it was only him and me left in the theatre. I had him go through the repetition exercise again. Without an audience, and with me still seated, he finally broke through the wall he had erected and connected with the line of text from his inner self. It was a wonderful moment of truth and vulnerability. I then allowed the other students back in, and had him find that connection again with the students there. He was able to do it.

He is grateful to me to this day for helping him get beyond his comfortable role as clown to become a serious actor.

When training or coaching, it seems to me there can be huge value in playing with status. Sometimes taking a lower status, an equal status, or a higher status, can move a team or upper management into discovering whatever may have been blocking the process. Again, there are many ways to indicate status and even a status change to effect progress.

In his book, “Improv-ing Agile Teams,” Paul Goddard makes some important observations about using status. He writes: “Even though status is far less obvious than what is portrayed on stage, individuals still can take small steps to encourage status changes within their own team. For example, asking a team member who exhibits lower status behaviours to take ownership of a meeting or oversee a process not only boosts that person’s confidence but also increases status among peers…these subtle actions can help make lower-status team members feel more comfortable when expressing new ideas or exposing hidden problems.”

A colleague reminded me of a 1975 publication called “Power: How to Get It, How to Use It,” in which author Michael Korda gives advice about facial expression, stance, clothing and innumerable ways to express “power.” The idea of using status in the context I’m writing about is not about gaining power, but about finding ways through one’s own status changes to help unlock the capacity and potential of others.

How can a coach use status to help someone in management who is blocking change? Is someone on a team not accepting what others have to offer because s/he is keeping his/her status high? Is a Scrum Master necessarily a high-status team member, or rather a servant to the team (low status)?

I am curious if any coaches or trainers out there have used status in a way that created growth and change.

*Good acting is a matter of the actor finding the truth in oneself as it relates to the character he or she is playing. It requires vulnerability and courage to step out of one’s known persona and take on another as truthfully as possible. Inherent truthfulness also applies to work in any other endeavour.

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post Using “Status” in Agile Coaching & Training appeared first on Agile Advice.

Categories: Blogs

Change Artist Super Powers: Curiosity

Esther Derby - Mon, 01/09/2017 - 18:08

In my work, I draw on models, frameworks, and years of experience. Yet, one of my most valuable tools is a simple one: Curiosity.

In an early meeting with a client, a senior manager expressed his frustration that development teams weren’t meeting his schedule. “Those teams made a commitment, but didn’t deliver! Why aren’t those people accountable?” he asked, with more than a hint of blame in his voice. As I spent more time in the organization, I heard other managers express similar wonderment (and blame).

I also noticed that whenever someone asked, “Why aren’t those people accountable?” (or some other blaming question), problem-solving ceased.

I know these managers wanted to deliver software to their customers as promised. But, their blaming questions prevented them from making headway in figuring out why they were unable to do so.

I started asking different questions: curious questions.

  • Who makes commitments to the customers, and on what basis? How do customer commitments, team commitments, and team capacity relate to each other?
  • When “those teams” make commitments, is it really the people who will do the work committing, or someone else?
  • What does “commitment” really mean here? Do all parties understand and use the term the same way? 
  • What hinders people from achieving what the managers desire?  Do teams have the means to do their work?
  • What is at stake, for which groups of people, regarding delivery of this product? 
  • What is it like to be a developer in this organization? 
  • What is it like to be a manager in this organization?
  • What is it like to be a customer of this organization?

I worked with others in the client organization to learn about these (and other) factors. We developed and tested hypotheses, engaged in conversations, made experiments, and shifted the pattern of results. 

For the most part, managers no longer ask blaming questions. They ask whether teams have the data to make decisions about how much work to pull into a sprint. They examine what they themselves say and do to reduce confusing and mixed messages. They review data, and adjust their plans.

Curiosity uncovered contradictions, hurdles, confusion, and misunderstandings. All of which we could work on to improve the situation.

So, there you have it. Curiosity is my number one Change Artist Super Power, and it can be yours, too.

© Esther for esther derby associates, inc., 2017.

Categories: Blogs

A Sneak Peek at Docker Recipes for Node.js Development

Derick Bailey - new ThoughtStream - Mon, 01/09/2017 - 14:30

On Monday, January 16th, the pre-sale for my Docker Recipes for Node.js Development ebook opens up. As I said in the last post, I need to sell 100 copies in the pre-order period, to ensure the book moves forward.


Before the pre-sale starts, though, I wanted to give you a sneak peek at the first bits of content that I’ll have ready.

Writing That First Recipe Was Difficult

I mentioned previously that the first bits of content would likely be around debugging, and that has held true in the content I’ve worked on, so far. But things didn’t quite work out the way I had expected. 

When I sat down to write the first recipe, I had intended to write a small bit on how to use the built-in Node.js command-line debugger within a container. But after doing that and asking a few friends for some feedback, I realized that I had not shown enough to get a sense of what the book would be like.

So I decided to write a few additional recipes to get a better sense of the flow and layout, and things started changing pretty quickly. 

I’m still not completely happy with the writing at this point, but I think the recipe structures are starting to solidify, and I want to give you a peek into what that content will look like.

A Preview of Debugging In A Container

The content around debugging will likely be its own “Part ##” in the book, since I already have 3 recipes basically written as rough drafts and may add one or two more.

The opening for that part of the book is already roughly outlined.


Within the recipes (chapters), there will be a short scenario description to help you understand when the recipe in question would be best suited.


There will be recipe listings, of course, which are meant to be copy-and-paste chunks of code and configuration, to solve a specific problem.


And each recipe will come with cooking instructions, to provide additional description and detail on how to use the code and configuration found in the recipe listings.


Depending on the specific recipe, there will also be some additional detail about specific commands, or notes on items related to the recipe in question. I’m trying to keep the book as short as possible, while still providing enough information to be valuable.

This won’t be an introduction to Docker, but a collection of solutions for someone who is familiar with Docker yet not comfortable using all their favorite development tools and techniques within it.

To Be Edited … Heavily

I do have a fair number of pages written already, but I don’t expect the content and structure that I’ve shown to be the final form of the book. Remember, the goal of the pre-sale is to get feedback, input and ideas from early readers. That’s where you come in.

The pre-sale starts on January 16th, and ends on the 31st.

If you buy the ebook in that period, you’ll have the opportunity to provide direct feedback on how to best move forward with the content and structure. You’ll receive updates to the book as they happen. And you’ll get much more content than just the ebook (some screencasts, cheatsheets, etc) at a significantly reduced price.

Stay tuned in to the pre-sale and how it’s going, by joining my mailing list (below). And be ready for the pre-sale launch – it starts in only a few days!

The post A Sneak Peek at Docker Recipes for Node.js Development appeared first on DerickBailey.com.

Categories: Blogs

The Story of Tesla, by Elon Musk

Scrum Breakfast - Mon, 01/09/2017 - 10:00
They didn't know it at the time, but they created the first Tesla Roadster by taking a working prototype and iterating on the design. By the time the Roadster was announced, they had replaced 96% of the original prototype. "It's amazing what we can do with small teams and tiny budgets." BTW this is part one; you'll want to stay for most of part two. Another video I had to watch to the end!




Categories: Blogs

Playing Whack-A-Mole With Risk

Tyner Blain - Scott Sehlhorst - Mon, 01/09/2017 - 05:44


Assumptions are interesting things – we all make them all the time, and we rarely acknowledge that we’re doing it.  When it comes to developing a product strategy – or even making decisions about how best to create a product, one of these assumptions is likely to be what causes us to fail.  We can, however, reduce the chance of that happening.

Being Wrong

What does it feel like to be wrong?  Watch about 25 seconds of this TED talk from Kathryn Schulz, starting at 4:09.

Go back later and watch her entire talk – it is really worth it.  But stay with me for now.  All you need for this article is the 25 seconds, and the realization that you don’t know you are wrong until you know you’re wrong.

Hidden in Plain Sight

Assumptions are like being wrong.  But with an added degree of difficulty.  Not only do you not know you’re wrong – but you didn’t realize you were incorrectly asserting something, and then betting on it to be right.

Every strategy, every product idea, every design approach, and every planned implementation is built upon a pile of assumptions.  Those assumptions are there, if you just look at them.  But you have to look for them in order to see them.  They are hidden in plain sight.

The only question is whether they are going to cause you any trouble.  You might not be wrong in the assumptions that really matter.

Wouldn’t it be nice to know when you are wrong?  Before it’s too late?  Before it’s really expensive?  Before your window of opportunity closes?

Identifying Risky Assumptions

Laura Klein spoke at the Lean Startup Conference about identifying risky assumptions, and her talk was published in December 2014.  Laura is also rapidly becoming one of my favorite gurus.  I just wish I’d become aware of her work sooner.

Laura identifies that every product has at least three different classes of assumptions.

  1. Problem Assumptions – we assume there is a market-viable problem worth solving.
  2. Solution Assumptions – we assume our approach to addressing the problem is the right one.
  3. Implementation Assumptions – we assume we can execute to make our solution a reality, such that it solves the problem and we succeed.

Hold onto this thought – I need to segue and dust off a tool I found five years ago, and some work I’ve done with clients over the last couple of years.  We’ll look at how to incorporate some of those ideas with the ones Laura shared.  And eventually, the whack-a-mole reference will make sense.

Hypotheses and Assumptions

With a client last year, I ran a workshop to elicit assumptions on our project.  We were working to develop what Harry Max calls the theory of our product.  Basically, we were working to develop the vision, the value propositions (for a two-sided market problem), the business model that would enable a credible market entry strategy given the company’s current situation, and a viable solution approach.  Essentially, product strategy and product ideation.

My assertion in that workshop was that assumptions and hypotheses, practically speaking, are risks.

Assumptions are implicit risks. Hypotheses are explicit risks.

Product strategy and product design are a formulated plan of action, built upon a set of beliefs – assumptions and hypotheses.  The risk is that those beliefs are wrong.  And we don’t realize it.  Materially, the only difference between an assumption and a hypothesis is that the assumption is something no one has said out loud.  It represents an implicit risk.  Once you acknowledge the assumption, you can then treat it explicitly – and explicitly decide to do something about it or not.

In the workshop I prompted the participants (senior executives, domain experts, product stakeholders and team members) to identify their assumptions and hypotheses.  I started by presenting several hypotheses and assumptions that had been part of conversations prior to the workshop.  This helped elicit ideas from the group, but it wasn’t really enough.  What did get things moving was some prompts from Harry, such as the suggestion to complete the sentence “It will never work because…” or “The only way it will work is if…”  We were able to elicit and then organize (affinity mapping) the inputs into a collection of testable hypotheses.

What To Do With a Pile of Hypotheses?

Now, armed with a list of hypotheses, and limited time and resources to go test them all, we were faced with the challenge of determining which risk to address first.  Remember – hypotheses and assumptions are risks.  Risks of being wrong (and not knowing it).  Risks of product failure.

I’ve historically used potential impact and likelihood of happening to manage risks.  I first learned to assign a score from 1 to 3 for the likelihood of the risky thing happening, and a score from 1 to 3 for how bad it would be if it did happen.  Multiply the two together, and you get a score from 1 to 9 (1, 2, 3, 4, 6, 9).  I learned this from PMO-trained people in the late 1990s.  Maybe their thinking has evolved since then.  There are two problems with creating a score like this:
  1. Likelihood of occurrence and potential impact are treated as equally important factors.  An unlikely but major impact risk would be “as important” as a likely risk with minimal impact.  Each particular approach to risk management will value these differently.
  2. Combining the two pieces of information into a single number discards useful information.  If I tell you one risk is a “3” and the other is a “4”, you cannot know which risk is more important to you.  The “4” is something that reasonably could happen, and would be “bad.”  Would that be more important than an unlikely, but company-ending risk?  Would it be more important than a very likely annoyance – one which may cause death by a thousand cuts for your company as large volumes of support costs absorb profits?
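As a minimal illustration of that second point, here is a sketch in Python with made-up risks and 1-to-3 scores. Multiplying the two estimates collapses very different risks into the same number, while keeping the dimensions separate preserves the distinction:

```python
# Hypothetical risks scored 1-3 on each dimension (illustrative data only).
risks = [
    {"name": "unlikely but company-ending", "likelihood": 1, "impact": 3},
    {"name": "likely minor annoyance",      "likelihood": 3, "impact": 1},
    {"name": "plausible and quite bad",     "likelihood": 2, "impact": 2},
]

# Multiplying collapses the two dimensions: the first two risks both
# score 3, even though most teams would treat them very differently.
for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]
    print(f'{risk["name"]}: score = {risk["score"]}')

# Keeping the dimensions separate lets you rank risks without losing either
# one -- here sorting by impact, with likelihood breaking ties.
prioritized = sorted(risks, key=lambda r: (r["impact"], r["likelihood"]), reverse=True)
print([r["name"] for r in prioritized])
```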
That’s why I’ve treated this as a two-dimensional space – visualizing a graph of likelihood vs impact.  Laura proposed my now-favorite labels for this graph, relabeling my vertical axis.  I’m shamelessly stealing this from Laura.  It seemed fitting as Laura credits part of her presentation to Janice Fraser.  Maybe one of the ideas I’m adding to the mix will be stolen by the next person to add to our blog-post conga line.

As a team, you can reach consensus around the relative placement of all of the risks.  We then began tracking against our top 10.

As Laura would say – you start with the “uppiest and rightiest.”  What you are doing is asking the question – what risk is most likely to kill your product, damage your stock price, get your CEO fired, etc.?

There’s another dimension which makes treating risks this way difficult – uncertainty.  You don’t actually know that this risky thing is likely to happen.  You’re incept-assuming as you make assumptions about your assumptions.  The easiest way to think about this is to acknowledge that your impact and likelihood “measurements” are not measurements – they are estimates.  They may be calibrated estimates, a la Hubbard’s How to Measure Anything, or they may be guesses based on which way the wind is blowing.  Treat them as estimates, and then plot them either as your “most likely” or your “worst case” point of view – that’s a stylistic call, I think.

Removing Risks


The reason you test a hypothesis is to reduce a risk.  I think Laura used the phrase “to de-risk” the risk.

To de-risk the risk, the first thing you need to do is remove the uncertainty you have about how bad things could really possibly be.  You need to run an experiment.  In the example above, you would prefer to test hypothesis 7 first if you can – it is the uppiest and rightiest.  You would not be far wrong if you tested 4 or 8 first (assuming it is easier, faster, or cheaper to test one of those).  If you were to first test anything other than 4, 8, or 7, you really should have a good reason.

Once you run your experiment and determine that the risk is not a risk, go back and address the next-most-important risk.  This is a game of whack-a-mole.  You will never run out of testable risks.  You will only eventually reach a point where the economic value of delaying your product to keep testing risks no longer makes sense.  (A toy sketch of this loop follows the list below.)

Note that an experiment could result in multiple outcomes and next steps.  Here are a couple:
  • This risk is not as impactful as we thought, we won’t address it with product changes, we will absorb those costs into our profitability model and revisit pricing to assure the business case still holds up.
  • This risk is every bit as likely as we were afraid.  Let’s determine a problem restatement (or solution design approach) where this risk no longer has the impact or likelihood it did before.  As an example – a risk of users not adopting a product with an inelegant experience may justify rethinking the approach and investing to improve the user experience.
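Here is the whack-a-mole loop itself as a toy Python sketch. The function names, the economic check, and the risk representation are illustrative assumptions, not anything prescribed by Laura’s talk or my workshops:

```python
def run_experiment(risk):
    # Placeholder for a real experiment. Return None if the experiment
    # shows this is not actually a risk, or return the risk with revised
    # impact/likelihood estimates if it survives.
    return None

def whack_moles(risks, worth_testing):
    """Repeatedly test the riskiest remaining hypothesis until testing
    no longer pays for the delay it causes."""
    while risks:
        # Find the most dangerous risk: impact first, likelihood as tiebreaker.
        risks.sort(key=lambda r: (r["impact"], r["likelihood"]), reverse=True)
        if not worth_testing(risks[0]):
            break  # remaining risks are not worth delaying the product for
        top = risks.pop(0)
        revised = run_experiment(top)
        if revised is not None:
            risks.append(revised)  # still a risk: re-score it and keep playing
    return risks
```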

Trying to tackle all the ways you can respond to risks (and de-risked risks) would make this overly long article ridiculously long.

Validation Board


In 2012 I came across the hypothesis board from leanstartupmachine.com.  At the time, it was free for use by consultants :)  I don’t believe it has gained widespread adoption.  At least people look at me funny when I mention it.  Maybe now, more people will know about it.

I personally never used it because something felt not-quite-helpful enough for me, for the problems I was helping my clients to solve.  I could never figure out why, however.  The board has many of the important components.  In hindsight, this is an indicator that the validation board is likely solving a problem I don’t have (as opposed to being a bad solution to a problem I do have).

The validation board is structured more for early-startup customer-discovery work – with three categories of hypotheses to track – customer, problem, and solution:
  • How big is the potential market?
  • How valuable is the problem we would solve?
  • Are we able to solve the problem for these people?
The tool was positioned as something to help you pivot as you discover that you have the wrong customers, or problems, or solutions.  What I need is to know which hypothesis to test next.  I think that may be best done with a simple graph like the ones Laura and I use – but with her labels.

Whack Some Moles

Instead of debating about implementation details, consider assessing the risks to your product.  Determine if those risks warrant making an investment to reduce them.  Form a measurable hypothesis and validate it.  Then go after the next risk.  Until the remaining risks are no longer big enough for you to pursue.
Categories: Blogs

Invitation-based SAFe implementation

Agile Product Owner - Sun, 01/08/2017 - 17:54

Howdy folks:

Yuval Yeret, CTO of AgileSparks, is an SPC and an SPCT candidate.  He is a prolific blogger on the topics of Agile, Lean, Kanban, SAFe and more. Yuval has over 17 years of industry experience and always has an interesting viewpoint and pragmatic advice. AgileSparks is a Scaled Agile gold partner.

Yuval has written a novel guidance article on how an invitation-based approach to implementing SAFe can create a more collaborative organizational change effort. The article describes ways to invite leaders and team members to understand SAFe, while decentralizing the timing and details of the change.  I found the following ideas to be particularly innovative and useful:

  • The implementation workshop
  • Invitation-based ART launch
  • Self-selection of teams within an ART

Please let us know what you think about this approach, and feel free to share any similar ideas and techniques that you have successfully implemented, in the comments section below.

Thanks Yuval for sharing this approach with us. Please click here to read the article.

The team at Scaled Agile would also like to wish you a Happy and SAFe New Year!

—Richard and the framework team
SAFe Fellow and Principal Consultant
@richardknaster

Categories: Blogs

Clean Disruption

Scrum Breakfast - Sun, 01/08/2017 - 15:13
Why Energy & Transportation will be Obsolete by 2030 by Tony Seba. The horse was displaced by the automobile in just 13 years. Oil, Cars and the Power Grid are about to be transformed in a similar way. What other technologies will be displaced faster than you think, and why?


I don't usually have the patience to watch a 45-minute video, but I had to watch this one to the end!
Categories: Blogs

The Simple Leader: Continue to Learn

Evolving Excellence - Sun, 01/08/2017 - 11:16

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen


Learning is not compulsory…neither is survival.
– W. Edwards Deming

Many, if not most, people go to school and college, and then, when they are finished, rarely open another book (at least one with big words in it). They may continue to grow their skills and knowledge through experience, but this is the slow boat to improvement.

Over the years, I’ve found that the primary predictor of executive leadership competency is the desire to seek, learn, analyze, distill, and share new knowledge. It doesn’t necessarily have to be within the leader’s current field or competency, nor does it have to be strictly via reading books. There are multiple pathways to new knowledge, including online courses, magazines, and workshops.

Gaining new knowledge can also mean gaining new perspectives. As I discussed earlier, in a world of multiple sources of information, it is very easy to succumb to confirmation bias and only embrace information that fits our existing perspective. In reality, there is almost always some truth in every perspective. Challenge yourself to mindfully look at other perspectives on political, scientific, or social issues in an unbiased manner. You may not change your mind, but you will grow and your positions will be more authentic.

I try to read one fiction and one non-fiction book each month, which is sometimes difficult with my schedule. The non-fiction books, generally business-related, challenge me intellectually. The fiction, often science fiction or action thrillers, challenges my imagination. Each morning, I read The Wall Street Journal on my iPad, forwarding articles to friends and family that I find interesting. I purposely try to read articles from different political sources instead of only the ones that agree with my perspectives. I try to continually evaluate my perspectives, think about where bias is setting in, and develop countermeasures to overcome the bias.

Think about your own pursuit of knowledge. What have you learned recently? What do you want or need to learn this year? How will you do it? What will you do with the new knowledge? How does it fit in with your new self-awareness? How will you encourage and provide opportunities for your team to learn?

Categories: Blogs

Case Study Update: LEGO finds the sweet spot

Agile Product Owner - Fri, 01/06/2017 - 20:30

“… this has improved the motivation of the team members. Going to work is more fun when there’s less confusion and less waste. And motivated people do better work, so it’s a positive cycle!

Another impact we’ve seen is that other parts of LEGO visit the meeting, get super inspired, and start exploring how to implement some of these principles and practices in their own department. In fact, agile is spreading like a virus within the company, and the highly visible nature of the PI planning event is like a catalyst.”

About a year ago, the folks from LEGO® shared experiences from the first leg of their SAFe journey. What captured our attention was their innate understanding, right from the start, that every step of the implementation was going to involve discovery and learning and adapting. When something didn’t seem like a good fit, they weren’t afraid to experiment. Taking results from Inspect and Adapt, they tweaked SAFe to their needs with a simple guiding principle, “Keep the stuff that generates energy.”

One year later, Henrik Kniberg and Eik Thyrsted are back with the next chapter of their story. Their 20-team working group, LEGO Digital Solutions, is at the forefront of LEGO’s movement toward adapting to the faster-paced digital world, so the need to get it right is critical as it ultimately impacts the entire 17,000-person organization.

Their nipping and tucking of SAFe for optimal results runs the gamut from large edits to small tweaks. For instance, to keep energy and engagement up, they cut PI Planning from two days to one, and now limit the presentation of the draft plans to four teams doing 7.5 minute presentations. They started doing their program backlog on a physical board with printed cards, but moved that online to their backlog management tool, and projected it on the wall. For reality checks to avoid over-commitment, they use ‘Yesterday’s Weather,’ a feature from Extreme Programming (XP).

While they are sticking with one-day PI Planning, their consensus is that they needed the two-day event in the beginning to help them learn how to do it more effectively. It’s noteworthy that while they reduced the length of PI Planning, to ensure that it remains effective they now hold three pre-planning sessions before each boundary.

Their determination to make it work has had an impact. They talk about the experience being “surprisingly positive,” and nobody seems to want to go back to how things were before SAFe. This is their latest summary of the outcome:

  • Less duplicated work. Teams are more in tune with each other, so they waste less time on redundant work.
  • Fewer dependency problems. Teams waste less time being blocked waiting for each other. Teams interact more smoothly with other departments and stakeholders.
  • Managers can update priorities and resolve impediments faster, because they have a better idea of what is actually going on.
  • Client trust has improved, because they have a better understanding of what the teams are working on and why.
  • Planning is easier and commitments are met more often, because the teams and portfolio planners learn how much work we can commit to and what our actual capacity is.

We’re glad to see LEGO getting these kinds of results. SAFe is a framework and as such, it is intended to be applied and evolved in context. We don’t care if people modify it, so long as they make it leaner and get the right business results!

Their downloadable 36-page in-depth summary makes for fascinating reading as it’s full of candid commentary and generously describes the thought process behind each decision. It also includes the top four things that helped them get a successful start. Go to the LEGO case study page to get the download. There you will also find the original video from Henrik and Eik discussing the first phase of the implementation.

Thanks, as always, to Henrik (aka ‘Dr. Agile’) and Eik (‘Captain Agile’) for documenting the LEGO journey. It’s a great service to the community and showcases what is possible when people approach new ideas with open minds and a commitment to learn.

Stay SAFe!
–Dean

Categories: Blogs

TASTE Success with an X-Matrix Template

AvailAgility - Karl Scotland - Fri, 01/06/2017 - 14:24

I’ve put together a new X-Matrix A3 template to go with the Backbriefing and Experiment A3s I published last month. These three templates work well together as part of a Strategy Deployment process, although I should reiterate that the templates alone are not sufficient. A culture of collaboration and learning is also necessary as part of Catchball.

While creating the template I decided to change some of the language on it – mainly because I think it better reflects the intent of each section. However, a side-benefit is that it nicely creates a new acronym, TASTE, as follows:

  • True North – the orientation which informs what should be done. This is more of a direction and vision than a destination or future state. Decisions should take you towards rather than away from your True North.
  • Aspirations – the results we hope to achieve. These are not targets, but should reflect the size of the ambition and the challenge ahead.
  • Strategies – the guiding policies that enable us. This is the approach to meeting the aspirations by creating enabling constraints.
  • Tactics – the coherent actions we will take. These represent the hypotheses to be tested and the work to be done to implement the strategies in the form of experiments.
  • Evidence – the outcomes that indicate progress. These are the leading indicators which provide quick and frequent feedback on whether the tactics are having an impact on meeting the aspirations.

Hence, working through these sections collaboratively can lead to being able to TASTE success.

Categories: Blogs

Advanced Agile Practices Workshop at GDC - Free book

Agile Game Development - Thu, 01/05/2017 - 18:45
Announcing the “Advanced Agile Game Development Practices” workshop for the 2017 Game Developers Conference on Monday, February 27th, 2017:

“Agile practices are no longer considered experimental, but mainstream, yet many still struggle with them. In this workshop you will learn and share the successful practices and techniques that agile studios have created over the past decade of its application.”

This workshop is intended for game developers who have used agile practices, so they can share what has worked and what hasn’t with other game developers.

Update: Attendees to this workshop will receive a free copy of the draft of my next book.

Categories: Blogs

Connecting with Humans

Johanna Rothman - Thu, 01/05/2017 - 17:59

I just read Zappos is struggling with Holacracy because humans aren’t designed to operate like software. I’m not surprised. That’s because we are humans who work with other humans. I want to talk with people when I want to talk with them, not when some protocol tells me I must.

It’s the same problem when managers talk about “resources” and “FTEs” (full-time equivalents). I don’t know about you. I work with resourceful humans. I work with people, regardless of how much time they work at work.

If the person I need isn’t there, I have some choices:

  • I can cc the “other” person(s) and create a ton of email
  • I can ask multiple people and run the risk of multiple people doing the same work (and adding to waste)
  • I can do it myself—or try to—and not finish other work I have that’s more important.

There are other options, but those are the options I see most often.

We each have unique skills and capabilities. I am not fond of experts working alone. And, I want to know with whom I can build trust, and who will build trust with me.

We build relationships with humans. (Okay, I do yell at my computer, but that’s a one-sided relationship.) We build relationships because we talk with each other:

  • Just before and just after meetings. This is the “how are the kids? how was the wedding? how was the weekend?” kind of conversation.
  • When we work with each other and explain what we mean.
  • When we extend trust and we provide deliverables to build trust.

When we talk with each other, we build relationships. We build trust. (Some of us prefer to talk with one person at a time, and some of us like to speak with more. But we talk together.) That discussion and trust-building allows us to work together.

This relationship-building is one of the challenges for geographically distributed teams, and part of why they may not feel like teams. The feelings might be missing in a collocated team, too. Standups work because they are about micro-commitments to each other. (Not to the work, to each other as humans.)

I’m a Spock-kind of person, I admit. I work to build human relationships with colleagues. I work at these relationships because the results are worth it to me. Some of you might start with the people first, and you will build relationships because you like people. I’m okay with that.

Categories: Blogs

Global warming – simplified summary

Henrik Kniberg's blog - Thu, 01/05/2017 - 13:59

OK, here’s a (very) simplified summary of what I’ve learned about global warming after digging deep the past few weeks.

  1. Global warming is a major threat to life as we know it. It’s a LOT worse than most people realize.
  2. Global warming is caused (mostly) by increasing CO2 in the atmosphere.
  3. The CO2 increase comes (mostly) from us burning oil & coal (“fossil fuels”). Adds about 20-30 billion tons of CO2 per year.
  4. So we need to (mostly) stop burning oil & coal.
  5. We burn oil & coal (mostly) for electricity and transport. Coal power plants, car/plane/ship fuel, etc.
  6. We want to keep electricity and transport, but we also want to stop global warming, therefore we need to get electricity and transport without burning oil & coal.
  7. We know how to do that (solar, wind, electric cars, etc). The technology has been figured out, and the prices are at the tipping point where oil & coal can’t compete economically.
  8. So now we just need to hurry up and roll out those solutions! Every single reduced ton of CO2 counts.
  9. Unfortunately shit is going to hit the fan either way (because it’s already launched so to speak), but at least we can slow it down, reduce the impact, and buy us some time.

So pull whatever strings you can to help out – technology, policy, economy, communication, etc. Inform yourselves & each other. People have varying degrees of discretionary time, money, knowledge, voting power, contacts, influence, and motivation. But the more people try to help in one way or another, the more difference it will make as a whole.


Categories: Blogs

Game Play-Throughs During the Sprint

Agile Game Development - Wed, 01/04/2017 - 20:19

Regular team play-throughs of the game can add a lot of value through improved focus on the sprint goal and increased cross-discipline collaboration.
Practice

During the sprint, when the game is in a state where progress can be seen by the team, they hold a play-through of the areas related to the sprint goal.  Anyone can take the controls of the game, but usually it’s not the Scrum Master.  Depending on the state of a feature or mechanic, the developer who is directly working on what is to be played may show it, but it’s preferable to have someone less familiar drive the play-through.  This shows areas where the player interface might be confusing.  During the play-through, anyone on the team can suggest improvements or identify problems to be fixed.

The duration and frequency of play-throughs can vary.  If they are short, they can be done daily, but longer ones once or twice a week work too.
Coaching tips

If the team has nothing to show half-way through the sprint, this is a great opportunity to ask them if there is any way to demonstrate progress earlier.  Earlier play-throughs create more opportunity to take advantage of emergence and to avoid late-sprint crunch and compromise.

Additionally, you may want to invite a senior designer or art director to listen in.  This creates the opportunity for feedback (after the play-through) among the disciplines.  Make sure that people outside the team understand that the play-through is not an opportunity to change the sprint goal.

I’ve always found that play-throughs held just before the daily scrum or at the end of the day are best (for different reasons).  Experiment!
Categories: Blogs

Certified Agile Leadership (CAL1) Visual Summary

Agilitrix - Michael Sahota - Wed, 01/04/2017 - 19:54

I am so grateful to Zuzi Šochová for creating this wonderful infographic summarizing what she learned at my Certified Agile Leadership (CAL1) training in California last month. You can see a detailed list of my course contents and learn more about this training on the course description page.

The post Certified Agile Leadership (CAL1) Visual Summary appeared first on agilitrix.com - Michael Sahota.

Categories: Blogs

#noprojects: If You Start a Project, You’ve Already Failed

TV Agile - Wed, 01/04/2017 - 19:07
I want to be controversial for a moment and propose an end to IT projects, project management & project managers. I propose that the entire project process is flawed from the start for one simple reason. #noprojects means that if you need to run a project, you’ve already failed. By definition, an IT project is […]
Categories: Blogs

5 Qualities of a Bad ScrumMaster

Leading Agile - Mike Cottmeyer - Wed, 01/04/2017 - 15:02

A ScrumMaster is one of the three key roles of the Scrum Framework. Ken Schwaber and Jeff Sutherland conceived the Scrum process in the early '90s. With so many years having passed, you’d think organizations would better understand the qualities of a good ScrumMaster. More noteworthy, they should know the qualities of a bad one.

Because of this, I created a simple infographic to focus on both good and bad qualities of ScrumMasters.  I’ve noticed that, as organizations begin to scale, roles and responsibilities begin to blur. People may be asked to take on ScrumMaster responsibilities.  Do you have the right qualities?

View and download the free infographic:  10 ScrumMaster Qualities


5 Qualities of a Good ScrumMaster

First, a Servant Leader is an empathetic listener and healer. This self-aware steward is committed to the growth of people. Second, a Coach can coach the other team members on how to use Scrum in the most effective manner.  Third, the Framework Champion is an expert on how Scrum works and how to apply it. Next, the Problem Solver protects the team from organizational disruptions or internal distractions or helps remove them.  Last, the Facilitator is a neutral participant who helps a group of people understand their common objectives and assists them to achieve these objectives.

5 Qualities of a Bad ScrumMaster

First, the Boss has the ability to hire and fire others.  Second, the Taskmaster myopically focuses on assigning and tracking progress against tasks. Third, a Product Manager is responsible for managing schedule, budget, and scope of the product. Next, the Apathetic ScrumMaster lacks interest in or concern for the emotional, social, or spiritual well-being of others.  Last, the Performance Reviewer is responsible for documenting and evaluating job performance.

Summary

While you may call yourself a ScrumMaster, understand that people who understand Scrum are going to have expectations.  If you have any of the bad qualities that I listed above and in the infographic, maybe you should find someone else to do the job.

The post 5 Qualities of a Bad ScrumMaster appeared first on LeadingAgile.

Categories: Blogs

The Simple Leader: Connected Spirit

Evolving Excellence - Wed, 01/04/2017 - 11:27

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen

We are not human beings on a spiritual journey. We are spiritual beings on a human journey.
– Stephen R. Covey

The majority of humans believe in some type of a connection to a greater power, be it truly divine or just universal. Some may believe but simply go through the motions drilled into them since birth, never questioning or validating the experience. Some, like myself, affirm the existence of something else—even if we don’t understand what that is.

The scientist in me stares up at the stars, knowing there are countless billions of them, potentially with civilizations vastly older and more developed than ours. Then I contemplate recent advances in fields like quantum mechanics, where entanglement creates instantaneous connections over vast distances, making me wonder if we’re starting to see the connection between the physical world and the soul. I see how the evolution of the “internet of things” has already made billions of devices instantly accessible and controllable, and wonder how long it will be before every molecule in our world can be similarly addressed and manipulated.

The curious learner in me has spent years reading and analyzing numerous books on the history of religions, and I am amazed at the remarkable similarities between them. As one religious scholar friend once told me, it’s as if different groups of people were watching the same game from different parts of a stadium—some from the front row, others from high up in the standing-room-only section, still others from behind obstructions where they could only see part of the field. Each group recorded their experience in ways that were then distorted over time.

Episcopalian bishop and theologian John Shelby Spong has written about the impact of perspectives on religious literalism. One example he gives is the many ways ancient peoples described the rise of the sun each morning, from it being a star to being the powerful god Ra. Culture, religion, and knowledge shaped how different groups understood the same event. Other theologians, such as Catholic priest Thomas Merton, have found that seemingly disparate religions, such as Buddhism and Christianity, can be very complementary.

Like many people, I have felt an unequivocal, undeniable force at many times in my life. When dealing with exceptional stress, loss, or difficult decisions, it was there. It’s no longer faith for me—it’s real. I feel it while walking in nature, or even at this very moment, while looking out over the Caribbean while on vacation.

Each person’s experience is unique. But take time, perhaps while surrounded by the beauty of nature, to contemplate your spiritual existence. Being able to draw strength from that will bring peace. Peace will help calm your mind, enabling you to understand who you are.

Categories: Blogs

Estimating: Bottom-up vs. Top-down

Agile Estimator - Tue, 01/03/2017 - 22:39
Bottom-up vs. Top-down Table

When most people think about estimating, they are thinking about bottom-up estimating. When your car needs to be repaired, you bring it to a mechanic. If you need new brakes, you will get an estimate for the cost of the brakes and the amount of time that is required to install them. If you also need an oil change, then the cost of that is added to your estimate. Software developers tend to think the same way. They attempt to identify the tasks that must be performed. They estimate the time for each task and add up these estimates. Agile developers do this. The steps of agile estimating are explained in Traditional Agile Estimating.

Some organizations already have Software Development Life Cycles (SDLCs) that they have specified. These SDLCs give all of the tasks that must be performed to develop software. However, many of the steps have to be broken down into finer detail. For example, there may be a task called Code Modules, which is both difficult to estimate and to control. It ends up being broken into Code Payment Screen, Code A/R Report and a host of others. Early in the life cycle, it is very difficult to specify all of these tasks and impossible to estimate them.

People involved in agile development usually think of estimating from the bottom-up. They will identify as many user stories as possible early in the life cycle. They will then use a technique like estimating poker to assign story points. In summary, estimating poker is a collaborative technique that involves the development team. User stories are considered one at a time. Each team member assigns a number of story points to the story. They discuss it until they reach consensus and then move on to the next user story.
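To make the mechanics concrete, here is a toy sketch of a single round in Python. The deck, the near-consensus shortcut, and the function name are illustrative assumptions; in practice the team simply discusses and re-votes until everyone agrees:

```python
CARDS = [1, 2, 3, 5, 8, 13, 21]  # a common, Fibonacci-like planning deck

def poker_round(votes):
    """One estimating-poker round: return the consensus story-point value,
    or None if the team should discuss the outliers and vote again."""
    if len(set(votes)) == 1:
        return votes[0]                  # everyone played the same card
    if max(votes) <= 2 * min(votes):     # close enough: take the most common card
        return max(set(votes), key=votes.count)
    return None                          # wide spread: discuss, then re-vote

print(poker_round([5, 5, 5]))    # -> 5 (immediate consensus)
print(poker_round([3, 5, 5]))    # -> 5 (near-consensus shortcut)
print(poker_round([2, 13, 5]))   # -> None (outliers explain their reasoning)
```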

Managers love the idea of bottom-up estimating. If all of the tasks necessary to develop an application are estimated, they can be placed in a work breakdown structure and a Gantt chart. This gives the illusion of control. The developers love the idea of bottom-up estimating. Stories and tasks must be identified as part of the development process. Therefore, the bottom-up estimate is not extra work just associated with estimating. This is consistent with agile principles and practices. Statisticians love the idea of bottom-up estimating. Whether estimating by task or user story, each component gets its own estimate. The estimates will usually be incorrect, but the errors will tend to cancel each other out. In theory, it is a winning approach.

In practice, you just cannot do bottom-up estimating early in the life cycle. At that point, project sponsors, end users and business analysts are still developing application artifacts like a feasibility study. Sponsors and users do not know what logical data models are. Business analysts know what they are, but probably have no idea how long it will take to develop one before the scope of the project is better established. For many applications, the development environment has not yet been decided on. Data warehouse applications may be developed using special software packages with entirely different development tasks than an organization typically specifies in its SDLC. In most cases, bottom-up estimating is impossible to do correctly early in the life cycle.

Top-down estimating begins with establishing the size of the application to be developed. Knowing the size, algorithmic models predict how much effort and how much calendar time will be required to develop the application. This approach was developed when the waterfall approach to software development was popular. Therefore, these models typically predicted how much time would be spent in the analysis, design and coding phases of application development. Some approaches would predict the amount of time for various activities, like project management.

In the beginning, that size was expressed in lines of code. There were two problems with this. First, you only know the number of lines of code after you have developed the application, when you no longer need the estimate. Many organizations developed heuristics to help them predict lines of code. These rules of thumb were tied to the experience of the organization. For example, at one time NASA would predict the number of lines of code in satellite support software based on the weight of the satellite itself. The second problem can be summarized by Capers Jones's statement that using lines of code should be considered professional malpractice. There are many problems with the measure. In one of his books, Capers shows that it often misrepresents the value of software. For example, is 2,000,000 lines of assembly language more valuable than 20,000 lines of COBOL? Should it take 100 times longer to write? Even more to the point, with so many development environments being built around screen painters and other tools that do not actually have lines of code, the antiquated measure has become unusable. Function points, use case points and a host of lesser-known measures have taken its place.

Barry Boehm (no relation) developed several estimating models that he called the Constructive Cost Model (COCOMO) in 1981. One of these models was Basic COCOMO. It transformed the number of lines of code into the person-months of effort and the calendar months of schedule that would be required for application development. Practitioners at the time found ways to drive COCOMO from function points as opposed to lines of code.
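As a reference point, Basic COCOMO's published equations are simple enough to sketch in a few lines of Python (the 32 KLOC input below is an arbitrary example):

```python
# Basic COCOMO (Boehm, 1981): effort and schedule from size alone.
# effort (person-months) = a * KLOC^b;  schedule (months) = c * effort^d
MODES = {
    #                a    b     c    d
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule

effort, schedule = basic_cocomo(32, "organic")
print(f"{effort:.0f} person-months over {schedule:.0f} calendar months")
# -> roughly 91 person-months over 14 calendar months
```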

Basic COCOMO was not as accurate as people wanted. Therefore, Boehm introduced Intermediate COCOMO at the same time. He actually introduced product-level and component-level versions of Intermediate COCOMO, but the difference is not important at this point. What is important is that Intermediate COCOMO utilized cost drivers. Cost drivers impacted the estimates. They were necessary and made sense.

Imagine there are two applications that are 100,000 source lines of code. Will they take the same amount of time to develop? Probably not. There will be two types of differences between the two application projects. The first type is product differences. One application might be a computer game and the other an embedded system in a piece of medical equipment. The second application will have a higher required reliability. This will impact its development time. There are other product-related cost drivers. The complexity of the products may also be different and impact the development time. The other class of cost drivers is associated with the development process. How experienced is the team with this type of application? How experienced is the team with the development language/environment being used? These cost drivers also impact development effort and schedule. In fact, cost drivers can change development effort by an order of magnitude.
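A minimal sketch of how Intermediate COCOMO layers those cost drivers onto the size equation, using the two 100 KLOC applications above. The effort equation and the organic-mode coefficients are Boehm's; the specific multiplier values are illustrative picks in the spirit of his published tables:

```python
# Intermediate COCOMO: effort = a * KLOC^b * EAF, where the Effort
# Adjustment Factor (EAF) is the product of the cost-driver multipliers.

def intermediate_cocomo(kloc, drivers, a=3.2, b=1.05):
    eaf = 1.0
    for multiplier in drivers:
        eaf *= multiplier
    return a * kloc ** b * eaf    # person-months

# Same 100 KLOC size, very different products (multipliers are illustrative):
game    = intermediate_cocomo(100, [0.88])        # low required reliability
medical = intermediate_cocomo(100, [1.40, 1.30])  # very high reliability, high complexity

print(f"game: {game:.0f} PM, medical device: {medical:.0f} PM")
# The medical system costs roughly twice the game -- the cost drivers,
# not the size, account for the difference.
```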

COCOMO was not the only costing model around. At about this time, Larry Putnam introduced Software Lifecycle Management (SLIM). The Walston-Felix IBM-FSD model and the Price-S model were two other top-down models introduced at about the same time. Which one was best? Nobody knows! There were several bake-offs, but none actually answered that question. It turns out it was impossible to answer.

Which car is best? In 1969, I saw a movie called Grand Prix. Pete Aron is a race car driver who is just about unemployable. He is reckless. A Japanese car company hires him. He wins the race. Why? If you are reading this today, and obviously you are, then you might think it was because the Japanese are capable of making a fine automobile. In 1969, this would never have occurred to you. The Japanese had introduced motorcycles to America and they were a failure. Japanese cars would be the same. Pete Aron won the race because it is the driver, not the car, that wins the race. He was driven to win and afraid to lose. That is all there was to it. Automotive enthusiasts might debate the point. However, when it comes to estimating there is no debate. It is the estimator, not the model, that produces a useful estimate!

Practitioners started to use function points to drive the top-down models. Capers Jones had produced some tables that showed how many lines of code were required to implement a function point. Thus, function points could drive models like COCOMO. Some practitioners used unadjusted function points. There were complications when the Value Adjustment Factor (VAF) was used. Which General System Characteristics (GSCs) resulted in more lines of code? They were not adequate to use in place of cost drivers. A minimum VAF makes the adjusted function point size 65% of its unadjusted size; the maximum makes it 135%. The size difference is only a factor of about 2. Cost drivers could usually impact the estimates to a much greater extent. Now, the International Function Point Users Group (IFPUG) has introduced the Software Non-functional Assessment Practices (SNAP). This is a counting approach that might replace the product cost drivers, but not the process ones.
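The VAF arithmetic behind those 65% and 135% bounds is easy to make concrete. The formula is IFPUG's published one; the ratings in the example are hypothetical:

```python
def adjusted_function_points(unadjusted_fp, gsc_ratings):
    """IFPUG adjustment: 14 General System Characteristics, each rated 0-5,
    sum to a Total Degree of Influence (TDI); VAF = 0.65 + 0.01 * TDI."""
    assert len(gsc_ratings) == 14 and all(0 <= g <= 5 for g in gsc_ratings)
    tdi = sum(gsc_ratings)
    vaf = 0.65 + 0.01 * tdi        # ranges from 0.65 to 1.35
    return unadjusted_fp * vaf

print(adjusted_function_points(500, [0] * 14))  # 325.0 -- the 65% floor
print(adjusted_function_points(500, [5] * 14))  # 675.0 -- the 135% ceiling
```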

These top-down techniques can often be performed by someone who is not familiar with all of the nuances of system development. The individuals must be familiar with the model being used, such as COCOMO. In addition, they must be trained in the sizing measure being used, such as function point analysis. This means that there is usually a small group of estimators in most organizations. In an organization using agile development, this might be a function that the product managers take on. This way, they can report back to the sponsors and other users what they expect in terms of schedule for an application development. Many organizations rely on consultants to perform these estimates. An independent estimator is often a good choice. This estimator is not overstating an estimate in order to negotiate for more resources, nor understating the estimate in order to pressure the development team to deliver faster.

Estimators look for techniques that are orthogonal to one another. This means that they are statistically independent. Top-down and bottom-up estimating approaches can be orthogonal. The bottom-up method is usually performed during the development just by virtue of identifying tasks and assigning people to them. If a top-down estimate has been developed, then it can be compared to what is being indicated by the bottom-up estimate at any time.

In the perfect world of agile systems development, all of the activity goes directly into developing application code. This is a drawback of top-down estimating. The effort that goes into it does not directly implement the application. If that effort is performed by a non-developer, then it becomes more of a business decision: is the time and effort spent developing the estimate helping the project sponsor to make better business decisions? Another area of concern is the distraction that this may be for the development and user communities. If the developers must answer questions in order to size the application, then this detracts from development effort. If a user must answer questions, then that user may be distressed if and when a developer asks the same questions again. The value of the estimate must exceed these costs, or the estimate should not be done.

The most modern of the cost models do not fit neatly into the bottom-up or top-down category. COCOMO II has replaced COCOMO as the model of choice among COCOMO fans. SPQR, Checkpoint and KnowledgePlan were released by Software Productivity Research (SPR), then under the direction of Capers Jones. Dan Galorath's SEER-SEM is one of the more recent, commercially successful estimating models. The pros and cons of these approaches are basically the same as for top-down models.

Categories: Blogs