“Adopting SAFe has set in motion the skill development and mindset for successful organizational change even as we scale to new programs, Release Trains, and people.”
—Gary Dawson, Assistant Director, Solutions Delivery
For organizations operating in highly regulated industries, the transition from Waterfall to Agile adds an additional layer of risk to what is already a daunting undertaking. Rapid and vast change, if not done properly and with cross-organizational collaboration, has the potential to be disruptive and actually hinder advancement.
We know that SAFe is emerging as a solution in regulated industries, so we’re always glad when we get a chance to peek inside one of these transformations. The folks from the United Kingdom’s NHS Blood and Transplant (NHSBT) have shared their SAFe story, and there’s much to learn from what appears to be an exemplary model for how to make the move from Waterfall to Agile in a phased approach without tipping over the boat.
NHSBT supplies safe blood to hospitals in England, and tissues and solid organs to hospitals across the United Kingdom. When the organization set out to revolutionize the way it interacts with blood donors, it needed to adopt a new technical platform and architecture. Yet it was clear its previous waterfall approach wouldn’t support the change. IT leaders also worried that change could impact the core business and the working relationships of employees.
With the help of Scaled Agile Partner, Ivar Jacobson (IJI), NHSBT chose SAFe to help support the governance and manage both the organizational and technical changes. They committed to a coaching and training plan—including a strategic Program Increment (PI) cycle—that ensured SAFe was adopted by employees with secure checkpoints and feedback along the way.
From the first PI onward, they noticed a difference in team effectiveness. In that first PI, they were able to deliver a committed, finite number of product features, as well as prioritize IT operations alongside the business part of the organization. Having delivered the first MVP in one of its programs, it’s now clear that the introduction and embedding of SAFe within NHSBT has provided significant, early business benefits.
“We would never have had that level of interaction in a waterfall delivery. To achieve the levels of understanding of both the technology and deliverables—along with all the inter-dependencies—would have taken months of calls, meetings, and discussions. We planned the next three months in just two days and now we retain that level of engagement on a daily basis.”
—Gary Dawson, Assistant Director, Solutions Delivery
Today, SAFe is part of everyday procedures at NHSBT, and it is poised to reach even more programs and people. Already, they have held two SAFe planning events for a potentially much larger program to replace its core blood offering system.
Make sure to check out the full case study for insights and inspiration; there’s a good amount of substance there that would be useful to any organization considering a move to SAFe, especially for those working in regulated industries.
Many thanks to Gary Dawson, Assistant Director, Solutions Delivery, NHSBT; and Brian Tucker, Principal Consultant and SPCT, IJI.
In a previous post about productivity patterns, I wrote about how I tried countless systems to improve my productivity. I tried everything from having a Franklin Planner, to using GTD, to Personal Kanban and the Pomodoro Technique. I asked myself why some methods worked and some did not. Why did I abandon two systems when I knew so many others had been successful with them? Why has Personal Kanban worked for me for the last 7 years? I started listing common traits, saw relationships, and discovered patterns. Not only are there three things I believe every system needs to work; I also see three things that are necessary to prevent you from abandoning that system.
Every personal or professional thing we do is part of a system or subsystem. Those systems have both success and failure patterns.

Success Patterns
For a system (defined as a set of principles or procedures to get something done or accomplished) to be successful, you always need ritual and habit.
- A ritual is a series of actions or type of behavior regularly and invariably followed by someone.
- A habit is a regular tendency or practice, especially one that is hard to give up. You need to be habitual with your rituals, as part of your system.
Early indicators that your system will fail include a lack of clarity, progress, or commitment (very similar to Mike Cottmeyer’s “Why Agile Fails”).
- Lack of clarity creates confusion and waste. Each step of a system should be actionable and repeatable. In order to ensure certainty around your system steps, write them down.
- If you lack progress, you will lose momentum. If you lose momentum, you will lose commitment to the system.
- Lack of commitment to the system results in you no longer using the system. You move on to something new to get the results you seek.
After I identified the patterns, I wanted to present a useful model to visualize the indicators that will, in time, cause the system to fail. I decided to base my model on the Business Model Canvas by Alex Osterwalder. Below you will see the five areas that need to be considered. Once complete, if you notice one or more of the sections is ambiguous or short on details, you should view that as a warning.
Scrum Framework Success Patterns
By using the Scrum Framework as an example system, I completed my system design canvas. Upon completion of the worksheet below, I can see whether there are any “gaps” in the system. As you may have guessed, there are no gaps if Scrum is properly implemented and followed. But if it is modified without expert guidance, a gap will become visible and provide an indication that the system is at risk of failure.
Because you may have a large organization where you are dealing with different kinds of dependencies, you may need to create “sub” system design canvases to account for organizational complexity. Scrum may not be enough. Don’t worry. The same rules apply.

Free Download
Interested in testing your system or subsystems? Download a free copy of the System Design Canvas and see if you are at risk of failure. Because I am providing this under a Creative Commons Attribution-Share Alike 3.0 Unported license, I welcome you to download it and modify it to meet your needs.
Let’s look at an example. It makes no sense to use a strict equality operator like === or !== on two operands that don’t have the same type: in such cases, === always returns false and !== always returns true. We have a rule to check for that, and this rule found the following issue in jQuery:
In this case, we know that “type” is either a string or undefined when it is compared to the boolean value false with a strict equality operator. This condition is therefore useless, and such a comparison is certainly a bug.
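The jQuery snippet itself isn’t reproduced here, but the pattern looks roughly like this (an illustrative reconstruction, not the actual jQuery source; the function and variable names are invented for the example):

```javascript
// "type" can only be a string or undefined, so a strict comparison
// against the boolean false can never be true.
function normalizeType(event) {
  var type = event ? event.type : undefined; // string or undefined
  if (type === false) {
    // Dead branch: === never equates a string/undefined with a boolean.
    return null;
  }
  return type;
}
```

The author almost certainly meant a truthiness check (`if (!type)`) or a comparison against another string, which is exactly the kind of intent mismatch the rule surfaces.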
Of course, we can go further. SonarJS embeds some knowledge about built-in objects and their properties and methods. We added a new rule “Non-existent properties shouldn’t be accessed for reading” which is based on that knowledge. It detects issues which could be due to a typo in the name of the property or to a mistake about the type of the variable, such as the following issue which was found in the OpenCart project:
This piece of code confuses two of its variables: “number” and “s”. The first one is a number and the second is a string representation of the first. The “length” property therefore exists on “s”, but is undefined on “number”. As a result, this function does not return what it’s supposed to.
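A minimal sketch of that mix-up (again illustrative, not the actual OpenCart source; the function name is invented) might look like this:

```javascript
// Reads "length" on the number instead of on its string representation "s",
// so the property access always yields undefined.
function addThousandsSeparator(number) {
  var s = String(number);
  // Bug: number.length is undefined, and undefined > 3 is always false,
  // so the separator is never inserted. The author meant s.length > 3.
  if (number.length > 3) {
    return s.slice(0, -3) + "," + s.slice(-3);
  }
  return s;
}
```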
We know that “methodArgs” may be an array: when it is, comparing it to a number doesn’t make sense and that’s what the rule detects. The author of this code probably intended to use methodArgs.length in the comparison.
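In sketch form (an invented function name, not the original code), the pattern is:

```javascript
// An array is compared to a number directly, instead of comparing its length.
function checkArity(methodArgs) {
  // Bug: the author almost certainly meant methodArgs.length > 2.
  // The relational comparison coerces the array to a string first
  // ("1,2,3" here), so the result is meaningless for multi-element arrays.
  return methodArgs > 2;
}
```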
How can SonarJS catch such mistakes? Briefly, we rely on path-sensitive dataflow analysis: as we explained a few months ago, our analyzer can explore the various execution paths of a function and the possible constraints on the variables. In the last few versions, we improved our engine so that it tracks the types of the variables. We derive type information based on indicators in the code such as:
- Literals, e.g. 42 is a number, [] is an array.
- Operators, e.g. the result of a + can be either a number (addition) or a string (concatenation).
- typeof expressions.
- Calls to built-in functions, e.g. we know that a call to Number.isNaN returns a boolean value.
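The indicators above can be seen at work in a few lines of ordinary code; each statement below gives a path-sensitive engine a type constraint to track (an illustrative sketch, not SonarJS internals):

```javascript
var n = 42;                   // literal: n is a number
var arr = [1, 2, 3];          // literal: arr is an array
var sum = n + 1;              // + on two numbers: sum is a number
var label = "id-" + n;        // + with a string operand: label is a string
if (typeof label === "string") {
  // typeof test: inside this branch, label is constrained to be a string
}
var bad = Number.isNaN(sum);  // built-in: Number.isNaN returns a boolean
```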
This not only allowed us to implement the rules I just described; it also improved existing rules not directly related to types. The rule that checks for conditions that are always true or false is now able to find new issues, such as the following one in the YUI project:
This piece of code tests whether “config” is a function twice. However, it’s re-assigned to null if the first test returns true. We therefore know for sure that the second test will return false. This rule doesn’t specifically check the types of the variables, but it is based on all the constraints we’ve derived on the variables and type is one of them.
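A stripped-down version of that pattern (an illustrative reconstruction, not the actual YUI source; the function name is invented) looks like this:

```javascript
// "config" is tested twice, but re-assigned to null whenever the first test
// succeeds, so the second test can never be true.
function initWidget(config) {
  var callback = null;
  if (typeof config === "function") {
    callback = config;
    config = null; // config can no longer be a function past this point
  }
  if (typeof config === "function") { // always false: dead branch
    config();
  }
  return callback;
}
```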
This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen
To many people, Lean manufacturing was invented in Japan and is synonymous with the Toyota Production System (TPS). They will tell you that TPS is the manufacturing philosophy that enabled Toyota to effectively conquer the global automobile market by reducing waste and improving quality. While that is true, it is not the whole story. Lean has far deeper roots and broader potential.
Fundamentally, Lean is about creating value and empowering people, not just eliminating waste. It was developed long before Toyota—long before the 20th century, in fact. Some trace the roots of Lean all the way back to the Venice Arsenal in the 1500s, when Venetian shipbuilders could roll complete galley ships off the production line every hour, a remarkable achievement enabled by several weeks of assembly time being sequenced into a continuous, standardized flow. (The genius that helped the military engineers at the Venice Arsenal was none other than Galileo himself—perhaps the first-ever Lean consultant!)
By 1760, the French were using standardized designs and the interchangeability of parts to facilitate repairs on the battlefield. Eli Whitney refined the concept to build 10,000 muskets for the U.S. government at previously unheard-of low prices. Militaries around the world fine-tuned continuous flow and standardized processes throughout the 1800s. Over time, standardization slowly entered into commercial manufacturing.
In 1910, Henry Ford moved his nascent automobile manufacturing operations into Highland Park, Michigan, which is often called the “birthplace of Lean manufacturing.” Ford used continuous flow and standardized processes, coupled with innovative machining practices to enable highly consistent, repetitive assembly. Ford often cited the frugality of Benjamin Franklin as an influence on his own business practices—especially Franklin’s advice that avoiding unnecessary costs can be more profitable than increasing sales.
Ford was able to reduce core chassis assembly time from twelve hours to less than three. This reduced the cost of a vehicle to the point where it became affordable to the masses and created the demand that helped build Ford’s River Rouge plant, which became the world’s largest assembly operation with over 100,000 employees. In 1911, Sakichi Toyoda visited the United States and witnessed Ford’s Model T production line. He returned to Japan to apply what he saw on his company’s handloom weaving machines.
As Ford and Toyoda were streamlining their operations, others were making parallel improvements in the quality and human factors of manufacturing. In 1906, the Italian Vilfredo Pareto noticed that 80% of the wealth was in the hands of 20% of the population, a ratio he found could be applied to areas beyond economics. J.M. Juran took the Pareto Principle and turned it into a quality control tool that focused on finding and eliminating the most important defects. A few years later, Walter Shewhart invented the control chart, which allowed managers to monitor process variables. Shewhart went on to develop the Plan-Do-Study-Act improvement cycle, which Dr. W. Edwards Deming then altered to create the Plan-Do-Check-Act (PDCA) cycle still in use today.
In the early years of the twentieth century, efficiency expert Frank Gilbreth advanced the science of management by observing construction and factory workers. He and his wife, Lillian, started a consulting company to teach companies how to be more efficient by reducing human motion during assembly processes. Sakichi Toyoda, having already benefited from Henry Ford’s ideas, became an expert at reducing human-induced variability in his factories.
Then came World War II. At the beginning of the war, Consolidated Aircraft in San Diego was able to build one B-24 bomber per day. Ford’s Charles Sorensen thought he could improve that rate, and as a result of his efforts, a couple of years later the Willow Run plant was able to complete one B-24 per hour.
With almost all of the traditional male factory workforce deployed overseas for the war, the human aspect of manufacturing moved front and center. Training Within Industry (TWI) was born as a method to rapidly and effectively train women to work in the wartime factories. After the war, TWI found its way to Japan even as it faded away in the U.S. (only recently has it returned).
The end of the war saw a divergence in philosophies between the two countries. In the U.S., Ford adopted the GM style of top-down, command-and-control management and effectively abandoned Lean manufacturing. Meanwhile, in Japan, Toyota accelerated the development and implementation of Lean methods. The company transitioned from a conglomerate that still included the original loom business to a company focused on the auto market. Taiichi Ohno was promoted to machine shop manager, and under his watch Toyota developed its concepts of waste elimination and value creation. The human side of manufacturing was especially important to Ohno, who transferred increasing amounts of authority and control directly to workers on the shop floor.
After being sent to Japan in 1946 and 1947 by the U.S. War Department to help study agriculture and nutrition, Dr. Deming returned to Japan in the early 1950s to give a series of lectures on statistical quality control, demonstrating that improving quality can reduce cost. Toyota embraced these concepts and embedded them into the Toyota Production System (TPS), leading to Toyota winning the Deming Prize for Quality in 1965. Over several years, Taiichi Ohno and Shigeo Shingo continued to refine and improve TPS with the development of pull systems, kanban, and quick changeover methods.
By the early 1970s, the rest of the world was beginning to notice Japan’s success, and managers assembled for the first study missions to Japan to see TPS in action. Norman Bodek and Robert Hall published some of the first books in English describing aspects of TPS, and by the mid-1980s, several U.S. companies, notably Danaher, HON, and Jake Brake, were actively trying the “new” concepts.
The term “Lean” was first coined by John Krafcik in his MIT master’s thesis on Toyota, and then popularized by James Womack and Daniel Jones in the two books that would finally spread a wider knowledge of TPS: The Machine That Changed the World in 1990 (written with Daniel Roos) and Lean Thinking in 1996. Lean Thinking described the core attributes of Lean as:
- Specify value from the perspective of the customer.
- Define the value stream for a product, then analyze the steps in that stream to determine which are waste and which are value-added.
- Establish continuous flow of products from one operation to the next.
- Create pull between process steps to produce the exact amount of products required (i.e., make to order).
- Drive toward perfection, both in terms of quality and eliminating waste.
Those books, as well as organizations such as the Association for Manufacturing Excellence (AME) and the Lean Enterprise Institute, drove a widespread acceptance of Lean as a path to productivity and profitability. By the year 2000, Lean methods were moving out of manufacturing and into office and administrative environments. The spread of Lean continues today, and currently, Lean healthcare, Lean government, Lean information technology (and Agile software development), and Lean construction are particularly popular.
A couple of weeks ago, I spoke locally about Manage Your Project Portfolio. Part of the talk is about understanding when you need project portfolio management and flowing work through teams.
One of the (very sharp) fellows in the audience asked this question:
As you grow, don’t you need component teams?
I thought that was a fascinating question. As agile organizations grow, they realize the value of cross-functional teams. They staff for these cross-functional teams. And, then they have a little problem. They can’t find enough UX/UI people. Or, they can’t find enough database people. Or, enough writers. Or some other necessary role for the “next” team. They have a team without necessary expertise.
If managers allow this, they have a problem: They think the team is fully staffed, and it’s not. They think they have a cross-functional team that represents some capacity. Nope.
Some organizations attempt to work around the scarce-expertise problem. They have “visitors” to a team, filling in where the team doesn’t have that capability.
When you do that, you flow work through a not-complete team. You’re still flowing work, but the team itself can’t do the work.
You start that, and sooner or later, the visitor is visiting two, three, four, and more teams. One of my clients has 12 UI people for 200 teams. Yes, they often have iterations where every single team needs a UI person. Every single team. (Everyone is frustrated: the teams, the UI people, and management.)
When you have component teams and visitors, you can’t understand your capacity. You think you have capacity in all those teams, but they’re component teams. They can only go as fast as the entire team, including the person with the scarce expertise, can deliver features. When your team is not first in line for that scarce person, you have a Cost of Delay. You’re either multitasking or waiting for another person. Or, you’re waiting for an expert. (See CoD Due to Multitasking and CoD Due to Other Teams Delay. Also see Diving for Hidden Treasures.)
What can you do?
- Flow work through the experts. Instead of flowing work through teams that don’t have all the expertise, flow it through the experts themselves.
- Never let experts work alone. With any luck, you have people in the team working with the experts. In Theory of Constraints terms, this is exploiting the constraint. It doesn’t matter what other work you do: if your team requires this expertise, you need to know about it and exploit it (in the TOC sense of exploitation).
- Visualize the flow of work. Consider a kanban board such as the one below that shows all the work in progress and how you might see what is waiting for whom. I would also measure the Cost of Delay so you can see what the delay due to experts is.
- Rearrange backlog ranking, so you have fewer teams waiting for the scarcity.
Here’s the problem. When you allow teams to compete for scarcity (here, it’s a UI person), you don’t get the flow of work through the teams. Everything is slower. You have an increased Cost of Delay on everything.
Visualizing the work helps.
Flowing the work through the constrained people will show you your real capacity.
Needing component teams is a sign someone is still thinking in resource efficiency, not flow efficiency. And, I bet some of you will tell me it’s not possible to hire new people with that skill set locally. I believe you.
If you can’t hire, you have several choices:
- Have the people with the scarce expertise consciously train others to be ready for them, when those scarce-expertise people become available. Even I can learn some capability in the UI. I will never be a UI expert, but I can learn enough to prepare the code or the tests or the experiments or whatever. (I’m using UI as an example.)
- Change the backlogs and possibly reorganize as a program. Now, instead of all the teams competing for the scarce expertise, you understand where in the program you want to use that scarce expertise. Program management can help you rationalize the value of the entire backlog for that program.
- Rethink your capacity and what you want the organization to deliver when. Maybe it’s time for smaller features, more experiments, more MVPs before you invest a ton of time in work you might not need.
I am not a fan of component teams. You could tell, right? Component teams and visitors slow the flow of releasable features. This is an agile management problem, not just a team problem. The teams feel the problem, but management can fix it.
When I go in to do large-scale transformations, I’m invariably asked the question, “Should the PMO go away?” The reasoning is that going agile should get rid of all of the oversight, the Gantt charts, the weekly status meetings, release scheduling. The list goes on.
Before I address the question, I want to give you some background on what we typically see when we hit the ground from a coaching standpoint. The company is in an ad hoc state. They may be delivering, but it isn’t always on time. Scope creep is inevitable in this environment as they schedule 3-, 6-, and maybe even 12-month releases. As much as the teams try to be agile, there are a number of processes in place to make sure the product actually gets out the door. There’s some release planning up front, and expectations are set. Development may occur in sprints, but integration testing and acceptance testing lag behind. Sometimes integration testing is so complicated that it has to happen in a big time box towards the end. The business becomes disengaged while development is off sprinting. This process isn’t agile, and if you did lay it out in a Gantt chart, it would present very much like waterfall.
Now think about all the stage gates you have in your organization. Release planning sign-off. Weekly change control. Release scheduling. Release sign-off. Deployment planning. Deployment change control. Some organizations I’ve seen have 20 people on the phone during an overnight deployment. So why is this? The answer is simple. Over time, the organization has created an environment of mistrust. Promises have been broken. Buggy software has been delivered to customers. Fingers have been pointed: “Requirements were bad,” “Development is slow,” “Too many last-minute changes.” A number of reasons have created the need for the stage gates. Once a stage gate exists, it’s difficult to remove.
To get the organization back on track, we need to refocus on the 3 things that make up an agile process: backlog, team, and working, tested software. In essence: clarity, accountability, and measurable progress. In order to do this, we need governance, structure, and metrics. These things will get us to a predictable state. Once we get predictable, we can begin to rebuild trust in the organization.
The governance model must slice through the organization from top to bottom. In many organizations, this will take the form of at least 3 layers: Portfolio, Program, and Team. The Portfolio layer will deal with the creation, definition, and prioritization of themes and epics. The Program layer will create, define, and prioritize features. The Team layer is responsible for the implementation of the user stories derived from the features. This governance model will further define the process flows to go from inception to deployment.
What I have briefly described here is an initial step towards a logically planned-out transformation strategy. As you can see, in this first step we clearly define a structure and a governance model that leads to a predictable process. We can’t just teach agile practices and hope everybody sees the light. There are a number of manual orchestration activities in the organization to keep everything moving forward. As the organization moves further along the scale towards a more decoupled system of delivery, the manual orchestration will diminish. I refer to these manual orchestration and stabilization processes as scaffolding. As manual orchestration diminishes, the scaffolding can begin to come down. It is important in your transformation to identify the scaffolding and plan, as part of your future transformation efforts, to remove it.
So, “Should the PMO go away?” Not in this scenario. Some part of the organization needs to facilitate the manual orchestration at this stage of the transformation. If your organization already has a PMO, these are exactly the people you need to do that facilitation.
Can the PMO go away one day? The only responsible answer I can give is, “When your organization is ready.”
One last caveat. I’ve seen some organizations that are split: some parts need the PMO due to organizational and technical debt, while other parts have been built to be decoupled and on a continuous delivery cycle, eliminating the need for manual orchestration.
When we released SAFe Version 4.0 last January (it seems like forever ago in the lifetime of SAFe), we also introduced the ‘Implementing 1,2,3’ tab to provide our first published guidance on how to implement SAFe. That advice was sound, and it served well as basic guidance for implementing SAFe. Many successful implementations followed, as you can see from our Case Studies.
But we all know it takes more than that. How does one identify value streams and design the ARTs to begin with? How do you get ready for the first PI planning event? What do you do after you’ve launched that first ART? And so much more.
To address this larger issue of implementing SAFe at enterprise scale, we are pleased to announce a series of guidance articles which can now be found under the Implementation Roadmap main menu. There you will find this picture and upcoming links to 12 new articles (one for each roadmap step below), which provide more detailed guidance for implementing SAFe at scale. Of course, we all also know that there is no one right way to implement SAFe, but after hundreds of successful implementations, this pattern emerged as the most common, so we decided to share it here.

Figure 1. SAFe Implementation Roadmap
Please be advised that this series is a work in progress; we plan to release about an article a week until it is complete. As of this writing, the first article is posted. You can start the journey by clicking here.
Good luck with implementing SAFe; we are confident you will get the outstanding business results that you deserve.
Dean and the Framework team
Recently, after attending a Scrum Alliance webinar on “Best Practices in Coaching,” I was reminded of my experiences teaching Acting students at university, and how I used changing status to help them achieve their best.
Status refers to the position or rank of someone within a particular group or community. I believe it was Canadian Keith Johnstone who introduced the idea of “playing status” to theatre improv teams. It is used to create relationships between characters onstage, and to change those relationships to move a story forward.
Status can be indicated through position, posture, facial expression, voice and clothing. It is a fascinating tool for any trainer or coach to use.
At the beginning of a semester with new students, I would invite them to sit on the stage floor in a circle with me. I would welcome them, discuss my expectations of their learning, and tell them what they could expect from me. We’d go over the course syllabus and I’d answer questions. I purposefully put myself in an equal status to them, as a way of earning their trust, because the process of acting* requires huge amounts of trust. I also wanted to establish a degree of respect in them for the stage by all of us being in a “humble” position on the stage floor.
However, when I would introduce a new exercise to them that required them to go beyond their comfort zones, I would deliver instructions from a standing position while they were seated. By elevating my status, I conveyed the importance of the exercise, and it was a signal that it was not something they could opt out of. In this way, I could help them to exercise their creativity to a greater extent.
Another way I encouraged my students to take risks was to take risks myself. Sometimes I would illustrate an acting exercise by doing it myself first. For those few minutes I became a colleague with my students, one of them, equal in status. If I could “make a fool of myself” (which is how it may look to an outsider), then they could too.
I had one student who had great potential, but who took on the role of class clown and would not give it up. He fought against going deeper and getting real. One day, in an exercise where they had to “own” a line of dialogue, I had him in a chair onstage, while I and the rest of the students were seated. He had to repeat the line of text until it resonated with him and became real. After some minutes, nothing was changing in him. In desperation, I had him turn his chair around so his back was to us. I then indicated to the other students to quietly leave the room. He could hear something happening but was confused about it. He was not allowed to turn around and look.
When I allowed him to turn around it was only him and me left in the theatre. I had him go through the repetition exercise again. Without an audience, and with me still seated, he finally broke through the wall he had erected and connected with the line of text from his inner self. It was a wonderful moment of truth and vulnerability. I then allowed the other students back in, and had him find that connection again with the students there. He was able to do it.
He is grateful to me to this day for helping him get beyond his comfortable role as clown to become a serious actor.
When training or coaching, it seems to me there can be huge value in playing with status. Sometimes taking a lower status, an equal status, or a higher status can move a team or upper management into discovering whatever may have been blocking the process. Again, there are many ways to indicate status, and even a deliberate status change can effect progress.
In his book, “Improv-ing Agile Teams,” Paul Goddard makes some important observations about using status. He writes: “Even though status is far less obvious than what is portrayed on stage, individuals still can take small steps to encourage status changes within their own team. For example, asking a team member who exhibits lower status behaviours to take ownership of a meeting or oversee a process not only boosts that person’s confidence but also increases status among peers…these subtle actions can help make lower-status team members feel more comfortable when expressing new ideas or exposing hidden problems.”
A colleague reminded me of a 1975 publication called “Power: How to Get It, How to Use It,” in which author Michael Korda gives advice about facial expression, stance, clothing and innumerable ways to express “power.” The idea of using status in the context I’m writing about is not about gaining power, but about finding ways through one’s own status changes to help unlock the capacity and potential of others.
How can a coach use status to help someone in management who is blocking change? Is someone on a team not accepting what others have to offer because s/he is keeping his/her status high? Is a Scrum Master necessarily a high-status team member, or rather a servant to the team (low status)?
I am curious if any coaches or trainers out there have used status in a way that created growth and change.
*Good acting is a matter of the actor finding the truth in oneself as it relates to the character he or she is playing. It requires vulnerability and courage to step out of one’s known persona and take on another as truthfully as possible. Inherent truthfulness also applies to work in any other endeavour.
In my work, I draw on models, frameworks, and years of experience. Yet one of my most valuable tools is also the simplest: curiosity.
In an early meeting with a client, a senior manager expressed his frustration that development teams weren’t meeting his schedule. “Those teams made a commitment, but didn’t deliver! Why aren’t those people accountable?” he asked, with more than a hint of blame in his voice. As I spent more time in the organization, I heard other managers express similar wonderment (and blame).
I also noticed that whenever someone asked, “Why aren’t those people accountable?” (or some other blaming question), problem-solving ceased.
I know these managers wanted to deliver software to their customers as promised. But their blaming questions prevented them from making headway in figuring out why they were unable to do so.
I started asking different questions: curious questions.
- Who makes commitments to the customers, and on what basis? How do customer commitments, team commitments, and team capacity relate to each other?
- When “those teams” make commitments, is it really the people who will do the work committing, or someone else?
- What does “commitment” really mean here? Do all parties understand and use the term the same way?
- What hinders people from achieving what the managers desire? Do teams have the means to do their work?
- What is at stake, for which groups of people, regarding delivery of this product?
- What is it like to be a developer in this organization?
- What is it like to be a manager in this organization?
- What is it like to be a customer of this organization?
I worked with others in the client organization to learn about these (and other) factors. We developed and tested hypotheses, engaged in conversations, made experiments, and shifted the pattern of results.
For the most part, managers no longer ask blaming questions. They ask whether teams have the data to make decisions about how much work to pull into a sprint. They examine what they themselves say and do to reduce confusing and mixed messages. They review data, and adjust their plans.
Curiosity uncovered contradictions, hurdles, confusion, and misunderstandings, all of which we could work on to improve the situation.
So, there you have it. Curiosity is my number one Change Artist Super Power, and it can be yours, too.
On Monday, January 16th, the pre-sale for my Docker Recipes for Node.js Development ebook opens up. As I said in the last post, I need to sell 100 copies in the pre-order period, to ensure the book moves forward.
But before the pre-sale starts, I wanted to give you a sneak peek at the first bits of content that I’ll have ready for the pre-sale.

Writing That First Recipe Was Difficult
I mentioned previously that the first bits of content would likely be around debugging, and that has held true in the content I’ve worked on, so far. But things didn’t quite work out the way I had expected.
When I sat down to write the first recipe, I had intended to write a small bit on how to use the built-in Node.js command-line debugger within a container. But after doing that and asking a few friends for some feedback, I realized that I had not shown enough to get a sense of what the book would be like.
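To give a flavor of the kind of setup that first recipe assumes (the base image, file names, and layout below are illustrative for this post, not an excerpt from the book), a minimal Dockerfile for a debuggable Node.js app could look something like this:

```dockerfile
# Illustrative sketch only: the base image, file names, and layout
# are assumptions for this example, not content from the book.
FROM node:6

WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY package.json .
RUN npm install

# Copy in the application source
COPY . .

CMD ["node", "app.js"]
```

With an image built from this, the built-in CLI debugger can be started interactively by overriding the command, e.g. `docker run -it my-image node debug app.js` (newer Node.js versions use `node inspect` instead). The `-it` flags attach your terminal so you can type debugger commands.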
So I decided to write a few additional recipes to get a better sense of the flow and layout, and things started changing pretty quickly.
I’m still not completely happy with the writing at this point, but I think the recipe structures are starting to solidify, and I want to give you a peek into what that content will look like.

A Preview of Debugging In A Container
The content around debugging will likely be its own “Part ##” in the book, since I already have 3 recipes basically written as rough drafts, and may add one or two more.
The opening for that part of the book is roughly outlined, here:
Within the recipes (chapters), there will be a short scenario description to help you understand when the recipe in question would be best suited.
There will be recipe listings, of course, which are meant to be copy-and-paste chunks of code and configuration, to solve a specific problem.
And each recipe will come with cooking instructions, to provide additional description and detail on how to use the code and configuration found in the recipe listings.
Depending on the specific recipe, there will also be some additional detail about specific commands, or notes on items related to the recipe in question. I’m trying to keep the book as short as possible, while still providing enough information to be valuable.
This won’t be an introduction, but a collection of solutions for someone who is familiar with Docker but not yet comfortable using all their favorite development tools and techniques within it.

To Be Edited … Heavily
I do have a fair number of pages written already, but I don’t expect the content and structure that I’ve shown to be the final form of the book. Remember, the goal of the pre-sale is to get feedback, input and ideas from early readers. That’s where you come in.
The pre-sale starts on January 16th, and ends on the 31st.
If you buy the ebook in that period, you’ll have the opportunity to provide direct feedback on how to best move forward with the content and structure. You’ll receive updates to the book as they happen. And you’ll get much more content than just the ebook (some screencasts, cheatsheets, etc) at a significantly reduced price.
Stay tuned to the pre-sale and how it’s going by joining my mailing list (below). And be ready for the pre-sale launch – it starts in only a few days!
The post A Sneak Peek at Docker Recipes for Node.js Development appeared first on DerickBailey.com.
Custom Request Types
Custom Request types increase the number of scenarios that the Service Desk can be used for. For example, you could add "Project Request" if you're doing Portfolio Management, "IT request" for infrastructure guys, and much more. Alternatively, you can simplify Idea Management by removing all Request types except for "Idea".
- Requiring a comment for state transfers is now supported in Boards and Lists. If checked, users are requested to input a comment before moving an entity to the selected state.
- Fixed: Non-admin user could change their team role without 'add/edit team' permissions
- POP plugin won't create a requester if a Targetprocess User with the same email already exists
- Fixed email notification duplicates in the case of reply-to comments where the same person is mentioned and addressed
- Fixed: REST api/v1: InboundAssignables and OutboundAssignables endpoints with the CustomField collection included returned an empty value