… and soon-to-be book.
We’ve had requests for a single page that lists all the ongoing Beyond Scrum blog posts in one handy spot, at least until the book that they will become is released. We’re happy to oblige! The list below will be updated as new posts are added to the blog.
In his book “Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency”, Tom DeMarco makes the point that you can’t be creative when you are overworked or overburdened. Stress kills innovation, as does busyness. Little slack leads to little time to look around, which leads to little improvement. To be creative, your mind needs to feel free and unallocated, uncluttered even. Thankfully, we have a solution.
We have more than one, in fact. Let me first tell you about the solution that this article is not about. You can introduce slack into your organization with regular slack time. There are numerous well-known examples of companies that do things like FedEx Days: time set aside for self-directed work, not allocated to Prioritized Product Backlog Items. Dan Pink once endorsed this approach. In line with this, I like what Dan has to say about what motivates knowledge workers.
But there’s another solution to the lack of slack that we should entertain first. In fact, you are less likely to use FedEx Days if you aren’t first doing the following. To introduce this I must first point out something that Scrum says about managers. Scrum doesn’t say a whole lot about managers, but it does say at least this: managers must stop assigning tasks. Too bad that many managers get their power and sense of self-worth from this activity. Deciding how to get the work done in Scrum is left up to the self-organizing cross-functional team. The people who can best decide how the work should be done are those closest to the work.
Just to be clear, agile managers should not:
- make assignments
- hand out work
- direct people or tell them what to do
- make the hiring decision solo
- dictate software architecture (in my opinion; debatable)
- do work
- be an individual contributor
- be a hero
Well, gosh, then what should a manager do? Well, I’ll tell ya. Manage more people. Step in when the team needs help (but not too quickly). Manage risks. Also, you are still an agent of the company, handling legal matters, signing off on expenditures, etc.
But you could also do something else.
Move up to a higher level of value to the organization. Be the slack that has been wrung out of the team. Here are some specific suggestions:
- Keep an eye on the system, looking for improvements.
- Use A3 Problem Solving (A3 Thinking).
- Understand the capacity of the team.
- Protect the slack; protect capacity that is reserved for classes of work that require a short lead time.
- Ensure cross-training is happening (not by making assignments, but by having the team handle it).
- Understand the dynamics of the organization.
- Understand how value is created.
- Protect the team from interference.
- Make the organization effective; learn to look at it as a system.
- Support the team.
- Clear roadblocks.
- Provide good facilities; fight the facilities police. (Better: Teach facilities how value is created and how better facilities help create value.)
- Provide ample computing infrastructure, sufficient build machines and test machines (People are far more expensive than VMs).
- Use Derby’s 14 essential questions for managers or the book on this topic she is working on.
- Read Deming and Goldratt.
- Watch interpersonal interactions; notice when one team member pulls back or withdraws in a brainstorm, for example.
- Help the team hone their craft.
- Encourage the team to learn TDD and BDD by making room for them to learn (time – remove the schedule pressure while they learn).
- Think through policies, procedures and reward/review systems and improve them. (What messages do they send?)
- Understand what motivates knowledge workers (see the previous reference to Pink); Let creating that kind of environment be an imperative.
I contend that we should focus on continuous improvement of process and of people’s skills and knowledge, with a strong emphasis on empowerment and self-organization. Work on open, honest feedback. You’ll have better results if you do all that and just completely scrap the individual annual review. I love quoting Drucker here: “The average person takes 6 months recovering from a performance review.”
Let me bring it back together by saying you can’t do a good job with this other higher value stuff if you are down in the weeds assigning tasks to people. Delegate more. In fact, delegate everything. Free yourself to be the ultimate in valuable slack.
I'm sorry for not writing to you for quite some time. But, you see, I have been working on a book together with my two colleagues and friends Daniel Deogun and Daniel Sawano. Late this summer we decided to try to write down our collective experiences and insights in design and security as a book. We are far from finished, but by now the first few chapters have been released by the publisher Manning as part of their "Manning Early Access Program" (MEAP).
The book is named Secure by Design and our main message is how good design and good code can help developers avoid security problems. We have found that lots of security vulnerabilities are due to bugs that could have been avoided if the design of the code had been different.
You know as well as I do that it is a lot easier to think about "good design" when developing features than it is to think about "security" at the same time.
Of course, not all good design leads to avoiding security vulnerabilities. So we have gathered our experience on which design tricks are most effective in giving security as a side effect. It's probably no big surprise to you that Domain-Driven Security is an important part of this, but there are many other parts as well.
We still have a lot of material to cover, but we have started with things that are close to the code, such as building Domain Primitives, how to structure validation, immutability, etc. I guess you will find several of these ideas familiar, as we have discussed them earlier - but there are also some new insights I have not yet had the time to write to you about. Well, not to mention all the ideas from Daniel and Daniel which you and I haven't had the opportunity to reflect on earlier.
In the later parts there will be more material on integration and the security benefits of cloud thinking. We also want to share our ideas around architecture, such as security concerns for microservice architectures and how to handle legacy codebases.
We will continue to write during the spring and early summer. Hopefully we will see the book in print sometime during the fall this year.
PS: The early access version was released last night; you can check out the book at https://www.manning.com/books/secure-by-design
How do you know if you are making improvements where they can have the most impact? Can you be sure the improvements being made will provide the benefits expected or that those benefits actually help deliver more value?
Those of us who work in software development organizations have many opportunities to improve how work is done. In fact, there is no end to the improvements we could make, and therein lies the rub. Without the ability to understand where constraints exist, we are just making guesses about where to put our improvement efforts. This often results in local optimizations to the detriment of the whole system.
Each discipline in software development, coding and testing for example, is able to continuously improve its craft. Unfortunately, perfecting the craft of one area may impede flow through the system. For example, requirements should have sufficient clarity so that the team can deliver value to the customer through working, tested software. Having perfect requirements is probably not the goal and could create a constraint that starves the rest of the organization. Focusing only on one part of the overall system is a bit like polishing the propeller to perfection only to find out the ship is sinking.
There are a variety of assessments that can be used to determine where improvements should be made:
- Retrospectives – usually used by teams to evaluate their performance and then decide where to make short-term improvements.
- Value stream mapping – mapping the flow of work through the software development system and determining where value is added or not.
- Capability modeling – identifying the capabilities that the organization delivers, the value provided, performance delivered, and risks for not performing well.
In all cases change should be controlled so that the impact of the change can be determined without other variables at play. As improvements are made and results of change determined, the system should be re-evaluated to determine the next areas for improvement.
I’ve put together a quick refresher on Agile Results for 2017:
I tried to keep it simple and to the point, but at the same time, help new folks that don’t know what Agile Results is, really sink their teeth into it.
For example, one important idea is that it’s effectively a system to use your best energy for your best results.
I’ve seen people struggle with getting results for years, and one of the most common patterns I see is they use their worst energy for their most important activities.
Worse, they don’t know how to change their energy.
So now they are doing work they hate, because they feel like crap, and this feeling becomes a habit.
The irony is that they would enjoy their work if they just knew how to flip the switch and reimagine their work as an opportunity to experiment and explore their full potential.
Work is actually one of the ultimate forms of self-expression.
Your work can be your dojo where you practice building your abilities, creating your competencies, and sharpening your skills in all areas of your life.
But the real key is to bridge work and life through your values.
If you can find a way to bake your values into how you show up each day, whether at home or in the office, that’s the real secret to living the good life.
But what’s the key to living the great life?
The key to living the great life is to give your best where you have your best to give in the service of others.
Agile Results is a way to help you do that.
Check out the refresher on Agile Results and use the Rule of Three to rule your day.
If you already know Agile Results, teach three people and help them live and lead a more inspired life.
“The best is yet to come.”
It can be tough creating the future among the chaos.
The key is to get a good handle on the real and durable trends that lie beneath the change and churn that’s all around you.
But how do you get a good handle on the key disruptions, the key trends, and the macro-level patterns that matter?
Draw from multiple sources that help you see the big picture in a simple way.
To get started, I’m going to share the key sources for trends and insights that I draw from (beyond my own experience and what I learn from working with customers and colleagues from around the world).
Here are the key sources for trends and insights that I draw from:
- Age of Context (Book), by Robert Scoble and Shel Israel. Age of Context provides a walkthrough of 5 technological forces shaping our world: 1) mobile devices, 2) social media, 3) big data, 4) sensors, 5) location-based services.
- Cognizant – A global leader in business and technology services, helping clients bring the future of work to life — today.
- DaVinci Institute – The DaVinci Institute is a non-profit futurist think tank. Unlike traditional research-based consulting organizations, the DaVinci Institute operates as a working laboratory for the future human experience: a community of entrepreneurs and visionary thinkers intent on discovering the (future) opportunities created when cutting-edge technology meets the rapidly changing human world.
- Faith Popcorn – The “Trend Oracle.” Faith is a key strategist for BrainReserve and a trusted advisor to the CEOs of the Fortune 500. She’s identified movements such as “Cocooning,” “AtmosFear,” “Anchoring,” “99 Lives,” “Icon Toppling” and “Vigilante Consumer.”
- Fjord – Fjord produces an annual report to help guide you through challenges, experiences, and opportunities you, your organization, employees, customers, and stakeholders will likely face. Check out the Fjord Trends 2017 report on SlideShare.
- Foresight Factory (Formerly called Future Foundation) – Future focused, applied, global consumer insight. Universal trends that shape tastes and determine demand the world over; sector trends that are critical to success in specific industries; custom reports produced in partnership with clients and focus reports on key markets, regions and topics.
- Forrester – Research to help you make better decisions in a world where technology is radically changing your customer.
- Gartner – The world’s leading information technology research and advisory company.
- Global Goals – In September 2015, 193 world leaders agreed to 17 Global Goals for Sustainable Development. If these Goals are completed, it would mean an end to extreme poverty, inequality and climate change by 2030.
- IBM Executive Exchange – An issues-based portal providing news, thought leadership, case studies, solutions, and social media exchange for C-level executives.
- Jim Carroll – A world-leading futurist, trends, and innovation expert, with a track record for strategic insight. He is author of the book The Future Belongs to Those Who Are Fast, and he shares major trends, as well as trends by industry, on his site.
- Motley Fool – To educate, amuse, and enrich.
- No Ordinary Disruption (Book) – This is a deep dive into the future, backed with data, stories, and insight. It highlights four forces colliding and transforming the global economy: 1) the rise of emerging markets, 2) the accelerating impact of technology on the natural forces of market competition, 3) an aging world population, 4) accelerating flows of trade, capital, people, and data.
- O’Reilly Ideas – Insight, analysis, and research about emerging technologies.
- Richard Watson – A futurist author, speaker, and scenario planner, and the chart maker behind The Table of Trends and Technologies for the World in 2020 (PDF). Watson writes the What’s Next Top Trends blog and is the author of 4 books: Future Files, Future Minds, Futurevision, and The Future: 50 Ideas You Really Need to Know.
- Sandy Carter — Sandy Carter is IBM Vice President of Social Business and Collaboration, and author of The New Language of Marketing 2.0, The New Language of Business, and Get Bold: Using Social Media to Create a New Type of Social Business. She’s not just fun to read or watch – she has some of the best insight on social innovation.
- The Industries of the Future (Book), by Alec Ross. Alec Ross explains what’s next for the world: the advances and stumbling blocks that will emerge in the next ten years, and how we can navigate them.
- The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee. Erik Brynjolfsson and Andrew McAfee identify the best strategies for survival and offer a new path to prosperity amid exponential technological change. These include revamping education so that it prepares people for the next economy instead of the last one, designing new collaborations that pair brute processing power with human ingenuity, and embracing policies that make sense in a radically transformed landscape.
- ThoughtWorks Technology Radar – Thoughts from the ThoughtWorks team on the technology and trends that are shaping the future.
- Trend Hunter – Trend Hunter features a daily dose of micro-trends, viral news and pop culture. The most popular micro-trends are featured on Trend Hunter TV and later grouped into clusters of inspiration in its Trend Reports, a series of tools for professional innovators and entrepreneurs.
- Trends and Technologies for the World in 2020 (PDF) – Table of trends and technologies shaping the world in 2020.
- Trendwatching.com – Trendwatching.com helps forward-thinking business professionals in 180+ countries understand the new consumer and subsequently uncover compelling, profitable innovation opportunities.
While it might look like a short-list, it’s actually pretty deep.
It’s like a Russian nesting doll in that each source might lead you to more sources or might be the trunk of a tree that has multiple branches.
These sources of trends and insights have served me well and continue to serve me as I look to the future and try to figure out what’s going on.
But more importantly, they all inspire me in some way to create the future, rather than wait for it to just happen.
I’m a big fan of making things happen … you play the world, or the world plays you.
So you’ve reached the tipping point and are ready to go SAFe. What next? For those of you following the ‘critical moves’ identified in the SAFe Implementation Roadmap, this article describes the second step in that series: Train Lean-Agile Change Agents.
It discusses the eight stages of organizational transformation, and the critical steps needed to engage leadership in a SAFe coalition sufficiently powerful and knowledgeable to implement the change.
Read the full article here. As always, we welcome your thoughts so if you’d like to provide some feedback on this new series of articles, you’re invited to leave your comments here.
—Dean and the Framework team
Consumer Trends are a key building block for innovation.
If you are stuck coming up with innovation opportunities, part of the problem is that you are missing sources of insight.
And one of the best sources of insight is actually consumer trends.
One tool for helping you turn consumer trends into innovation opportunities is the Consumer Trend Canvas, by Trendwatching.com.
What I like about it is the simplicity, the elegance, and the fact that it’s similar in format to the Business Model Canvas.
The Consumer Trend Canvas is broken down into two simple sections:
In terms of the overall canvas, it’s actually a map of the following components:
- Basic Needs
- Drivers of Change
- Emerging Customer Expectations
- Innovation Potential
- Your Innovations
From a narrative standpoint, you can think of it in terms of pains, needs, and desired outcomes for a particular persona, along with the innovation opportunities that flow from that simple frame.
The real beauty of the Consumer Trend Canvas is that it’s a question-driven approach to revealing innovation opportunities.
Here are the questions within each of the parts of the Consumer Trend Canvas:
- Which deep consumer needs & desires does this trend address?
- Why is this trend emerging now? What’s changing?
- What new consumer needs, wants, and expectations are created by the changes identified above? Where and how does this trend satisfy them?
- How are other businesses applying this trend?
- How and where could you apply this trend to your business?
- To which (new) customer groups could you apply this trend? What would you have to change?
When you put it all together, you have a quick and simple view of how a trend can lead to some potential innovations.
The power is in the simplicity and in the consolidation.
The concept of new development is pretty straightforward. You start with nothing, write some code and then you have an application. Enhancement projects are a little more complicated. You start with an existing application. You add to it. You may change some of the existing code. You may even delete some of the original application. At the end, you have an enhanced application. For example, suppose that you had an application that evaluated the probability of a financial instrument changing in value, i.e. going up or down. You would have some screens, some interfaces and probably some data that was maintained. One of the pieces of data that you used was the amount of change that that instrument would go through in an average day. Let us suppose that you had to maintain that piece of information and store it in the application. Then you found that the information was available and could be accessed by the application directly. The enhancement would consist of an added interface to the information. There would probably be changes in the screen logic to utilize the new information from the interface. Finally, you would delete the screens and database that you had used to store your hand-entered version of the data.
What just happened? Was this a change to an application that was in production, or was this one of the iterations in an agile development project? There is often the need for an existing application to be modified. This is sometimes called adaptive maintenance. Adaptive maintenance allows an application to be changed so that it adapts to the new needs of its users. Corrective maintenance is primarily to fix bugs that have shipped or gone into production with the application. Perfective maintenance is closely related to corrective maintenance, but it is performed before the defects manifest themselves. The effort required for these types of maintenance is often not estimated at all. Preventative maintenance is similar in that it usually addresses things that are not functional, like optimization. However, it is time that we start to estimate this type of activity. The same is true of conversion maintenance. Some people consider it a non-functional form of adaptive maintenance because there is no change in functionality. Again, these technical projects need to be estimated and planned.
Agile development involves applications being developed in iterations. In Scrum, these iterations are called sprints; other approaches have different names. Some of these iterations become releases. They are released to the user community where they are utilized. This is consistent with the values and principles of agile development. Agile developers embrace change, and it is when an application is being used that people see the types of changes that would improve it. These releases are so important that agile developers will often create stubs and later refactor code in order to deliver them. In other words, they will create a release that requires them to change and delete existing code. Traditional developers tend to hate change. They want to develop specifications and implement them without worrying about users making changes. They see changes as something that simply slows down the development process. I had one situation that was even more complicated than that. We were estimating and planning a large system for a utility. I suggested that we would be able to plan the development with three releases, which would allow the system to start returning value to the client more quickly. There was no real cost for this. Looked at as above, each enhancement had no changes or deletions, just added functionality from release to release. The sponsor said no. When I asked why, he explained that if we implemented one-third of his system, his management might feel that it was enough for now and pull the rest of his funding. If they had nothing to show until the end, then it would be funded until the end. An exception to traditional developers hating change is contract programming organizations that have built change management into their software development life cycles. They can negotiate changes through the process and turn a $2 million project into a $2.5 million one thanks to the changes.
I brought up this point about 10 years ago when I was exploring agile estimation as a thesis topic. At the time, my colleagues felt it was an unimportant nuance and a distraction from any attempt to do agile development. I went on to develop an approach to estimating agile projects as if they were simply new development. It seems to work for the development as a whole. Late last year, I came across a journal article written by Anandi Hira and Barry Boehm of the University of Southern California, titled “Using Software Non-Functional Assessment Process to Complement Function Points for Software Maintenance.” I did not include a hyperlink because access to the article requires an ACM membership and possibly access to their digital library. However, the conclusions of the study are straightforward and, to me, surprising. When looking at projects that simply added functionality, such as new system development, it appeared that function points alone would yield worthwhile predictions; using the Software Non-Functional Assessment Process (SNAP) along with the function points yielded better estimates. When looking at projects that changed functionality, such as enhancement projects, SNAP yielded worthwhile estimates; using function points with SNAP points did not improve or worsen the estimate. Hira and Boehm suggest that you use both measures in both situations for consistency.
In the beginning of the post it was mentioned that enhancement projects consist of adds, changes and deletes. However, there was no mention of what was being added, changed or deleted. There could be many answers to that question. Three common answers are lines of code, function points and SNAP points. Each has been used and each has its problems.
Using lines of code to measure enhancement size has all of the problems it had for new development work. In addition, it is necessary to segregate the counts for the number of lines added, changed and deleted. You do not truly know what the lines of code count is until after the project is done, and then you do not need the estimate. Even then, there is no standard way to count lines of code. Is it a statement in the programming language being counted? Is it a physical line? Do you count comments? Do you count declarative statements? Once you have made these decisions, how do you combine your adds, changes and deletes? Is the size simply the sum of the three values? In other words, if I added 100 SLOC, changed 100 SLOC and deleted 100 SLOC, is the total enhancement 300 lines of code? That is possible. Some organizations have percentages that they apply to adds, changes and deletes. For example, they might feel that the adds contribute 100% to the size of the enhancement, while the changes and deletes contribute 40% and 5%, respectively. In that case, the lines of code would combine to be 145 lines of code. In COCOMO II, there is a maintenance size that is basically a Maintenance Adjustment Factor times the sum of size added and size modified. Size deleted is ignored. The sizes can be in thousands of source lines of code (KSLOC), Application Points (a COCOMO II measure that is an alternative to either KSLOC or Function Points) or Function Points. By the way, the COCOMO II team mentioned that they get their best results from KSLOC. A minor change to a report (Application Points) or an External Output (Function Points) may overstate an estimate. The Maintenance Adjustment Factor weights the effect of poorly or well written software on the maintenance effort. This measure may be getting into that gray area between size measurement and effort estimation with cost drivers. However, the COCOMO II team refers to this as size, so we might as well do the same.
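The two combination schemes just described can be sketched in a few lines of Python. The 100%/40%/5% weights are the illustrative figures from the example above, not an industry standard, and the function names are mine:

```python
# Weighted combination of added/changed/deleted SLOC for an enhancement.
# The weights are the illustrative percentages from the example above;
# each organization would calibrate its own.
ADD_W, CHANGE_W, DELETE_W = 1.00, 0.40, 0.05

def enhancement_sloc(added, changed, deleted):
    return added * ADD_W + changed * CHANGE_W + deleted * DELETE_W

# COCOMO II maintenance size: the Maintenance Adjustment Factor times
# the sum of size added and size modified; size deleted is ignored.
def cocomo_maintenance_size(size_added, size_modified, maf=1.0):
    return maf * (size_added + size_modified)

print(enhancement_sloc(100, 100, 100))  # 145.0, matching the example
```

A simple sum would make the same enhancement 300 lines; the weighted scheme reflects the belief that changing and deleting code takes less effort than writing it.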
There are several forms of function points, including IFPUG, COSMIC and Mk II. The one maintained by the International Function Point Users Group (IFPUG) is the most widely used. IFPUG maintains the Counting Practices Manual (CPM), administers tests to certify function point specialists and to certify classes and tools. The CPM explains exactly how the enhancement count should be calculated. Basically, it is the sum of the functions that are added, changed and deleted. People who have spent too much time reading the CPM will be quick to correct this statement. For one thing, the enhancement count also includes the functionality associated with conversion. Seasoned estimators know that the conversion portion of a project is tricky to estimate. It is often performed by a separate team, for example, the legacy programmers who understand any previous system and the data files that it used. Thus, it is often estimated separately. Another thing that function point specialists may point out is that the above formula did not take into account the Value Adjustment Factor before and after the enhancement. Theoretically this is part of the calculation. However, most estimators are getting away from using the adjusted function point count in favor of the unadjusted count without a Value Adjustment Factor. The unadjusted function point count is what COCOMO II expects as its size driver. There remains one critical issue in the minds of many estimators: developing a new report, making a change to an existing report and deleting a report all have the same function point count. However, most people believe that they have different amounts of effort associated with them.
In 2009, IFPUG delivered its first draft of the Software Non-functional Assessment Process (SNAP). Considering non-functional requirements when estimating was not a new idea. The function point Value Adjustment Factor (VAF) had been meant to quantify these non-functional requirements, but there seemed to be agreement that it was inadequate to capture the extent to which they might impact the size of a piece of software. Cost drivers in the macro (top-down) estimating models were the next attempt; of course, these were tied to the models in question. SNAP was a more general way to assess the amount of non-functional requirements for a software development or enhancement project. The non-functional size of an enhancement project is the sum of the SNAP points added, the SNAP points changed and the SNAP points deleted. This seems like it would lead to the same concerns as function points. However, the COCOMO II team had not seen SNAP points before they published COCOMO II. Likewise, the industry does not have enough experience with SNAP yet to worry about this. The paper by Hira and Boehm is the first quantitative study that I have seen to address this. As stated earlier, they state that SNAP can be used to predict enhancement projects better than function points do.
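At this level of abstraction, the two enhancement size measures discussed here work the same way: each is a sum over what was added, changed, and deleted. A minimal sketch (the function names are mine, and this omits the extra IFPUG counting rules mentioned above, such as conversion functionality and the Value Adjustment Factor):

```python
def enhancement_function_points(fp_added, fp_changed, fp_deleted):
    # Unadjusted enhancement function point count: the sum of the
    # functions added, changed, and deleted (conversion functionality
    # and the Value Adjustment Factor are omitted in this sketch).
    return fp_added + fp_changed + fp_deleted

def enhancement_snap_points(snap_added, snap_changed, snap_deleted):
    # Non-functional size of the enhancement: the sum of the SNAP
    # points added, changed, and deleted.
    return snap_added + snap_changed + snap_deleted
```

Following Hira and Boehm's suggestion, an estimator would compute both numbers for every project and feed them into the estimating model together.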
What is the next step for estimators? Any estimator who is using function points should become familiar with SNAP. It has the potential of improving estimates for new development. It may be the only way to get worthwhile estimates for enhancement projects. The question becomes how to incorporate this into the remainder of the estimating process at your organization. The first difficulty is transforming SNAP points into lines of code. That problem was solved for function points and is the basis for how algorithmic models like COCOMO work; a similar way to incorporate SNAP points might need to be found. The second difficulty is reconciling SNAP points with the models' cost drivers. This is far from trivial. Some cost drivers tend to be more size related; others tend to impact the difficulty of code development without really impacting the size. To make things worse, the decisions that are made for COCOMO II will be different than the decisions that might be made for SEER-SEM or other models. Why go to the trouble? SNAP is probably the only way to reliably estimate the schedule and effort for the next release. As an added benefit, it may improve the estimate for the entire project.
It’s that time of year when I like to take the balcony view to figure out where the world is going, at least some of the key trends.
I’ve long been a fan of the idea that while you can’t predict the future, you can take the long view and play out multiple future scenarios so you are ready for (most) anything.
But I’m an even bigger fan of the idea that rather than predict the future, you should create the future.
To do that, it helps to have a solid handle on the trends shaping the world.
To help make sense of the trends, I like to use mind tools and frameworks that help me see things more clearly.
One of my favorite tools for trends is the Trend Framework by Trendwatching.com.
Trendwatching.com uses a framework to sort and catalog trends.
To understand the future of consumerism, they use a framework of 16 Mega-Trends:
- Status Seekers. The relentless, often subconscious, yet ever present force that underpins almost all consumer behavior.
- Betterment. The universal quest for self-improvement.
- Human Brands. Why personality and purpose will mean profit.
- Better Business. Why “good” business will be good for business.
- Youniverse. Make your consumers the center of their Youniverse.
- Local Love. Why “local” is in, and will remain, loved.
- Ubitech. The ever-greater pervasiveness of technology.
- Infolust. Why consumers’ voracious appetite for (even more) information will only grow.
- Playsumers. Who said business has to be boring?
- Ephemeral. Why consumers will embrace the here, the now, and the soon-to-be-gone.
- Fuzzynomics. The divisions between producers and consumers, brands and customers, will continue to blur.
- Pricing Pandemonium. Pricing more fluid and flexible than ever.
- Helpful. Be part of the solution, not the problem.
- Joyning. The eternal desire for connection, and the many (new) ways it can be satisfied.
- Post-Demographics. The age of disrupted demographics.
- Remapped. The epic power shifts in the global economy.
I’ve used these 16 Mega-Trends from the Trend Framework as a filter (well, maybe more accurately as idiot-guards and bumper-rails) for guiding how I look at consumer behaviors shaping the market.
In fact, this was one of the most helpful frameworks I used when putting together my Trends for 2016: The Year of the Bold.
As I create my master list of Trends for 2017, I’m finding this simple list of 16 Mega-Trends to be useful once again, to better understand all of the micro-trends that emerge on top of this foundation.
The Trend Framework makes it easier to see the graph of trends and to quickly make sense of why things are shaping the way they are.
“Adopting SAFe has set in motion the skill development and mindset for successful organizational change even as we scale to new programs, Release Trains, and people.”
—Gary Dawson, Assistant Director, Solutions Delivery
For organizations operating in highly regulated industries, the transition from Waterfall to Agile adds an additional layer of risk to what is already a daunting undertaking. Rapid and vast change, if not done properly and with cross-organizational collaboration, has the potential to be disruptive and actually hinder advancement.
We know that SAFe is emerging as a solution in regulated industries, so we’re always glad when we get a chance to peek inside one of these transformations. The folks from the United Kingdom’s NHS Blood and Transplant (NHSBT) have shared their SAFe story, and there’s much to learn from what appears to be an exemplary model for how to make the move from Waterfall to Agile in a phased approach without tipping over the boat.
NHSBT supplies safe blood to hospitals in England, and tissues and solid organs to hospitals across the United Kingdom. When the organization set out to revolutionize the way it interacts with blood donors, it needed to adopt a new technical platform and architecture. Yet it was clear its previous waterfall approach wouldn’t support the change. IT leaders also worried that change could impact the core business and the working relationships of employees.
With the help of Scaled Agile Partner, Ivar Jacobson (IJI), NHSBT chose SAFe to help support the governance and manage both the organizational and technical changes. They committed to a coaching and training plan—including a strategic Program Increment (PI) cycle—that ensured SAFe was adopted by employees with secure checkpoints and feedback along the way.
From the first PI onward, they noticed a difference in team effectiveness. In that first PI, they were able to deliver a committed, finite number of product features, as well as prioritize IT operations alongside the business part of the organization. Having delivered the first MVP in one of its programs, it’s now clear that the introduction and embedding of SAFe within NHSBT has provided significant, early business benefits.
“We would never have had that level of interaction in a waterfall delivery. To achieve the levels of understanding of both the technology and deliverables—along with all the inter-dependencies— would have taken months of calls, meetings, and discussions. We planned the next three months in just two days and now we retain that level of engagement on a daily basis.”
—Gary Dawson, Assistant Director, Solutions Delivery
Today, SAFe is part of everyday procedures at NHSBT, and it is poised to reach even more programs and people. Already, they have held two SAFe planning events for a potentially much larger program to replace its core blood offering system.
Make sure to check out the full case study for insights and inspiration; there’s a good amount of substance there that would be useful to any organization considering a move to SAFe, especially for those working in regulated industries.
Many thanks to Gary Dawson, Assistant Director, Solutions Delivery, NHSBT; and Brian Tucker, Principal Consultant and SPCT, IJI.
In a previous post about productivity patterns, I wrote about how I tried countless systems to improve my productivity. I tried everything from having a Franklin Planner, to using GTD, to Personal Kanban and the Pomodoro Technique. I asked myself why some methods worked and some did not. Why did I abandon two systems when I knew so many others have been successful with them? Why has Personal Kanban worked for me for the last 7 years? I started listing common traits and saw relationships and discovered patterns. Not only are there three things I believe every system needs to work, I also see three things that are necessary to prevent you from abandoning that system.
Every personal or professional thing we do is part of a system or subsystem. Those systems have both success and failure patterns.
Success Patterns
For a system (defined as a set of principles or procedures to get something done or accomplished) to be successful, you always need ritual and habit.
- A ritual is a series of actions or type of behavior regularly and invariably followed by someone.
- A habit is a regular tendency or practice, especially one that is hard to give up. You need to be habitual with your rituals, as part of your system.
Early indicators that your system will fail include a lack of clarity, progress, or commitment. (Very similar to Mike Cottmeyer’s “Why Agile Fails.”)
- Lack of clarity creates confusion and waste. Each step of a system should be actionable and repeatable. In order to ensure certainty around your system steps, write them down.
- If you lack progress, you will lose momentum. If you lose momentum, you will lose commitment to the system.
- Lack of commitment to the system results in you no longer using the system. You move on to something new to get the results you seek.
After I identified the patterns, I wanted to present a useful model to visualize the indicators that will, in time, cause the system to fail. I decided to base my model on the Business Model Canvas by Alex Osterwalder. Below you will see the five areas that need to be considered. Once complete, if you notice one or more of the sections is ambiguous or short on details, you should view that as a warning.
Scrum Framework Success Patterns
By using the Scrum Framework as an example system, I completed my system design canvas. Upon completing the worksheet below, I can see whether there are any “gaps” in the system. As you may have guessed, there are no gaps if Scrum is properly implemented and followed. But if it is modified without expert guidance, a gap will become visible and provide an indication that the system is at risk of failure.
Because you may have a large organization where you are dealing with different kinds of dependencies, you may need to create “sub” system design canvases to account for organizational complexity. Scrum may not be enough. Don’t worry. The same rules apply.
Free Download
Interested in testing your system or subsystems? Download a free copy of the System Design Canvas and see if you are at risk of failure. Because I am providing this under a Creative Commons Attribution-Share Alike 3.0 Unported license, I welcome you to download it and modify it to meet your needs.
This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen
To many people, Lean manufacturing was invented in Japan and is synonymous with the Toyota Production System (TPS). They will tell you that TPS is the manufacturing philosophy that enabled Toyota to effectively conquer the global automobile market by reducing waste and improving quality. While that is true, it is not the whole story. Lean has far deeper roots and broader potential.
Fundamentally, Lean is about creating value and empowering people, not just eliminating waste. It was developed long before Toyota—long before the 20th century, in fact. Some trace the roots of Lean all the way back to the Venice Arsenal in the 1500s, when Venetian shipbuilders could roll complete galley ships off the production line every hour, a remarkable achievement enabled by several weeks of assembly time being sequenced into a continuous, standardized flow. (The genius who helped the military engineers at the Venice Arsenal was none other than Galileo himself—perhaps the first-ever Lean consultant!)
By 1760, the French were using standardized designs and interchangeable parts to facilitate repairs on the battlefield. Eli Whitney refined the concept to build 10,000 muskets for the U.S. government at previously unheard-of low prices. Militaries around the world fine-tuned continuous flow and standardized processes throughout the 1800s. Over time, standardization slowly made its way into commercial manufacturing.
In 1910, Henry Ford moved his nascent automobile manufacturing operations into Highland Park, Michigan, which is often called the “birthplace of Lean manufacturing.” Ford used continuous flow and standardized processes, coupled with innovative machining practices to enable highly consistent, repetitive assembly. Ford often cited the frugality of Benjamin Franklin as an influence on his own business practices—especially Franklin’s advice that avoiding unnecessary costs can be more profitable than increasing sales.
Ford was able to reduce core chassis assembly time from twelve hours to less than three. This reduced the cost of a vehicle to the point where it became affordable to the masses and created the demand that helped build Ford’s River Rouge plant, which became the world’s largest assembly operation with over 100,000 employees. In 1911, Sakichi Toyoda visited the United States and witnessed Ford’s Model T production line. He returned to Japan to apply what he saw on his company’s handloom weaving machines.
As Ford and Toyoda were streamlining their operations, others were making parallel improvements in the quality and human factors of manufacturing. In 1906, the Italian economist Vilfredo Pareto noticed that 80% of the wealth was in the hands of 20% of the population, a ratio he found could be applied to areas beyond economics. J.M. Juran took the Pareto Principle and turned it into a quality control tool that focused on finding and eliminating the most important defects. A few years later, Walter Shewhart invented the control chart, which allowed managers to monitor process variables. Shewhart went on to develop the Plan-Do-Study-Act improvement cycle, which Dr. W. Edwards Deming then altered to create the Plan-Do-Check-Act (PDCA) cycle still in use today.
In the early years of the twentieth century, efficiency expert Frank Gilbreth advanced the science of management by observing construction and factory workers. He and his wife, Lillian, started a consulting company to teach companies how to be more efficient by reducing human motion during assembly processes. Sakichi Toyoda, having already benefited from Henry Ford’s ideas, became an expert at reducing human-induced variability in his factories.
Then came World War II. At the beginning of the war, Consolidated Aircraft in San Diego was able to build one B-24 bomber per day. Ford’s Charles Sorensen thought he could improve on that rate, and as a result of his efforts, a couple of years later the Willow Run plant was able to complete one B-24 per hour.
With almost all of the traditional male factory workforce deployed overseas for the war, the human aspect of manufacturing moved front and center. Training Within Industry (TWI) was born as a method to rapidly and effectively train women to work in the wartime factories. After the war, TWI found its way to Japan even as it faded away in the U.S. (only recently has it returned).
The end of the war saw a divergence in philosophies between the two countries. In the U.S., Ford adopted the GM style of top-down, command-and-control management and effectively abandoned Lean manufacturing. Meanwhile in Japan, Toyota accelerated the development and implementation of Lean methods. The company transitioned from a conglomerate that still included the original loom business to a company focused on the auto market. Taiichi Ohno was promoted to machine shop manager, and under his watch Toyota developed its concepts of eliminating waste and creating value. The human side of manufacturing was especially important to Ohno, who transferred increasing amounts of authority and control directly to workers on the shop floor.
After being sent to Japan in 1946 and 1947 by the U.S. War Department to help study agriculture and nutrition, Dr. Deming returned to Japan in the early 1950s to give a series of lectures on statistical quality control, demonstrating that improving quality can reduce cost. Toyota embraced these concepts and embedded them into the Toyota Production System (TPS), leading to Toyota winning the Deming Prize for Quality in 1965. Over several years, Taiichi Ohno and Shigeo Shingo continued to refine and improve TPS with the development of pull systems, kanban, and quick changeover methods.
By the early 1970s, the rest of the world was beginning to notice Japan’s success, and managers assembled for the first study missions to Japan to see TPS in action. Norman Bodek and Robert Hall published some of the first books in English describing aspects of TPS, and by the mid-1980s, several U.S. companies, notably Danaher, HON, and Jake Brake, were actively trying the “new” concepts.
The term “Lean” was first coined by John Krafcik in his MIT master’s thesis on Toyota, and then popularized by James Womack and Daniel Jones in the two books that would finally spread a wider knowledge of TPS: The Machine That Changed the World in 1990 (written with Daniel Roos) and Lean Thinking in 1996. Lean Thinking described the core attributes of Lean as:
- Specify value from the perspective of the customer.
- Define the value stream for a product, then analyze the steps in that stream to determine which are waste and which are value-added.
- Establish continuous flow of products from one operation to the next.
- Create pull between process steps to produce the exact amount of products required (i.e., make to order).
- Drive toward perfection, both in terms of quality and eliminating waste.
Those books, as well as organizations such as the Association for Manufacturing Excellence (AME) and the Lean Enterprise Institute, drove a widespread acceptance of Lean as a path to productivity and profitability. By the year 2000, Lean methods were moving out of manufacturing and into office and administrative environments. The spread of Lean continues today, and currently, Lean healthcare, Lean government, Lean information technology (and Agile software development), and Lean construction are particularly popular.
A couple of weeks ago, I spoke locally about Manage Your Project Portfolio. Part of the talk is about understanding when you need project portfolio management and flowing work through teams.
One of the (very sharp) fellows in the audience asked this question:
As you grow, don’t you need component teams?
I thought that was a fascinating question. As agile organizations grow, they realize the value of cross-functional teams. They staff for these cross-functional teams. And, then they have a little problem. They can’t find enough UX/UI people. Or, they can’t find enough database people. Or, enough writers. Or some other necessary role for the “next” team. They have a team without necessary expertise.
If managers allow this, they have a problem: They think the team is fully staffed, and it’s not. They think they have a cross-functional team that represents some capacity. Nope.
Some organizations attempt to work around the scarce-expertise problem. They have “visitors” to a team, filling in where the team doesn’t have that capability.
When you do that, you flow work through a not-complete team. You’re still flowing work, but the team itself can’t do the work.
You start that, and sooner or later, the visitor is visiting two, three, four, and more teams. One of my clients has 12 UI people for 200 teams. Yes, they often have iterations where every single team needs a UI person. Every single team. (Everyone is frustrated: the teams, the UI people, and management.)
When you have component teams and visitors, you can’t understand your capacity. You think you have capacity in all those teams, but they’re component teams. They can only go as fast as the entire team, including the person with the scarce expertise, can deliver features. When your team is not first in line for that scarce person, you have a Cost of Delay. You’re either multitasking or waiting for another person. Or, you’re waiting for an expert. (See CoD Due to Multitasking and CoD Due to Other Teams Delay. Also see Diving for Hidden Treasures.)
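The Cost of Delay from queuing for a scarce expert can be made concrete with a back-of-envelope calculation. The numbers below (three teams, a single shared UI person, a flat value per week) are entirely hypothetical; the point is only that every week a feature sits waiting has a price tag:

```python
# Back-of-envelope Cost of Delay when features wait for a scarce expert.
# All numbers are hypothetical, purely to illustrate the idea above.
def cost_of_delay(value_per_week: float, weeks_waiting: float) -> float:
    """Value lost while a feature sits waiting for the expert."""
    return value_per_week * weeks_waiting

# Three teams queue for the same UI person: (team, value/week, weeks waiting).
queue = [
    ("team-a", 10_000, 0),  # first in line: no wait
    ("team-b", 10_000, 2),  # waits 2 weeks for the UI person
    ("team-c", 10_000, 4),  # waits 4 weeks for the UI person
]
total = sum(cost_of_delay(value, weeks) for _, value, weeks in queue)
print(total)  # 60000 lost across the queue
```

Even with identical features, the queue itself costs the organization money, which is why rearranging backlog ranking so fewer teams wait for the scarcity matters.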
What can you do?
- Flow work through the experts. Instead of flowing work through teams that don’t have all the expertise, flow work through the experts (not the teams).
- Never let experts work alone. With any luck, you have people in the team working with the experts. In Theory of Constraints terms, this is exploiting the constraint. It doesn’t matter what other work you do. If your team requires this expertise, you need to know about it and exploit it (in the TOC sense of exploitation).
- Visualize the flow of work. Consider a kanban board such as the one below that shows all the work in progress and how you might see what is waiting for whom. I would also measure the Cost of Delay so you can see what the delay due to experts is.
- Rearrange backlog ranking, so you have fewer teams waiting for the scarcity.
Here’s the problem. When you allow teams to compete for scarcity (here, it’s a UI person), you don’t get the flow of work through the teams. Everything is slower. You have an increased Cost of Delay on everything.
Visualizing the work helps.
Flowing the work through the constrained people will show you your real capacity.
Needing component teams is a sign someone is still thinking in resource efficiency, not flow efficiency. And, I bet some of you will tell me it’s not possible to hire new people with that skill set locally. I believe you.
If you can’t hire, you have several choices:
- Have the people with the scarce expertise consciously train others to be ready for them, when those scarce-expertise people become available. Even I can learn some capability in the UI. I will never be a UI expert, but I can learn enough to prepare the code or the tests or the experiments or whatever. (I’m using UI as an example.)
- Change the backlogs and possibly reorganize as a program. Now, instead of all the teams competing for the scarce expertise, you understand where in the program you want to use that scarce expertise. Program management can help you rationalize the value of the entire backlog for that program.
- Rethink your capacity and what you want the organization to deliver when. Maybe it’s time for smaller features, more experiments, more MVPs before you invest a ton of time in work you might not need.
I am not a fan of component teams. You could tell, right? Component teams and visitors slow the flow of releasable features. This is an agile management problem, not just a team problem. The teams feel the problem, but management can fix it.
When I go in to do large-scale transformations, I’m invariably asked the question, “Should the PMO go away?” The reasoning is that going agile should get rid of all of the oversight: the Gantt charts, the weekly status meetings, the release scheduling. The list goes on.
Before I address the question, I want to give you some background on what we typically see when we hit the ground from a coaching standpoint. The company is in an ad hoc state. They may be delivering, but it isn’t always on time. Scope creep is inevitable in this environment as they schedule 3-, 6-, and maybe even 12-month releases. As much as the teams try to be agile, there are a number of processes in place to make sure the product actually gets out the door. There’s some release planning up front, and expectations are set. Development may occur in sprints, but integration testing and acceptance testing lag behind. Sometimes integration testing is so complicated that it has to happen in a big time box toward the end. The business becomes disengaged while development is off sprinting. This process isn’t agile, and if you did lay it out in a Gantt chart, it would look very much like waterfall.
Now think about all the stage gates you have in your organization. Release planning sign-off. Weekly change control. Release scheduling. Release sign-off. Deployment planning. Deployment change control. Some organizations I’ve seen have 20 people on the phone during an overnight deployment. So why is this? The answer is simple. Over time, the organization has created an environment of mistrust. Promises have been broken. Buggy software has been delivered to customers. Fingers have been pointed: “Requirements were bad.” “Development is slow.” “Too many last-minute changes.” A number of reasons have created the need for the stage gates, and once a stage gate exists, it’s difficult to remove.
To get the organization back on track, we need to refocus on the three things that make up an agile process: backlog, team, and working tested software. In essence: clarity, accountability, and measurable progress. To do this, we need governance, structure, and metrics. These will get us to a predictable state. Once we get predictable, we can begin to rebuild the trust in the organization.
The governance model must slice through the organization from top to bottom. In many organizations this will take the form of at least three layers: Portfolio, Program, and Team. The Portfolio layer deals with the creation, definition, and prioritization of themes and epics. The Program layer creates, defines, and prioritizes features. The Team layer is responsible for the implementation of the user stories derived from the features. This governance model will further define the process flows to go from inception to deployment.
What I have briefly described here is an initial step toward a logically planned-out transformation strategy. As you can see, in this first step we clearly define a structure and a governance model that lead to a predictable process. We can’t just teach agile practices and hope everybody sees the light. There are a number of manual orchestration activities in the organization that keep everything moving forward. As the organization moves further along the scale toward a more decoupled system of delivery, the manual orchestration will diminish. I refer to these manual orchestration and stabilization processes as scaffolding. As manual orchestration diminishes, the scaffolding can begin to come down. It is important to identify the scaffolding in your transformation and to plan, as part of your future transformation efforts, to remove it.
So, “Should the PMO go away?” Not in this scenario. Some part of the organization needs to facilitate the manual orchestration at this stage of the transformation. If your organization already has a PMO, these are exactly the kind of people you need to do that facilitation.
Can the PMO go away one day? The only responsible answer I can give is, “When your organization is ready.”
One last caveat. I’ve seen some organizations that are split, some parts need the PMO due to organizational and technical debt, and other parts have been built to be decoupled and on a continuous delivery cycle eliminating the need for manual orchestration.
When we released SAFe Version 4.0 last January (seems like forever ago in the lifetime of SAFe), we also introduced the ‘Implementing 1,2,3 Tab’ to provide our first published guidance on how to implement SAFe. That was sound advice, and it served well as basic guidance to implement SAFe. Many successful implementations followed, as you can see from Case Studies.
But we all know it takes more than that. How does one identify value streams and design the ARTs to begin with? How do you get ready for the first PI planning event? What do you do after you’ve launched that first ART? And so much more.
To address this larger issue of implementing SAFe at enterprise scale, we are pleased to announce a series of guidance articles, which can now be found under the Implementation Roadmap main menu. There you will find this picture and upcoming links to 12 new articles (one for each roadmap step below), which provide more detailed guidance for implementing SAFe at scale. Of course, we all also know that there is no one right way to implement SAFe, but after hundreds of successful implementations, this pattern emerged as the most common, so we decided to share it here.
Figure 1. SAFe Implementation Roadmap
Please be advised that this thread is a work in progress and we are planning to release about an article a week until this series is complete. As of this writing, the first article is posted. You can start the journey by clicking here.
Good luck with implementing SAFe; we are confident you will get the outstanding business results that you deserve.
Dean and the Framework team