I get asked this question all the time. And I think that the answer is both obvious and not at the same time. In any case, it’s not possible to answer what the ratio of developers to QA testers should be. Here’s why.
Let’s take a look at a flowchart of how software development really occurs.
I know that there are differences in this diagram based on whether we are using “waterfall”, “Scrum”, “Kanban”, and so forth. But the differences are usually that we don’t explicitly acknowledge the stage of process that we are in, or we subsume it in another process step. For example, in eXtreme Programming, we tend to design, code, and functionally test all in one step. We use Unit Testing in Test Driven Development as the functional test in isolation, and the “refactor” step in “red/green/refactor” as a way to accomplish “design a solution”.

Where are the testers in that diagram?
That is by no means a trivial question, and it does vary based on the operating model that we use for software development. In traditional waterfall development, we usually have testing occur by role. Developers are typically assigned the responsibility to functionally test what they code, usually called “unit testing”. In fact, most of what I’ve seen in the field is that those “unit tests” are usually simplified functional “spot tests” that never make it into any regression suite, rather than the eXtreme Programming type of unit tests used in Test Driven Development. These spot tests are typically fast, throwaway, and unrepeatable. The people in QA are sometimes called upon to do this testing. The best that can be said is that we saw some version of the code that was functional for its intended purpose at the point at which the test passed.
Traditional waterfall type testing does employ QA testers at the “non-functional and regression tests” stage. QA will typically write long test plans intended to ensure that functionality not only meets the functionality desired, but also does not have an adverse impact on either previously developed functionality or non-functional aspects of the software, such as speed, capacity, etc.
In a traditional world, market tests are typically not done by either developers or QA personnel. That testing occurs only once the product has been released to the marketplace, and it is performed by the customers of the product. Unhappily, the results of that testing show up as missed market expectations, and they arrive after any realistic hope of fixing the software has passed, as the product is now in the marketplace.
If we are looking for better quality – both in terms of assuring that the software is written correctly and that it is the right code to solve the problems customers need solved – you’re going to have to do two things. You’re going to have to push testing forward, so the feedback loops occur quicker. And you’re going to have to automate as much of the testing as possible, so that you can iteratively attack the problem and not face huge amounts of labor to fix a product that is not on course with customer expectations. I can’t count the number of times I’ve seen product teams cut the software testing done after the code was developed so that it could be delivered on a date previously promised. They did it because the labor and time needed to exhaustively regression test the code didn’t fit the timelines that had been committed to. Delivering buggy code to your marketplace is an excellent way to help your competitors capture your market. So is delivering the wrong solution. This means that shortening the cycle time for this diagram is critical!

This diagram is way too simplistic
Absolutely! In Lean terms, it is a single-stream flow of one piece of required functionality within a product that a businessperson knows must be presented to the marketplace as a “minimum viable product”, a “minimum marketable feature”, or even something smaller. Depending on the process that you use for software development, you may be tempted to have as many of these functionality implementations going on at once as possible. But that increase in WIP (work in process) causes its own set of problems. If one piece of functionality being developed depends on another piece of functionality’s code, then dependencies start to rear their ugly heads. Having one piece of functionality take longer to develop than originally estimated can have compounding and confounding ill effects on the code depending on it. The same is true of a solution that doesn’t “meet the mark” on the non-functional and integrated regression testing aspects. Note that by waiting until “we are ready to release” before we start this type of testing, we potentially grow an enormous WIP of unreleased code. Any failure at this stage has a lot more code to fix than if we were able to stay with an ideal single-stream flow.

“Rules of thumb” are meaningless
You can find many rules of thumb for the ratio of QA to developers if you do a Google search with the words in the title of this blog entry. You will find people talk about 10 developers to 1 QA tester, 3 to 1, 1 to 1, and many others. My feeling is that none of these can possibly be correct. They can’t be right, because they don’t take into account the abilities of both the developer and the tester. Highly capable developers may be 10 or more times quicker at producing the same code as less capable team members. The same will hold true for QA testers. I had a conversation with Fred George a little over a year ago on this topic, and he recounted an assignment where he observed a ratio of 6 testers needed to absorb the work of one highly productive developer.
Rather than going that rule of thumb route, I would urge you to consider getting closer to a single stream flow on individual things that the software needs to do and employ the “Three Amigos” model that George Dinwiddie explains in Better Software, November/December 2011. Here, we get the BAs, QAs, and developers at the start to write automated tests that serve as the functional requirements for the work to be done. If we keep the rate of production of these collaboratively-developed tests in line with the actual rate of production that satisfies the tests, we never have to fear that we have something off balance. If we find, perhaps with a Kanban analysis, that we can’t produce enough tests to keep our developers happy with enough work to do, we may find that we don’t have enough BA types or enough QA types available for us, and can adjust accordingly.
And, yes, there will always be a place for QA exploratory testing on integrated code. But the results of that exploration should feed regression suites that are automated and repeatable.

Flow is the most important thing to a business
You may be asking yourself at this point, “Yes. I get it. I need to reduce WIP, and not worry so much about rules of thumb to get software done. But what about that test in the market? How do we get better at that?” The answer there is easy – release more often! And that will take your continuous integration solution to a whole new vista – continuous delivery – and engender a whole new set of problems that are wonderful to have, such as “how quickly can my market absorb new features, and how can I get them to accept things in a more laminar flow fashion?”
Businesses that produce software do it for a purpose – usually a pecuniary one. Understanding the reasons why excessive WIP is such a dangerous thing to have may put the onus on the business to see how to incorporate smaller product tests, in terms of releases to the marketplace, more frequently.

So, the right answer is?
Since there is no right answer to the question of “what’s the right ratio?”, let’s invoke a Kobayashi Maru sort of solution. For those of you who never saw or have forgotten Star Trek II: The Wrath of Khan, the Kobayashi Maru was a simulation exercise that Starfleet put its officers through to test whether they could save the civilians trapped in a disabled ship in the Klingon Neutral Zone. Because of the constraints involved, no cadet at Starfleet Academy had ever passed the test. Even the legendary James T. Kirk failed the test twice, only passing it the third time by reprogramming the simulator.
We can’t win this battle for correct ratios between QA and developers with simple rules of thumb. But we can fall back on the values and principles of Agility, just as Kirk fell back on his values and principles. We need to focus on the team’s people and interactions (specifically QAs, BAs, and developers) doing as much work as possible up front to increase quality (“the most efficient and effective method of conveying information to and within a development team is face-to-face conversation”). Keep WIP sizes small (“working software is our primary measure of progress”). Test our code not just during development (“continuous attention to technical excellence and good design enhances agility”), but get it into the hands of customers quickly and often (customer collaboration).
Let’s change the conversation from one asking for rule-of-thumb ratios into one that asks for collaborative development, better quality, and faster market realization of smaller and smaller chunks of valuable software. We can measure and find where WIP is causing resource constraints, and apply the traditional five-step Theory of Constraints process to fix things. In other words, let’s turn the faulty question – whose answer is bound to fail in practice – into a new quest to deliver, measure, and deliver more of what works.
The post What’s the Right Ratio Between QA Testers and Developers? appeared first on SolutionsIQ.
Transformation! Agility! Antifragility! Scalability! Sustainability! . . . All conjure up different sensations!
Coaches are catalysts for fostering success by partnering with clients in advancing their business outcomes. Fundamentally, coaches are students of Human Nature and Human Dynamics in the context of the Human Condition . . . Reality!
As Tom Landry elegantly expressed: “A coach is someone who tells you what you don’t want to hear, who has you see what you don’t want to see, so you can be who you have always known you could be.”
Knowing Mike (@mcottmeyer) and Dennis (@dennisstevens) for many years, we’ve often entertained the potential of collaborating. And on many occasions, it almost happened! However, after Mike and Dennis founded LeadingAgile (@leadingagile), I (@SAlhir) only became that much more intrigued.
Much of Mike’s, Dennis’, and LeadingAgile’s transformation experience is captured in the LeadingAgile Way and much of Mark’s, Brad’s, and my transformation experience is captured in Conscious Agility.
After my announcement regarding working with LeadingAgile, many clients and colleagues queried: How do you reconcile your experience with Mike’s, Dennis’, and LeadingAgile’s experience? My reply: One doesn’t reconcile, but integrates, unique experiences – to pragmatically serve our clients and their business outcomes! It’s all about Synergy!

The LeadingAgile Way
LeadingAgile’s transformation experience is captured in the LeadingAgile Way using a Compass, Roadmap, and Journey.

Compass
LeadingAgile’s Compass expresses a “worldview”. It describes four quadrants formed by the intersection of a Predictive-to-Adaptive dimension and Convergent-to-Emergent dimension. The Predictive-to-Adaptive dimension describes a company’s values. The Convergent-to-Emergent dimension describes a company’s customers’ values.
- The Predictive-Emergent quadrant focuses organizations on building trust.
- The Predictive-Convergent quadrant focuses organizations on becoming more predictable.
- The Adaptive-Convergent quadrant focuses organizations on reducing batch size.
- The Adaptive-Emergent quadrant focuses organizations on fully decoupling teams.
Roadmap

LeadingAgile’s Roadmap expresses a macro-level transformation approach and includes:
- Define the Strategy, which sets the destination and defines an end-state vision with a focus on Structure, Governance, and Metrics & Tools.
- Lead the Transformation, which guides expeditions progressively through basecamps with a focus on forming, training, and coaching teams.
- Prepare to Go Alone, which sustains the change with a focus on assessments, targeted coaching, and sustaining artifacts.
Journey

LeadingAgile’s Journey expresses a micro-level transformation approach using expeditions and basecamps. An expedition is a vertical slice of an organization (Portfolio, Program, and Delivery & Service teams) that is taking part in a journey from a current state to a defined basecamp goal. A journey is the specific path an expedition will take to reach the next basecamp goal. A basecamp is a specific milestone for an expedition undergoing a transformation.
- Basecamp 1, Getting Predictable, focuses on stabilizing teams.
- Basecamp 2, Reducing Batch Size, focuses on flow across teams.
- Basecamp 3, Breaking Dependencies, focuses on decoupling teams.
- Basecamp 4, Increasing Local Autonomy, focuses on team autonomy and adaptive governance.
- Basecamp 5, Investing to Learn, focuses on innovation.
Basecamps 1 and 2 are in the Predictive-Convergent quadrant of the Compass, Basecamp 3 is in the Adaptive-Convergent quadrant, and Basecamps 4 and 5 are in the Adaptive-Emergent quadrant.

Conscious Agility
Conscious Agility — with roots in the real-world practice of Conscious Capitalism, Business Agility, and Antifragility — is a design-thinking approach for business ecosystems that integrates awareness with intuition, orientation, and improvisation so that individuals and collectives may benefit from uncertainty, disorder, and the unknown.
While many business movements strictly adopt a single perspective — some focus on the so-called “softer” aspects (dynamics) while others focus on the so-called “harder” aspects (mechanics) — Conscious Agility remains agnostic and embraces an all-inclusive viewpoint, integrating relevant perspectives yet keeping the “human element” paramount.
Not to discount any particular movement, but any myopic viewpoint often obscures reality and the importance of the “human element”; and while all business movements would readily argue that they don’t elevate their “particulars” over the “human element”, experience leads us to conclude otherwise!
A Conscious Agility initiative is a cycle of “fundamental change” (or renewal), which is uniquely organic, simultaneous, and holistic in addressing a business perspective, organizational perspective, and culture perspective while being business or industry domain agnostic and technology agnostic, and open to being combined with other approaches.
A Conscious Agility initiative is organized into a cycle of phases (which are generally sequential but may overlap), which are composed of conversation clusters, which are in turn composed of conversations. Conversations occur in any order and as many times as needed to ensure the overall objectives of the phase are achieved; they are oriented towards addressing one or more questions that activate how people relate to one another and how people behave with one another.
There is absolutely no rigidity, but complete and utter flexibility, in how the phases, conversation clusters, and conversations are used in a Conscious Agility initiative. And the scope of an initiative is completely flexible – a whole enterprise or any subset of the enterprise.

Foundational Concepts
Edgar Schein defines Culture as “a pattern of shared tacit and interconnected assumptions that was learned by a group as it solved its problems of external adaptation and internal integration, that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems.”
Conscious Agility relies on the following foundational concepts, which are rooted in Edgar Schein’s definition of Culture:
- An Ecosystem is a collection of stakeholders, including their environment.
- A Stakeholder is an entity who impacts or is impacted by other stakeholders.
- Stakeholder identity encapsulates awareness and ownership. Identity should not be confused with ego, which is focused only on the “self”!
- Awareness involves stakeholders being conscious of themselves and one another.
- Ownership involves stakeholders embracing how they impact or are impacted by other stakeholders within the ecosystem.
- Values are tenets that stakeholders consider meaningful. Individual stakeholders unite based on compatible values.
- Purpose is the reason or “why” stakeholders form an ecosystem. Individual stakeholders unite around a shared purpose.
- Value is worth that stakeholders experience. Individual stakeholders unite with an orientation towards co-creating value.
- A Conversation is an exchange between stakeholders. Conversations are purposeful, value-oriented, and values-based. A Conversation Cluster is a collection of conversations.
- A Canvas is a description of reality.
A Conscious Agility initiative cycle includes the following phases: Define, Create, and Refine.

Define Phase
The Define phase focuses on fostering awareness of stakeholders, envisioning an improved way of working together, and establishing clarity around the initiative. This includes:
- Establish a Design Team, which launches the initiative with a team of individuals who represent the stakeholders within the ecosystem.
- Discover a “minimal” Ecosystem Definition, which establishes design team ownership of a path forward toward an improved way of working that is considerate of all stakeholders.
Create Phase

The Create phase focuses on achieving greater awareness, intuition, orientation, and improvisation by evolving the ecosystem and enacting shared experiences, while also integrating stakeholders. This includes:
- Enact Experiences, which exercises and evolves the new way of working.
- Integrate Stakeholders, which exercises and evolves a new way of organizing.
Refine Phase

The Refine phase focuses on ensuring stakeholders have sufficiently evolved the ecosystem to nurture continued success, allowing the initiative to draw to closure. This includes:
- Embrace Experiences, which ensures that the new way of working is embraced by all stakeholders and reflects stakeholder experiences within the ecosystem.
- Nurture Stakeholders, which ensures that the ecosystem has fully integrated the new way of organizing and is nurturing all stakeholders.
What is the synergy between the LeadingAgile Way and Conscious Agility?

Compass and Foundational Concepts
LeadingAgile’s Compass and Conscious Agility’s foundational concepts are congruent “worldviews”.
- Schein’s emphasis on “external adaptation” is operationalized with the Convergent-to-Emergent dimension and Value.
- Schein’s emphasis on “internal integration” is operationalized with the Predictive-to-Adaptive dimension and Identity, Values, and Purpose.
Furthermore, the Compass as well as the Stakeholders and their Conversation Clusters and Conversations integrate these internal and external aspects.

Roadmap and Phases
LeadingAgile’s Roadmap and Conscious Agility’s phases are congruent macro-level transformation approaches.
- LeadingAgile’s Define the Strategy aligns with Conscious Agility’s Define phase.
- LeadingAgile’s Lead the Transformation aligns with Conscious Agility’s Create phase.
- LeadingAgile’s Prepare to Go Alone aligns with Conscious Agility’s Refine phase.
Specifically, as LeadingAgile’s Strategy focuses on Structure, Governance, and Metrics & Tools, Conscious Agility’s Ecosystem Definition (and Canvas) focuses on these among other relevant aspects.

Journey and Conversation Clusters
LeadingAgile’s Journey and the basecamps along with Conscious Agility’s Enact Experiences, Integrate Stakeholders, Embrace Experiences, and Nurture Stakeholders conversation clusters are congruent micro-level transformation approaches.
- Basecamps 1, 2, and 3 relate to Enact Experiences and Integrate Stakeholders.
- Basecamps 4 and 5 relate to Embrace Experiences and Nurture Stakeholders.
While this emphasizes a general congruence, there are other natural affinities among basecamps and conversation clusters.

. . . Partnering with Clients in advancing their Business Outcomes!
As Stephen Covey emphasized: “Synergy is better than my way or your way. It’s our way.”
Synergy is all about integrating, not merely reconciling, our unique experiences in order to best pragmatically serve our clients and their business outcomes!
And should irreconcilable differences emerge in our journey ahead, how we continue (together or separately!) will be anchored and guided by our cause of serving our clients and their business outcomes!
The post It’s all about Synergy in Pragmatically Serving Clients appeared first on LeadingAgile.
I recently found myself needing to support an unknown URL folder structure with an Express router. The gist of it is that I am serving video files that may or may not be in a sub-folder, from another service.
For example, I need to have URLs with any number of folder levels – a file at the root of the mount, or a file nested several folders deep – all handled by the same Express router.
The request coming into my server will be logged and forwarded to the actual host, and it would be easier for me to maintain the same folder structure as the actual host.
And after some digging, I found there are at least three ways to make this work – most of which involve copy & paste programming.

The Hard Way
The really hard way of handling this is to manually code a route for each of the supported folders. I call this “hard” because it involves a lot of duplicated code – copy and paste programming.

An Easier Way
Shortly after I realized how terrible it is to hard-code each folder – because of 1) the duplicated code, and 2) the limited sub-folders and nesting that I could support while doing this manually – I found a way to optionally specify any number of sub-folders: adding an asterisk after the :folder parameter, as in /:folder*/:file.
Adding the asterisk after the :folder parameter name allows Express to use any number of folders at that point in the URL.
But this doesn’t work with no folders – files at the root of the URL mount – so I still ended up with 2 routing entries: one for files at the root, and the /:folder*/:file route for everything else.
While 2 entries instead of 10 or 15 or more is a pretty big improvement, I wanted to do better. This is still a tiny bit of copy & paste in the URL definitions, even if the handler function was re-used.The Really Easy Way
With some further digging through Google and StackOverflow, I eventually found the answer I needed for my route setup: /:folder*?/:file.
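Roughly, the single route definition looks like this – a sketch, assuming an Express 4 router, with an illustrative handler; the plain-regex stand-in below is my own approximation of what the pattern matches, not Express’s exact compiled regex:

```javascript
// Single Express route: ":folder*?" = zero or more folder segments,
// ":file" = the file name at the end. (Sketch only; handler is illustrative.)
//
//   var router = require("express").Router();
//   router.get("/:folder*?/:file", function (req, res) {
//     // log the request and forward it to the actual video host
//   });

// A plain-regex stand-in for what such a pattern matches: an optional run
// of folder segments, then a single file name.
var roughEquivalent = /^\/(?:(.+)\/)?([^\/]+)$/;

function matchVideoPath(url) {
  var m = roughEquivalent.exec(url);
  if (!m) { return null; }
  return {
    folders: m[1] ? m[1].split("/") : [], // [] when the file is at the root
    file: m[2]
  };
}

console.log(matchVideoPath("/intro.mp4"));                 // folders: []
console.log(matchVideoPath("/series/season-1/intro.mp4")); // folders: ["series", "season-1"]
```

The same matcher handles both the root-level case and arbitrarily deep nesting, which is exactly why the one-route version was worth hunting down.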
By adding the ? to the end of the folder parameter with the asterisk, I am telling Express that there will be zero or more folders. This allows me to have a single route definition that handles all of my use cases – no folders, all the way out to as many nested folders as needed.

Very Express-ive Routes
Express allows for some very complex and expressive routing setups, with many different options to get the job done. It wasn’t until I really started digging into this need that I saw the use of regular expressions and wildcards, though.
The more I work with express, the more I seem to learn from it and the more I want to continue working with it to really master every last detail.
If you’re interested in Express and want to see how I build large apps, join me at WatchMeCode and check out the Architecting Express series, and the follow-on series. You’ll get an over-the-shoulder view of how small Express apps can be composed into larger systems.
Depending on when, how, and from whom you’ve heard of DevOps, its origin and meaning as well as the reasons for its implementation are going to be different. The reason is fairly straightforward: unlike Agile, which was born in February of 2001 when the signatories got together and drafted and signed into existence the Agile Manifesto, DevOps—like most new concepts—has no hard and fast birthdate. As a result, what it is and how you do it have largely been left up to the many, many individuals trying to make it work.
There is, however, one central point: There was, is, or has been a problem within corporate information technology departments between Development and Operations and, as a result, something is happening to address that problem. That thing is called “DevOps”.
This DevOps (r)evolution became manifest in March of 2011 when Gartner published a slide that had a simple predictive statement in it:
“By 2015, DevOps will evolve from a niche strategy employed by large cloud providers into a mainstream strategy employed by 20% of Global 2000 organizations.” (The Rise of a New IT Operations Support Model, Gartner, 2011)
We’re still in the beginning of 2016 as I write this, but I’d say the prediction has come true for the most part.
So let’s talk about what DevOps is, at least from my perspective.

What is DevOps?
First, I agree with Gartner: DevOps is a “strategy”. It isn’t a separate department that lives between Development and Operations and acts like some kind of interpretive interface. Where we may have a difference of opinion, however, is in DevOps’ scope.
DevOps isn’t just an IT operations support model. It’s a culture, a movement, a practice, and a concept that emphasizes collaboration and communication and encourages us to look at the business activities we perform differently. Specifically, DevOps requires us to not only take a critical eye to our own behavior and adapt to the reality that we see as a result, but also to change the tools we use to increase the speed of change and provide better information earlier so that business decisions happen when the opportunity arises—not years, months, weeks, or even days later. DevOps is not limited to IT; it is part of an overarching enterprise Agile Transformation.
The principles of Agile, which first focused on software development specifically and then evolved into systems development practices and principles, have expanded further to incorporate R&D and manufacturing. While Agile and Lean thinking and principles ultimately gave rise to DevOps (as well as Continuous Delivery, which overlaps with but is different from DevOps), DevOps extends beyond just software development. Where Agile seeks to remove the friction between end-users and developers, DevOps focuses on the removal of the causes of friction in general.

While Agile strongly advocates a preference for personal interactions over processes and tools, it does not obviate the necessity of those processes and tools. An effective DevOps implementation is entirely dependent on the selection of tools, practices, and ideas that enrich Agility. While this may appear to be a point of contention, or even a collision, in reality DevOps values personal interaction so much that tools are used to enrich these interactions. If and when that doesn’t work out, DevOps principles advocate removing the tool or process in favor of others. Having a DevOps mindset means ruthlessly identifying the source of any friction and doing whatever is necessary to remove it. If your personal interactions are impeded because your teams are distributed – as are most teams in the world – DevOps encourages you to find whatever will remove that friction (e.g., bring the team together and/or use tools and practices that improve communication).

DevOps is About Tooling for Business Success
DevOps is about tooling for business success, and that’s why there are so many articles about the plethora of tools that will address business needs all along the delivery chain. Wikipedia gives a list of general tool categories (to wit: Code, Build, Test, Package, Release, Configure, Monitor), and New Relic even has a glossary of “Tools for DevOps”. Though this “periodic table of DevOps tools” by Xebia Labs really takes the cake, at least design-wise.
Keep in mind: it’s not tools for tools’ sake, but tools for the business’ sake. So I want to share the only thing you really need to know about DevOps and tooling (whether you’re in IT or not), and it’s actually pretty simple:
1. Your operational success is directly related to how well you define your business goals and choose the tools that will let you achieve those goals. Specifically, your business goals should guide you as to which tool to use, and not vice versa.
   a. Abstract the platform/infrastructure to implement based on the service or product you want to deploy. You should be able to change out platform size/components as needed to scale up or down without additional development. As Uncle Bob Martin put it, “Make sure the business rules don’t depend on the database.”
   b. Use dashboards in your business logic to dynamically track and, as needed, adjust performance and capacity. Your key performance indicators should indicate how well your platform/infrastructure is delivering against your business goals. For example, if your performance is shown to run high frequently or constantly, you may want to consider scaling up; but if you never exceed 20% of your max capacity, then you are paying for capacity you don’t need, so scaling down would make sense. (This is a common problem: lots of companies implement the very expensive, overkill Oracle when a more cost-effective SQL database will do just fine. This is akin to buying a Maserati only to drive it like a VW Beetle.) You can also set alarms that draw attention to ineffective or inefficient use of resources and even recommend improvements.
   c. When in doubt, err on the side of giving end users more, not less, administrative control. Design administrative functions into the application or service to limit unnecessary super-administrator intervention. A simple rule to live by: if you have to restart the app to implement changes, then the associated admin rights should be assigned to the super-admin. If an app restart isn’t necessary, then put the admin power in the user’s hands so they can make as many customizations as possible as quickly as possible.
2. If you let your business needs and logic guide you as you build the platform/infrastructure, you should be able to quickly and flexibly release the services, products, and/or applications that your operational users need.
   a. If you ensure 1a above, then your platforms should be consistent in behavior, elastic in scale, and available on demand.
   b. If you ensure 1b above, then operational capacity and/or performance will be visible and can dynamically alert you when you need to make changes to improve performance and/or decrease unnecessary expense.
   c. If you ensure 1c above, then procurement and provisioning plans will be designed to provide what is needed when it’s needed, minimizing platform warehousing or backlogs.
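The dashboard-driven capacity rule described above – scale up when utilization frequently runs high, scale down when it never tops 20% – can be sketched as a toy policy. This is purely illustrative; the function name and the 80%/20% thresholds are assumptions for the sketch, not a standard:

```javascript
// Toy capacity policy (illustrative only): decide whether to scale a
// platform based on utilization samples (0.0–1.0) from a monitoring
// dashboard. The 80% "running hot" and 20% "underused" cutoffs are
// assumed thresholds, not recommendations.
function scalingAdvice(utilizationSamples) {
  var max = Math.max.apply(null, utilizationSamples);
  var avg = utilizationSamples.reduce(function (a, b) { return a + b; }, 0) /
            utilizationSamples.length;

  if (avg > 0.8) { return "scale up"; }    // frequently/constantly running high
  if (max <= 0.2) { return "scale down"; } // paying for capacity never used
  return "hold";
}

console.log(scalingAdvice([0.9, 0.85, 0.95])); // "scale up"
console.log(scalingAdvice([0.1, 0.15, 0.05])); // "scale down"
```

An alarm hook could hang off the same data, flagging sustained “scale down” advice as exactly the kind of inefficient resource use the dashboards are meant to surface.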
If you put the two fundamental ideas together, you end up with a simple guiding statement:
What you put in (1) is what you get out (2).

Conclusion
If you’re attempting an enterprise-wide Agile Transformation but you’re only working on the development side of the equation, you’re probably experiencing a lot of problems. A DevOps approach to your Agile transformation engages everyone in achieving your objectives at the same time, but more importantly it addresses one of the primary drivers of an Agile transformation: the realization of value. In Agile development, the delivery team ends each iteration or sprint with a demo. This engages the users early and often and allows for quick adjustments. However, the realization of business value only occurs once what was demonstrated is available to be used. DevOps closes the gap between demo and reality such that at the end of each iteration or sprint, the “demo” is released to production and in the target end user’s hands.
In my last post here on the LeanKit blog, I wrote about the hidden dangers of vanity...
Welcome to the German version of WordPress. This is the first post. You can edit it or delete it. And then start writing!
I recently found myself writing an R script to extract parts of a string based on a beginning and end index which is reasonably easy using the substr function:
> substr("mark loves graphs", 0, 4)
[1] "mark"
But what if we have a vector of start and end positions?
> substr("mark loves graphs", c(0, 6), c(4, 10))
[1] "mark"
Hmmm, that didn’t work as I expected! It turns out we actually need to use the substring function instead, which wasn’t initially obvious to me from reading the documentation:
> substring("mark loves graphs", c(0, 6, 12), c(4, 10, 17))
[1] "mark"   "loves"  "graphs"
Easy when you know how!
As you know, not everything that’s valuable in SAFe can make it to an icon on the Big Picture. One such example is the new Guidance article on Agile Contracts, which we developed as part of 4.0. In the hubbub around the 4.0 release, we didn’t take the time to bring any attention to this new content, so we are doing so now.
Working with customers on a more collaborative basis is a key element of achieving a leaner and more agile enterprise. Such an approach moves the Customer-Supplier relationship (see also the new 4.0 Customer and Supplier articles) to a win-win paradigm, one that improves the overall economic outcomes for both parties. True “partnering” in Lean means exactly that: a supplier engagement model built on long-term customer commitments to key suppliers and on earned trust.
But even in the presence of trust, how does one go about committing substantial investment to an Agile program, where it’s impossible to know exactly, in advance, what the buyer is getting? That’s the interesting topic this new article addresses. The article highlights a “SAFe Managed Investment Contract” approach, which uses SAFe practices, nomenclature, and PIs as objective milestones for contract governance. Here’s a graphic teaser from the article:

SAFe Managed Investment Contract execution phase
We are placing this blog post in the “Updates” category so it will appear in the Updates field above the Big Picture. That simply serves as a reminder of new content under development, and in this case as a prompt to more readily get your comments on this important topic.
— Dean, Drew and the SAFe Team
Of all of our customer requests, improved reporting and analytics is one of the most popular, consistently ranking near the top of the most requested features on our marketing and exit surveys.
Until recently, we hadn’t taken the time to focus on improving our analytics. The biggest reason is that we didn’t really use any project health metrics ourselves. Our team was small, we didn’t really “report to anyone,” and we didn’t have hard deadlines. This isn’t to say that we couldn’t benefit from project health metrics; we just hadn’t experienced a strong need for them, so we had difficulty addressing a need we couldn’t identify.
The obvious answer was to speak to customers and Pivotal Labs consultants (aka, “Pivots”) as well. But who, and how do we find them?

We spoke to three groups of PMs
1. People who were using our (really) old reporting features
Our first step was to go to customers and Pivots already using our reports. The goal was not only to discover how they were using reports now and what was valuable to them, but also to learn what other reports they create (with or without Tracker data). While a good starting point, speaking only to current customers can be risky: the concept of a “product death cycle” illustrates that listening only to your current customers does not reveal the needs of those who don’t use your product.
2. Customers who told us they wanted better reporting
The second group we spoke to included people who mentioned reporting, dashboards, and charts in feature requests and marketing surveys. These represent an underserved group who were not finding value from our current reports. In many cases, they were using other tools (or creating their own tools) to compensate.
3. PMs managing large, ongoing projects
The third group was customers and Pivots managing larger teams, who didn’t necessarily ask for analytics but had a depth of experience across various engagements. For this group, we wanted to know which metrics mattered most to them for tracking team progress in general. We looked at larger customers because we found a correlation between larger teams/multiple projects and a greater need for holistic views of projects, as their size makes them difficult to manage directly.

What we learned: experienced PMs consider predictable teams healthy teams
We’ve been hearing for some time that experienced PMs focus less on the speed of the team and more on its predictability. This is because predictability is about confidence: it gives someone a better idea of when something might be done, with fewer surprises along the way. When we spoke to experienced PMs—particularly Pivots—a number of recurring factors for measuring predictability kept emerging:
1. Consistent velocity (low volatility)
Velocity (points accepted) is often the first, and sometimes only, metric that PMs pay attention to. But instead of looking at velocity numbers in isolation, experienced PMs track trends in velocity over time for signs of peaks and valleys, known as volatility. We have a volatility metric in Tracker, but this number alone might not tell you as much as visualizing the trend on a graph. Ken Mayer wrote about the importance of paying attention to volatility on our blog a few years back.
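The post doesn’t spell out how Tracker computes its volatility number, but one common way to express the idea (a hedged sketch, not Tracker’s actual formula) is the coefficient of variation of recent velocities: the standard deviation as a fraction of the mean. Two teams with the same average velocity can have very different volatility:

```python
from statistics import mean, stdev

def velocity_volatility(velocities):
    """Coefficient of variation of per-iteration velocity.

    Lower values indicate a steadier, more predictable team.
    """
    avg = mean(velocities)
    return stdev(velocities) / avg if avg else float("inf")

# Both hypothetical teams average 10 points per iteration.
steady = [10, 11, 9, 10, 10]
spiky = [4, 18, 6, 20, 2]
print(round(velocity_volatility(steady), 2))  # → 0.07 (consistent velocity)
print(round(velocity_volatility(spiky), 2))   # → 0.84 (peaks and valleys)
```

The point of the sketch is that the average alone hides exactly the peaks and valleys an experienced PM is looking for.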
2. A smooth process pipeline and few blockers
This includes any stories that are blocked or not moving through the delivery process as expected. A common problem is a team that has delivered a number of stories that have not yet been accepted or rejected, a pattern colloquially known as “Christmas time” due to the red/green button coloring on the dashboard. Another common issue is having too many stories started but not delivered (aka, “too many balls in the air”). Experienced PMs look for these patterns frequently to get a sense of ongoing project health.
3. Predictable time between story start and acceptance
For planning purposes, Agile discourages pegging time to story work (or points), because doing so skews team estimation away from complexity and toward time. But knowing how long it typically takes to get work done is an asset to a PM trying to identify process bottlenecks or plan several iterations out.
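As a minimal sketch of this kind of cycle-time tracking (using made-up story records rather than the Tracker API), you can compute the days from start to acceptance per story and then look at the typical value and the outliers:

```python
from datetime import date
from statistics import median

# Hypothetical (story, started, accepted) records, not real Tracker data.
stories = [
    ("login form",     date(2016, 3, 1), date(2016, 3, 3)),
    ("password reset", date(2016, 3, 2), date(2016, 3, 8)),
    ("signup email",   date(2016, 3, 4), date(2016, 3, 5)),
]

# Elapsed days from start to acceptance, per story.
cycle_times = [(accepted - started).days for _, started, accepted in stories]
print(median(cycle_times))  # typical start-to-acceptance time, in days
print(max(cycle_times))     # outliers hint at a process bottleneck
```

The median gives the “typical” duration for planning, while a large maximum flags the stories worth investigating.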
Closely tied to completion time is rejection rate. PMs reported looking for trends in rejection; too many rejections in a particular iteration were a warning sign. Rejections can point to a number of process problems: loosely defined acceptance criteria, missed steps, or misunderstood story requirements.

PMs were already creating reporting tools with Tracker
Note: All of these report examples were recreated from customer and Pivotal report formats. The project names and content have been changed to protect privacy.
In another post, I wrote about the importance of using artifact research to get to customer pains quickly, tease out needs vs. wants, and demonstrate value to stakeholders. I won’t go into depth about it in this post, but most of our insights about customer reporting needs came from looking at what customers were doing already.
What did we get done? What is still in flight?
One report we encountered early on was a basic progress report, which seemed ubiquitous across customers and Pivots. The format was straightforward: what the team accomplished in some period of time (usually an iteration) and what was still in progress. These reports sometimes also included some notion of what was coming down the pipeline in the next iteration. In all cases, the progress reports used stories as the basic unit of accomplishment.
Below are two examples of this report: one from a Pivotal Labs PM, another from a customer. Both are roundups of stories accepted, with the darker version (a Markdown file) also showing what was in flight and unstarted.
What is the status of the initiatives I care about? When will they be done?
Most of our customers (and Pivots) use epics to represent major feature areas, which makes sense given that epics are essentially large features. Many teams also version their epics to show their place in a bigger roadmap (e.g., “Login screen v1,” “Shopping cart v2,” etc.).
To this end, epic progress reports were among the most common reports we saw. They took many forms, but the most common was a report each iteration listing recently completed, in-progress, and upcoming epics. The degree of detail varied between reports, but the constant was knowing which epics were complete, which were in progress (and what was left in them), and what was coming down the pipeline.
Did we accomplish what we expected?
The first version of Tracker Analytics focused on helping teams become more predictable
Customers would frequently tell us that they wanted to see what they planned to do vs. what they actually accomplished. This closely follows the theme of predictability, as it gives PMs a sense of what may happen in the future based on what happened in the past. More importantly, it allows PMs to identify problems with their process.
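A toy planned-versus-accepted comparison (with invented numbers, not a real Tracker report) shows why this view surfaces process problems quickly:

```python
# Hypothetical per-iteration planned vs. accepted points, not real Tracker data.
planned  = [20, 20, 22, 18]
accepted = [18, 21, 14, 17]

for i, (p, a) in enumerate(zip(planned, accepted), start=1):
    ratio = a / p
    # Flag iterations that delivered well under what was planned.
    flag = "  <- investigate" if ratio < 0.8 else ""
    print(f"iteration {i}: planned {p}, accepted {a} ({ratio:.0%}){flag}")
```

A single bad iteration may just be noise; a run of flagged iterations suggests the plan or the process, not the team, needs adjusting.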
There was a lot we wanted to do with Tracker Analytics, but for the first version, we kept things simple and focused on the most basic form of the above indicators. We designed Analytics to help teams clear process bottlenecks, stay predictable, and communicate status. We wanted to help PMs answer the questions:
- What did we work on? How much did we accomplish?
- Are we predictable? What could we improve on?
- Are there any bottlenecks in our process?
Lisa Doan wrote a great post on the basic usage of Analytics. In the article, she talks about using Analytics to make teams more predictable, identify bottlenecks, and report on feature work.
How are the new Analytics working for your team? Let us know by using the Send us feedback widget in the top left of Analytics, or email email@example.com.
The post How Customer PMs Helped Us Design Tracker Analytics v1 appeared first on Pivotal Tracker.
Are you exploring agile/lean management practices? Submit a draft agile/lean research paper or experience report by June 15, 2016 to the Agile/Lean mini-track at the Hawaii International Conference on System Sciences (HICSS)!
The HICSS conference, sponsored by IEEE, brings together a broad cross-section of researchers in system sciences—including software development, social media, energy transmission, marketing systems, knowledge management and information systems. Agile and lean management practices apply to all of these fields.

Influential papers on Scrum patterns, agile metrics, lean forecasting, qualitative grounded inquiry, distributed development and large-company experience reports have appeared in past years. HICSS 50 will be held January 4-7, 2017 at Hilton Waikoloa Village, Big Island, Hawaii.
In conjunction with, and in celebration of, the 50th HICSS conference, submissions from this mini-track may be selected for fast-track consideration in the Journal of Information Technology Theory and Application (JITTA) and the AIS Transactions on Human-Computer Interaction.
If you are researching or innovating in applying agile and lean principles, we welcome your submission. The full call for papers is here: Agile/Lean HICSS-50 Call for Papers.
Help us extend the agile and lean frontier by presenting your work at HICSS.
The post Call for Papers: Agile / Lean at HICSS (due 15 June 2016) appeared first on LeadingAgile.