Agile Chronicles – Composite Stories
Agile Artifacts – Ephemeral v. Enduring Value
During retrospectives, when evaluating the quality and value of our artifacts for Epic, Feature, and Story decomposition, a common theme among our scrum teams is that these artifacts are by design barely sufficient, and as such are ephemeral and provide no enduring value.
The design is in the code, the documentation is in the code, so we leave these artifacts attached to the engineering cards in our Agile Lifecycle Management (ALM) tool, close the cards when complete and never reference them again. Well, maybe we retain some Quality Assurance scripts that are still performed manually, but soon we will complete our QA Automation program and then the documentation will be in the code (automated scripts) and we won’t need to maintain a document artifact for QA scripts either. We accept this as a natural consequence of “barely sufficient” and we move on to the next sprint.
What if there was some undetected value in some of this information that, if sustained over time with minimal effort, could provide enduring value and help us achieve our team and business objectives?
Consider the case for managing software assets by creating and sustaining a definitive list of features for the software asset. This list becomes the feature dictionary, a common language for all teams and manifests itself throughout the Epic, Feature, Story life cycle.
Here is the brief story of an Agile Transformation and the value we discovered by performing software asset feature management and using that common feature definition to enable traceability for scrum team accountability, Quality Assurance test planning, code file ownership, portfolio analysis, competitive analysis and financial analysis.
We have just over 5M lines of code and the list of features began as a two-tiered description of 20 Capabilities and 70 related Features. The features were later delineated to 675 sub-features (about 10 sub-features per feature) to add more granularity to our traceability.
The driving business reasons for agile transformation were Quality first and foremost, but Predictability was also a problem that needed to be solved.
“We’ve done the PxQ analysis and if we dedicate two resources from each scrum team we can fix 700 defects in 9 months. We can do it in 6 months if we hire some contract resources”
Scrum teams were delineated by the list of features and corresponding software that they “own” and are accountable for. This enabled the scrum teams to focus on improving their knowledge of their software asset and focus on improving the quality of the software asset by allocating sprint time for refactoring and for reducing the technical debt that they inherited. Each defect was re-triaged in order to assign it to a specific scrum team for resolution, and as a result each scrum team had clear visibility to their defect backlog.
“You are fixing a few problems and breaking something else”.
Our client’s experience with our product was expressed as a negative impact to our business in the form of a declining Net Promoter Score and other reference-ability measurements. Participation in our client beta test program had dwindled to just a few long-term clients. The client pain manifested itself in the form of client incidents, some (or many, depending on who you talked to) of which were caused by software defects. To reduce mean time to repair (MTTR), the scrum teams began providing recurring support in the form of team member rotations through the client incident triage process. They focused on quickly resolving the incidents that were easily correctable without software changes, and were also responsible for assigning any defects that emerged to the scrum team accountable for the root cause feature set.
The predictability of delivering value to our customers depends on a well-groomed backlog: on how well we define the Epic that enables that value. The Epic is defined by the common list of features that are changed or added as a result of the Epic objective. This list of features per Epic is used to assign the features to the accountable scrum teams, to elaborate the Feature modifications required for the Epic, to define dependencies, and to perform Feature-to-Story decomposition and story point estimation.
“Why are we focusing our QA Automation efforts on an industry standard code coverage objective instead of focusing on defect hot spots and areas of code complexity? We need depth of coverage in targeted areas more than we need breadth of coverage for feature sets and features with minimal technical debt.”
Now let’s extend feature traceability to Quality Assurance (QA) scripts and to code files in the software version management tool by denoting the QA scripts and code files associated with each feature. This enables the QA team members to plan based on the complexity of the feature changes to specific code files and to schedule the automated and manual testing that is necessary during each sprint. They can further verify this plan by relating the code file change reports produced in each of the build processes during the sprint to the corresponding features and QA scripts. This focuses QA feature testing on (though does not limit it to) the specific and adjacent feature-set deltas in each code build.
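To make the traceability idea concrete, here is a minimal sketch of a feature-to-artifact mapping and a lookup that answers "which QA scripts does this build affect?". The feature names, file paths, and script IDs below are hypothetical illustrations, not the actual tooling described in the article.

```python
# Hypothetical feature dictionary linking each feature to its code files
# and QA scripts; a real system would live in the ALM/SVM tools.
FEATURE_MAP = {
    "billing.invoicing": {
        "code_files": {"src/billing/invoice.py", "src/billing/tax.py"},
        "qa_scripts": {"QA-101", "QA-102"},
    },
    "billing.payments": {
        "code_files": {"src/billing/payments.py"},
        "qa_scripts": {"QA-201"},
    },
}

def scripts_for_build(changed_files):
    """Given the code files changed in a build, return the QA scripts
    covering the features those files belong to."""
    scripts = set()
    changed = set(changed_files)
    for links in FEATURE_MAP.values():
        if links["code_files"] & changed:   # feature touched by this build
            scripts |= links["qa_scripts"]
    return scripts
```

Feeding the build's change report into `scripts_for_build` yields the focused test plan for that build, rather than re-running the full suite every time.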
“Why are we using our least experienced scrum team members and contract resources to fix defects in our highest complexity code?”
Next, let’s study our software asset by analyzing the cyclomatic complexity of the code files. This standard McCabe evaluation provided some insight into which code files required subject matter expertise and extra scrutiny when the corresponding features were scheduled for change in sprint planning. These dependencies were discussed during sprint planning, annotated in the ALM tool, and scheduled for early resolution in the sprint.
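For readers unfamiliar with the metric, here is a rough sketch of McCabe cyclomatic complexity for a Python function, counting decision points with the standard-library ast module. Production tools (radon, lizard, and the like) handle many more node types and languages; this only illustrates the idea of the measurement.

```python
import ast

# Node types treated as decision points in this simplified model.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: decision points + 1,
    assuming a single entry and exit."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1
```

A function with one `if` and one `for` scores 3; files whose functions score high are the ones that warrant a subject matter expert in the sprint.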
“Why are we doing this, why are we adding or changing this feature of the product”?
Next, the scrum teams were encouraged to ask the product managers and product owners to explain the product vision so they could include that information in their respective sprint goals and release goals. The most important question to answer for the scrum teams was “why are we doing this, why are we adding or changing this feature of the product”? The answers were usually a rote response of “competitive response or competitive advantage”.
These recurring questions led the product management team to take a more proactive approach to answering this question and use the software asset feature list for quantitative and qualitative evaluation of competing and adjacent products. Our scrum team members were able to compare the specific feature sets for which they were accountable to the corresponding feature sets of competitive products. This was a knowledge accelerator for the scrum teams and most team members made it a priority to regularly assess these competitors for feature changes and shared this information during story grooming and sprint planning sessions.
Do we have a strategy for investment and are we executing it?
Over time, because we attached the feature annotation to all of the engineering cards in our ALM tool for our work on investments, enhancements, maintenance, and defect repair, we accumulated a lot of good information.
For each portfolio investment category and each feature set and feature, we had a near real-time and continuous flow of information, such as effort expended, story point investment levels, and defect hot spots. All of these measurements could be correlated to investment strategy, code complexity, QA coverage (depth and breadth), and competitor assessment. This information mostly confirmed our portfolio planning, but sometimes contradicted it.
We used a 3-6 month portfolio plan horizon to rationalize future scrum team feature re-alignment and to assess the impact of near-term investment spending adjustments and budget constraints. The value and sight distance of this planning horizon was directly proportional to how well groomed our backlog was at the time.
So, to summarize the business value we received from software asset feature management:
- We initially used the feature list to define scrum team accountability. The features were related to code files in the software version management repository, and team-based access control for the code files associated with each scrum team’s feature set ensured 100% accountability for all software changes to that feature set.
- Sprint planning based on code complexity assured that the proper level of subject matter expertise was applied to high-complexity software deltas, in the form of the most knowledgeable team members validating the work of less experienced team members, and that a commensurate level of quality assurance effort was applied, including increased depth of testing and more testing of adjacent features.
- The focus of quality efforts per build based on the features and code files that changed provided the optimal use of the limited QA resources of time and effort (even automated testing takes time).
- The competitive analysis information was new information to the scrum team members. It accelerated their knowledge of the product and made them active participants in continual market analysis.
- The portfolio view of accurate information enabled fact-based decisions for WIP and increased the accuracy and sight distance of our planning horizon.
The tangible benefits to our clients included:
- Much better results from our technical debt reduction program; it got us out of the cycle of, in our customers’ words, “fixing a few problems and breaking something else”.
- Most impactful was the renewed participation in our client beta test program and the willingness of the participants to express the value they received in terms of improved quality and feature improvements to other customers.
- This was reflected in improved client reference-ability.
The benefits to our software development organization were:
- Made our scrum teams much more knowledgeable about the software asset that they “own” in terms of complexity and feature value to the business.
- Provided a common language and some standardized practices for all scrum teams, and improved time to productivity for new team members through Epic-Feature-Story-code-file-QA-script traceability.
- Enabled the scrum teams to understand the methods and level of effort required to produce zero-defect software, and made them realize that this was a realistic and achievable goal.
In conclusion, having had this experience, we agreed that each of these questions and approaches would be handled differently next time.
“We’ve done the PxQ analysis and if we dedicate two resources from each team we can fix 700 defects in 9 months. We can do it in 6 months if we hire some contract resources”
Throwing money and resources at a quality problem will certainly fix many defects, but the incremental defect injection or leakage may go undetected.
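A toy model makes the point: if every fix carries some regression risk, net quality improves more slowly than the raw fix count suggests. The 700 fixes and the 15% injection rate below are hypothetical numbers for illustration only, not data from the transformation described here.

```python
def net_defects_removed(fixes, injection_rate):
    """Defects actually removed once injected regressions are subtracted.

    fixes          -- number of defects repaired
    injection_rate -- fraction of fixes that introduce a new defect
    """
    injected = fixes * injection_rate
    return fixes - injected

# With a hypothetical 15% injection rate, 700 fixes remove only 595
# defects net -- and the 105 injected ones may be in untested areas.
```

The PxQ plan counts only the first term; the second is what customers experience as "fixing a few problems and breaking something else."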
“You are fixing a few problems and breaking something else”.
Believe the terrain: if your customers are telling you this, then it is true, and you have a problem that needs to be analyzed and resolved. Please do not rationalize it, as we did, by telling yourself that “we are fixing far more defects than we inject”.
“Why are we focusing our QA Automation efforts on an industry standard code coverage objective instead of focusing on defect hot spots and areas of code complexity? We need depth of coverage in targeted areas more than we need breadth of coverage.”
Many of us have followed the rainbow trying to find the (mythical) 70% or 80% code coverage. Focus instead on improving quality where it will be most impactful to your customers and business.
“Why are we using our least experienced team members and contract resources to fix defects in our highest complexity code?”
This was thought to be the most cost effective means of fixing a large number of defects in a short time. It was also the primary source of “fixing a few problems and breaking something else”. Apply subject matter expertise commensurate with the level of complexity.
“Why are we doing this, why are we adding or changing this feature of the product”
This is a non-engineering activity, but it proved to have the largest positive impact on our team cohesiveness and culture. Understanding our product’s relative position in the marketplace made the team members cognizant of the value of the features they were building.
Do we have a strategy for investment and are we executing it?
This is two questions. The first is an easy one to answer. A strategy statement is easy to find somewhere in most organizations. Having a method to evaluate strategy attainment requires thoughtful effort to achieve.
See you on the journey!
The post Agile Chronicles (Composite Stories) – Agile Artifacts – Ephemeral v. Enduring Value appeared first on LeadingAgile.
I posted my slides for my Agile 2014 talk, Agile Projects, Program & Portfolio Management: No Air Quotes Required on Slideshare. It’s a bootcamp talk, so the majority of the talk is making sure that people understand the basics about projects. Walk before you run. That part.
However, you can take projects and “scale” them to programs. I wish people wouldn’t use that terminology. Program management isn’t exactly scaling. Program management is when the strategic endeavor of the program encompasses each of the projects underneath.
If you have questions about the presentation, let me know. Happy to answer questions.
The team is proud to announce the release of SonarQube 4.4, which includes many exciting new features:
- Rules page
- Component viewer
- New Quality Gate widget
- Improved multi-language support
- Built-in web service API documentation
With this version of SonarQube, rules come out of the shadow of profiles to stand on their own. Now you can search rules by language, tag, SQALE characteristic, severity, status (e.g. beta), and repository. Oh yes, and you can also search them by profile, activation, and profile inheritance.
Once you’ve found your rules, this is now where you activate or deactivate them in a profile – individually through controls on the rule detail or in bulk through controls in the search results list (look for the cogs). In fact, the profiles page no longer has its own list of rules. Instead, it offers a summary by severity, and a click through to a rule search.
Another shift in rule handling comes for what used to be called “cloneable rules”. We’ve realized that strictly speaking, these are really “templates” rather than rules, and now treat them as such.
Templates can no longer be directly activated in a profile. Instead, you create rules from them and activate those.
Component viewer
The component viewer also experienced major changes in this version. The tabs across the top now offer filtering, which controls what parts of the code you see (e.g. only show me the code that has issues), and decoration, which controls what you see layered on top of the code (show/hide the issues, the duplications, etc.).
A workspace concept debuts in this version. As you navigate from file to file through either code coverage or duplications, it helps you track where you are and where you’ve been.
A new Quality Gate widget makes it clearer just what’s wrong if your project isn’t making the grade. Now you can see exactly which measures are out of line:
Multi-language analysis was introduced in 4.2 and it just keeps getting better. Now we’ve added the distribution of LOC by language in the size widget for multi-language projects.
We’ve also added a language criterion to the Issues search:
To find this last feature, look closely at 4.4’s footer.
We now offer on-board API documentation.
So there you are, wrapping up another successful release planning session. Sprints are all laid out for the entire release. All the user stories you can think of have been defined. All the daunting challenges laid down. Compromises have been made. Dates committed to. Everyone contributed to the planning effort fully.
So why isn’t everyone happy? Let’s check in with the product owner: The product owner looks like somebody ran over his puppy. The team? They won’t make eye contact and they’re flinching like they’ve just spent hours playing Russian roulette. What’s up? Well, here’s the dynamic that typically plays out:
- The product owner has some fantasy of what they think they will get delivered as part of the release. This fantasy has absolutely no basis in reality; it just reflects the product owner’s hopes for what they think they can get out of the team (it’s just human nature). This is inevitably far beyond what the team is actually capable of. My rule of thumb? A team is typically capable of delivering about 1/3 of what a product owner asks for in a release. That’s not based on any metrics; it’s just an observation. However, more often than not, it seems to play out that way.
- The team is immediately confronted with a mountain of work they can’t possibly achieve in the time allotted – even under the most optimistic circumstances. It’s their job to shatter the dreams of the product owner. Of course, strangling dreams is hard work. Naturally enough, the product owner doesn’t give up easily. They fight tooth and nail to retain any semblance of their dream.
- After an hour, perhaps two, maybe even three or four (shudder), the battle is over.
I’m going to go out on a limb here and speculate that this is no one’s idea of a positive dynamic. But it seems to happen pretty often with agile projects. It sure doesn’t look like much fun. I’m pretty sure this isn’t in the Agile Manifesto. So how do we avoid this kind of trauma?
- The product owner needs to be a central part of the team. They need to live with the team, be passionate about the product, and witness what the team does daily. Fail to engage in any of this and a product owner loses touch with the work the team does and loses the ability to gauge its capabilities. Doing all of this is hard. There’s a reason the product owner is the toughest job in Scrum.
- The team needs to embrace their product owner as an equal member of the team. You have to let them in. Work together. Let go of the roles and focus on the work.
- Prepare for release planning in advance. There is no reason for it to be a rude surprise. Spend time grooming the backlog together. As a team.
- Don’t cave to pressure from upper management. Behind every product owner is a slavering business with an insatiable desire for product. Ooh, did I just write that?
Release planning doesn’t have to be a nightmare. OK, in theory…
There is a lot of interest in scaling Agile Software Development. And that is a good thing. Software projects of all sizes benefit from what we have learned over the years about Agile Software Development.
Many frameworks have been developed to help us implement Agile at scale. We have: SAFe, DAD, Large-scale Scrum, etc. I am also aware of other models for scaled Agile development in specific industries, and those efforts go beyond what the frameworks above discuss or tackle.
However, scaling as a problem is neither a software nor an Agile topic. Humanity has been scaling its activities for millennia, and very successfully at that. The Pyramids in Egypt, the Panama Canal in Central America, the immense railways all over the world, the Airbus A380, etc.
All of these scaling efforts share some commonalities with software and among each other, but they are also very different. I'd like to focus on one particular aspect of scaling that has a huge impact on software development: communication.
The key to scaling software development
We've all heard countless accounts of projects gone wrong because of a lack of communication (or inadequate, or just plain bad communication). And typically, these problems grow with the size of the team. Communication is a major challenge in scaling any human endeavor, and especially one - like software - that so heavily depends on successful communication patterns.
In my own work in scaling software development I've focused on communication networks. In fact, I believe that scaling software development is first an exercise in understanding communication networks. Without understanding the existing and necessary communication networks in large projects we will not be able to help those projects adapt. In many projects, a different approach is used: hierarchical management with strict (and non-adaptable) communication paths. This approach effectively reduces the adaptability and resilience in software projects.
Scaling software development is first and foremost an exercise in understanding communication networks.
Even if hierarchies can successfully scale projects where communication needs are known in advance (like building a railway network for example), hierarchies are very ineffective at handling adaptive communication needs. Hierarchies slow communication down to a manageable speed (manageable for those at the top), and reduce the amount of information transferred upwards (managers filter what is important - according to their own view).
In a software project those properties of hierarchy-bound communication networks restrict valuable information from reaching stakeholders. As a consequence one can say that hierarchies remove scaling properties from software development. Hierarchical communication networks restrict information reach without concern for those who would benefit from that information because the goal is to "streamline" communication so that it adheres to the hierarchy.
In software development, one must constantly map, develop and re-invent the communication networks to allow for the right information to reach the relevant stakeholders at all times. Hence, the role of project management in scaled agile projects is to curate communication networks: map, intervene, document, and experiment with communication networks by involving the stakeholders.
Scaling agile software development is - in its essential form - a work of developing and evolving communication networks.
Picture credit: John Hammink, follow him on twitter
What does aggressive decoupling look like?
Last post I talked about the failure modes of Scrum and SAFe and how the inability to encapsulate the entire value stream will inevitably result in dependencies that will kill your agile organization.
But Mike… at some level of scale, you have to have dependencies, right? Even if we are able to form complete cross-functional feature teams, we may still have features which have to be coordinated across teams, or at least technology dependencies which make it tough to be fully independent.
But Mike… you talk about having teams formed around both features and components… in this case, it is inevitable that you are going to have dependencies between front end and back end systems. Whatever we build on the front end, has to be supported on the back end.
What if you looked at each component, or service, or business capability as a product in and of itself? What if that product had a product owner guiding it as if it were a standalone product in its own right?
What if you looked at each feature that might possibly need to consume a component, or service, or business capability as the customer of said service, who had to convince the service to build on its behalf?
What if the component, service, or business capability team looked at each of the feature teams as their customer, and had the freedom to evolve its product independently to best satisfy the needs of all its customers?
What if the feature teams could only commit to market based on services that already existed in the services layer, and could never force services teams to commit based on a predetermined schedule?
What if feature teams could *maybe* commit to market based on services which were on the services teams near term roadmap, but did so at their own risk, with no guarantees from the service owner?
What if feature teams were not allowed to commit to market based on services that didn’t exist in the service, nor were on the near term roadmap, eliminating the ability to inject features into the service?
I think you’d have a collection of Scrum teams… some Scrum teams that were built around features and some Scrum teams that were built around shared services and components… each being treated as its own independent product, building on its own cadence under the guidance of its own PO.
There would be no coordination between the feature teams and the services teams, because each set of teams would be evolving independently, but with a general awareness of each other’s needs. The services teams develop service features to best satisfy the collective needs of their feature team customers.
I’m not suggesting this is something that most companies can go do today. There is some seriously intentional decoupling of value streams, technical architecture, business process, and org structure that has to happen before this model could be fully operational.
That said, if you want to have a fully agile, object oriented, value stream encapsulated organization, this is what it looks like. You not only have to organize around objects (features, services, components, business capabilities), but you have to decouple the dependencies and let them evolve independently.
The problems ALWAYS come in when you allow the front end to inject dependencies into the back end shared services. You will inevitably create bottlenecks that have to be managed across the software development ecosystem. Dependencies are bad; bottlenecks might be worse.
If we can create Scrum teams around business objects, work to progressively decouple these business objects from each other, and allow the systems to only consume what’s in place now, and never allow the teams to dictate dependencies between each other… I think you have a shot.
Do this, and you really have agile at scale.
I really enjoyed meeting new people and seeing so many old friends at Agile2014 in Orlando. Thank you to everyone who attended my session, asked questions and provided feedback, which encouraged me and gave me ideas for future events.
Here is the feedback for "Teaching Agile to Management":
"Your session's recorded attendance was 80 attendees (at start), 76 (in the middle) and 76 (at the end). 37 attendees left feedback.
"The feedback questions are based on a 5 rating scale, with 5 being the highest score. Your average ratings are shown below:
- Session Meets Expectations: 4.22
- Recommend To Colleague: 4.22
- Presentation Skills: 4.49
- Command Of Topic: 4.73
- Description Matches Content: 4.22
- Overall Rating: 4.24"
The slide deck is available for download here. The Word file for the "Role-ing Doughnut Game" is also available. I print the file on Avery labels (10 to a sheet). I measure and cut 8 cards per sheet out of card stock sheets to mount the labels. The poster for the game is also available for download. I order 3' x 4' posters from FedEx Office.
Please share your experiences in the comments and feel free to send any questions our way.
In today’s busy workplace, how do you make sure you’re collaborating effectively? Team emails and all-hands meetings — while good at keeping everyone up to speed — can be inefficient for managing day-to-day communications. To cut down on the status update clutter and inject more meaning into ongoing conversations, try using targeted notifications directed to those who […]
The post How to Boost Collaboration with Targeted Notifications appeared first on Blog | LeanKit.
In my continued playing around with R and meetup data I wanted to group a data table by two variables – day and event – so I could see the most popular day of the week for meetups and which events we’d held on those days.
I started off with the following data table:
> head(eventsOf2014, 20)
      eventTime                                               event.name rsvps            datetime       day monthYear
16 1.393351e+12                                          Intro to Graphs    38 2014-02-25 18:00:00   Tuesday   02-2014
17 1.403635e+12                                          Intro to Graphs    44 2014-06-24 18:30:00   Tuesday   06-2014
19 1.404844e+12                                          Intro to Graphs    38 2014-07-08 18:30:00   Tuesday   07-2014
28 1.398796e+12                                          Intro to Graphs    45 2014-04-29 18:30:00   Tuesday   04-2014
31 1.395772e+12                                          Intro to Graphs    56 2014-03-25 18:30:00   Tuesday   03-2014
41 1.406054e+12                                          Intro to Graphs    12 2014-07-22 18:30:00   Tuesday   07-2014
49 1.395167e+12                                          Intro to Graphs    45 2014-03-18 18:30:00   Tuesday   03-2014
50 1.401907e+12                                          Intro to Graphs    35 2014-06-04 18:30:00 Wednesday   06-2014
51 1.400006e+12                                          Intro to Graphs    31 2014-05-13 18:30:00   Tuesday   05-2014
54 1.392142e+12                                          Intro to Graphs    35 2014-02-11 18:00:00   Tuesday   02-2014
59 1.400611e+12                                          Intro to Graphs    53 2014-05-20 18:30:00   Tuesday   05-2014
61 1.390932e+12                                          Intro to Graphs    22 2014-01-28 18:00:00   Tuesday   01-2014
70 1.397587e+12                                          Intro to Graphs    47 2014-04-15 18:30:00   Tuesday   04-2014
7  1.402425e+12        Hands On Intro to Cypher - Neo4j's Query Language    38 2014-06-10 18:30:00   Tuesday   06-2014
25 1.397155e+12        Hands On Intro to Cypher - Neo4j's Query Language    28 2014-04-10 18:30:00  Thursday   04-2014
44 1.404326e+12        Hands On Intro to Cypher - Neo4j's Query Language    43 2014-07-02 18:30:00 Wednesday   07-2014
46 1.398364e+12        Hands On Intro to Cypher - Neo4j's Query Language    30 2014-04-24 18:30:00  Thursday   04-2014
65 1.400783e+12        Hands On Intro to Cypher - Neo4j's Query Language    26 2014-05-22 18:30:00  Thursday   05-2014
5  1.403203e+12  Hands on build your first Neo4j app for Java developers    34 2014-06-19 18:30:00  Thursday   06-2014
34 1.399574e+12  Hands on build your first Neo4j app for Java developers    28 2014-05-08 18:30:00  Thursday   05-2014
I was able to work out the average number of RSVPs per day with the following code using plyr:
> ddply(eventsOf2014, .(day=format(datetime, "%A")), summarise,
        count=length(datetime), rsvps=sum(rsvps), rsvpsPerEvent = rsvps / count)
        day count rsvps rsvpsPerEvent
1  Thursday     5   146      29.20000
2   Tuesday    13   504      38.76923
3 Wednesday     2    78      39.00000
The next step was to show the names of events that happened on those days next to the row for that day. To do this we can make use of the paste function like so:
> ddply(eventsOf2014, .(day=format(datetime, "%A")), summarise,
        events = paste(unique(event.name), collapse = ","),
        count=length(datetime), rsvps=sum(rsvps), rsvpsPerEvent = rsvps / count)
        day                                                                                                    events count rsvps rsvpsPerEvent
1  Thursday Hands On Intro to Cypher - Neo4j's Query Language,Hands on build your first Neo4j app for Java developers     5   146      29.20000
2   Tuesday                                        Intro to Graphs,Hands On Intro to Cypher - Neo4j's Query Language    13   504      38.76923
3 Wednesday                                        Intro to Graphs,Hands On Intro to Cypher - Neo4j's Query Language     2    78      39.00000
If we wanted to drill down further and see the number of RSVPs per day per event type then we could instead group by the day and event name:
> ddply(eventsOf2014, .(day=format(datetime, "%A"), event.name), summarise,
        count=length(datetime), rsvps=sum(rsvps), rsvpsPerEvent = rsvps / count)
        day                                              event.name count rsvps rsvpsPerEvent
1  Thursday Hands on build your first Neo4j app for Java developers     2    62      31.00000
2  Thursday       Hands On Intro to Cypher - Neo4j's Query Language     3    84      28.00000
3   Tuesday       Hands On Intro to Cypher - Neo4j's Query Language     1    38      38.00000
4   Tuesday                                         Intro to Graphs    12   466      38.83333
5 Wednesday       Hands On Intro to Cypher - Neo4j's Query Language     1    43      43.00000
6 Wednesday                                         Intro to Graphs     1    35      35.00000
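For readers more at home in Python, the same two-variable grouping can be sketched with pandas. The small inline data frame below is a hypothetical stand-in for the meetup data, with the same shape of columns as the R table above.

```python
import pandas as pd

# Hypothetical sample rows mirroring the day / event.name / rsvps columns.
events = pd.DataFrame({
    "day":   ["Tuesday", "Tuesday", "Thursday", "Thursday", "Wednesday"],
    "event": ["Intro to Graphs", "Intro to Graphs",
              "Hands On Intro to Cypher", "Hands On Intro to Cypher",
              "Intro to Graphs"],
    "rsvps": [38, 44, 28, 30, 35],
})

# Group by both day and event, then compute count, total, and mean RSVPs --
# the pandas analogue of the ddply call with two grouping variables.
summary = (events.groupby(["day", "event"])
                 .agg(count=("rsvps", "size"), rsvps=("rsvps", "sum"))
                 .assign(rsvpsPerEvent=lambda d: d["rsvps"] / d["count"])
                 .reset_index())
```

As with ddply, adding or removing a column from the `groupby` list controls how far the drill-down goes.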
There are too few data points for some of those to make any decisions but as we gather more data hopefully we’ll see if there’s a trend for people to come to events on certain days or not.
Did you know that according to job board indeed.com, openings for project managers (PMs) with Agile experience have grown more than 2,500% since 2005?
As more companies seek greater value from technology projects by making the switch from waterfall to Agile, it’s imperative that project managers maximize their value, too, by understanding their role in Agile projects and keeping their skills sharp. A recent global survey from PricewaterhouseCoopers (PwC) showed that 34% of PMs now use Agile methods, and a majority of PMs (62%) are certified Agile practitioners.
“Not only have organisations raised the bar in order to stay competitive in the turbulent business environment, but PM standards have also significantly increased … more practitioners are becoming certified in PM with an increased adaptation of Agile PM and EVM.” (PwC)

Get Accredited
Earning your credentials as a Project Management Institute Agile Certified Practitioner (PMI-ACP) is a great way to take your skills and versatility to the next level. If you’re preparing to take the PMI-ACP exam, Agile University has just the training you need to pass with flying colors.
Come spend a few days in beautiful Boulder, Colorado, to up your game on topics like:
- The Agile Manifesto
- Lean Basics
- Kanban Design and Value Stream Mapping
- Communication and Information
- Planning, Estimating and Adjustment Practices
- Iterative Risk Management
- Facilitation Techniques and Conflict Resolution
If you want to acquire a thorough understanding of Agile / Lean principles and practices, invest in high-quality professional training, and pass the PMI-ACP exam on the first try, this is the course for you.

Sign On Up
Join us November 4-5, 2014. Find out more and register here.
Can’t make this one? Check our course calendar to see upcoming dates for this and other great classes!

Rally Software
As promised in my previous post, I just pushed the first version of our "Angular Promise DSL" to GitHub. It extends AngularJS's $q promises with a number of helpful methods for building cleaner applications.
The project is a v1; it may be a bit rough around the edges in terms of practical applicability and documentation, but that's why it's open source now.
The repository is at https://github.com/fwielstra/ngPromiseDsl and is MIT-licensed. It's the first open-source project I've created, so bear with me. I am, of course, accepting pull requests and issues.
Welcome to our August newsletter. Do you also have a dislike for time tracking? Joanne Perold does, and she explains why in this month’s blog post….more. Kanban Training We still have spaces available on our upcoming Kanban Foundation and Advanced courses. Scrum Sense will once again be teaming up with LKU-accredited Kanban Trainer, Dr. Klaus […]
The post News update 2014/08 – 5 Reasons to Kill Time Sheets appeared first on ScrumSense.com.
Traditionally, project managers ask their colleagues: "How long will it take to build this?" They want to calculate costs, determine the length of the project, and know as early as possible how many resources it will need.
These questions all point in the same direction: project managers want to take the uncertainty out of the project. And projects have plenty of it. Do the team members know enough? Will we get the technology under control? Will the budget last for the whole project? Will we find the right features for the product? Will the customer even want it? Will the team find suitable solutions fast enough... and so on. In short, we try to predict the future while knowing full well that this is impossible. Everyone senses that a project means doing something new, something that has never existed and never been done before, so none of these questions can actually be answered. I used to open discussions with exactly these questions, wondering how a project manager could ever assume they could be settled at the start of a project. Then, at one of my talks, a listener asked me: "Is that really so? Aren't most projects in fact done by people who have done exactly the same thing before? In that case you can estimate the effort quite precisely." In such cases, he argued, you know exactly how long something takes.
There isn't much to say against that argument, is there? It is correct: if you do more or less the same thing over and over, you can make predictions. All the estimation approaches used by Scrum teams rest on this principle: once you have enough data about how long things take, you can indeed estimate effort. But wait: at that point such a team is no longer in project mode. The moment it performs recurring tasks, it is back in support, maintenance, or bug-fixing mode. And then you should immediately move from project management to classical process control and workflow management, and where does that lead? Exactly: to the Lean management of the Toyota Production System. You are better advised to look at how people work in call centers, manufacturing plants, logistics, and shopping centers. There, logisticians should set the tone: workflow has been optimally managed for 40 years using pull systems and one-piece flow. It took a long time for this knowledge to win out, but in the end it did. Many managers who wanted to dictate sales figures, invent demand numbers, and set production targets long refused to accept it: the recipient of the product, usually the customer, determines through their buying behavior how much a company should deliver, not assumptions about how much the customer might buy. Retail has learned this over the last ten years, and you can see the principle at work every day at the discount-store checkout.
This thinking has also found its way into software development. The massive shift of many software development departments toward Kanban, which today even culminates in tools like JIRA's approach to workflow control, can be explained this way, and it is a logical consequence. Where nothing new is developed, where no projects are run and the same old thing is simply repainted and sold again, nothing is called for but efficient workflow management: a support task here, a bug fix there.
There are studies, and IT managers, claiming that 70% of what is developed is maintenance. From my observation that is true. But why? Is it a cause or a symptom? I believe it is a symptom. When software development is run the way it still is in many companies, ending in gigantic mountains of technical debt, you quickly stop writing new features and are instead constantly busy repainting the old or making small, utterly insignificant changes. The wonderful presentation by Salesforce.com shows very impressively how quickly this happened even to an internet startup like Salesforce.com.
But back to estimating. In these environments it is in principle possible to estimate effort quite well and validly, because you are always doing the same thing anyway. But it is completely unnecessary, as the logistics industry has shown us. There, something else has prevailed: flow control, which uses statistical methods to determine throughput and the size of the input queue, and from these figures can derive delivery times very accurately.
So you no longer need to estimate; you know when something will be delivered. Kanban made these ideas popular for software development: service classes were introduced, and with these techniques and WIP limits the workload of teams can now be managed very well. Any estimating here is a waste of time and completely unnecessary. When I explain this, project managers often look at me in disbelief. The principles are simple, but unfortunately these insights run counter to common sense, and completely counter to what is taught in many companies about utilization, workflow management, and so on. It took 20 years for Toyota's ideas to really arrive in the automotive industry.
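The flow-control idea rests on Little's Law: average lead time equals average work in progress divided by average throughput, so a stable board predicts delivery dates without any estimating. A minimal sketch (the numbers below are made up for illustration, not from a real board):

```r
# Little's Law: average lead time = average WIP / average throughput
wip        <- 12   # work items currently on the board (queued + active)
throughput <- 3    # items completed per week, measured over recent weeks
lead_time  <- wip / throughput
lead_time
# 4: a new item entering the queue takes about 4 weeks to ship
```

This is also why WIP limits matter: lowering WIP while throughput stays constant directly shortens the predicted lead time.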
So if estimating is no longer necessary in these environments, what about real projects, that is, a context in which something is attempted that has never been attempted before? Over the last two years I have had the great fortune of working with customers who run such projects. There, things really are explored, tried out, and puzzled over: products are built that no one has ever seen before. It is a bit like the first flight to the moon; you simply do not know whether it will work. The only chance of getting results in these projects is to shape the world: you simply invent what you need, be it the development methods, the tools, or the necessary product ideas. This approach, however, carries a fundamental and unavoidable consequence: you can no longer predict when you will be done. Sure, you can set yourself goals and work toward them, but promising anything would be an illusion.
For things that are not too complex, resourceful designers have found a fairly simple way to document progress and commit to at least something: set a time box. This time limit creates the necessary pressure to keep working toward a goal, so you do not fool yourself and drift along without results. Time limits create focus: you do not try everything, you deliver the first possible progress, the first possible result. Can this time frame be structured sensibly? Certainly! Techniques such as Design Thinking or Scrum help teams structure exactly this time frame, at least to the extent that finding results becomes more likely, though never certain.
And now the paradox: these methods are well known. They are so successful that the most successful companies of the last decade openly say they rely on them. They are even known in companies that have so far worked very differently. Yet it is still tacitly assumed that you must know when, at what cost, and with what outcome you will definitely deliver something you do not even know yet. The new methods are applied in a context that has already admitted the task is unsolvable; people adopt the methods and at the same time repress that fact.
Estimating effort in these environments is simply impossible. Everyone knows it, and yet it is demanded again and again. Why? Because of the fallacy that estimating can fight one thing: the fear of failing in some way. In other words, it is about reducing risk, as if estimating could guarantee that no loss will occur.
But estimating is unsuited to minimizing risk, for at least these four reasons:
- Effort estimates are usually optimistic, which increases risk.
- Effort is seen as a cost factor. A high estimate may be a measure of risk, but economic interests immediately counteract it.
- We know from the work of Eliyahu Goldratt, author of "The Goal", the most influential book on the Theory of Constraints, that when effort estimates are too large, the estimated effort gets used up anyway. This increases the risk of blowing the project's buffers, and when an estimate really is too low, the deadlines break. The project becomes riskier.
- Effort estimates depend on who makes them, so they are no objective measure of risk.
All of this is well known. Nevertheless, effort estimates and the plans aligned to them create, again and again, the illusion that the risk has been banished. What really happens when effort is estimated: the risk is not assessed and evaluated, it is repressed and ignored. We have estimated, therefore it is banished.
What if we named the risk, looked it in the face, and made it tangible? What if we said from the start: we have brought together the best colleagues we have, and we let them attempt a venture with an uncertain outcome. We know that we cannot know when we will be done, but we set ourselves stages and keep checking what it takes to reach the goal. What if we talked openly about not knowing whether the technology is the right one, and therefore assumed from the start that we can change direction when new insights arrive? What if we minimized risk by always taking a small step and then checking whether we are on course?
Even then there would be no need to estimate. You simply look at how much you can spend and where you have to get to in order to justify the next investment. Venture capitalists work this way, and not only they; everyone uses exactly this strategy in their private life. You look at the resources you have, and then you start. Is that ideal? No, but it is the only way out for anyone who wants to bring innovative products to market.
For everyone who wants to face unpredictability, I have written a book: "Wie schätzt man in agilen Projekten – oder warum Scrum-Projekte erfolgreicher sind" (How to estimate in agile projects, or why Scrum projects are more successful).
Today we released a great new update for quickly viewing an item’s detail and history. We developed a modal for the item detail view: click an item’s title and a child window opens with detailed information for that item. You can review, comment on, and add attachments to tickets without losing your place on the main page. Losing context while reviewing items is a piece of feedback we’ve heard often.
This is my personal, humble feedback on the Agile Conference. I do draw broad conclusions, though, so feel free to share your own view in the comments.
I hadn’t attended an Agile conference for about five years; the last one was in Chicago. It was pretty good. The main innovations were Kanban and UX+Agile, though many sessions were still quite boring for any experienced agile practitioner. Now I’m in Orlando. The conference has become huge, with so many people around. But what about the sessions? In three days I attended exactly one session that was really interesting and useful, about Netflix culture on the DevOps track. All the others I attended were not useful: boring, merely OK, way too abstract, or completely trivial. Maybe I was just unlucky and missed all the good talks. Maybe, but I picked carefully, to be honest. I talked to some people and received mixed feedback, but in general it looks like the conference content is not great. The DevOps track looks very good, by the way, and I heard many good words about it.
How do I feel about all this, you ask? I personally see serious stagnation and a lack of innovation in the agile community. Don’t get me wrong: there are bright people with brilliant ideas, but it seems they are in opposition to the main trends. How did that happen?
Agile is about helping businesses kick ass. To do that, there should be innovation in several directions. We, as an agile community, should invent new ways to help businesses understand what is valuable and what is not, invent new development practices and try them in various contexts, and inspect organizations as a whole to invent new ways of detecting problems and solving them at the system level. But what do we have at the moment?
There are many talks about scaled Agile frameworks. I attended several sessions, and my impression is that the speakers have no clue how to really scale agility. The proposed frameworks are rather prescriptive and heavy; they reminded me of the RUP days. You really could create a good framework based on RUP, but there were few successful cases.
SAFe looks complicated, and in my opinion it does not address the root problems. We need real structural transformations, while SAFe prescribes specific recipes and claims they will work in almost any context. How is that possible? Everything is context-dependent, and that is why many agile transformation initiatives have failed and will fail. People just apply a recipe without deep thought, ignoring context-specific issues, and expect it to work. In many cases it won’t, and you can’t fix that without context awareness.
SAFe has many good practices inside. It can help companies initially, and you will see some tactical success, but I also think that in the long run SAFe is a strategic disaster. It may take five or more years to feel it, but I don’t believe a company will develop a true agile mindset by starting with SAFe. It can happen, but those will mostly be exceptional cases. The really bad thing is that companies will not notice the problem. With waterfall the problem is (now) obvious, while with SAFe they will have the illusion that they are truly agile when they are not.
So at the end of the day my perception is that the majority of speakers present abstract theoretical frameworks with extremely poor argumentation. Why might this work? In which contexts? No clue.
I also wonder why there are no talks about Kanban here. Is Kanban agile or not? Does the agile community have personal troubles with Kanban approaches? C’mon, folks, this separation is childish.
All that sounds like rants without solutions so far, so here are some actionable proposals for the next Agile Conference. This is my feedback:
- Add a decent mix of other disciplines. We can learn from complexity science, biology, sociology, sport, physics, and more. Entice people from these disciplines to really mix their practices with ours and finally invent something new. At the very least, invite them to speak about what they know, to stimulate our imagination and thinking by analogy. Invite Dave Snowden, finally, to present his controversial view on scaling. There should be more perspectives; we need greater diversity.
- Have more real-life experience reports with real practices that work in particular contexts. It will help us learn from each other and spread good practices. I know many good discussions fire up between people, but why not have them in sessions as well?
- There should be more science. People around the world are doing great research on group dynamics, development practices, cooperative games, and so on. Invite them to share their research.
- Invite bright business people to talk about marketing, agile workspaces, new hiring practices, strategy, and so on. It will finally help merge Agile and business. Nothing is separate; we should see the high-level pictures and learn from them.
- 75-minute talks? Are you kidding me? Nobody can hold an audience’s attention for more than 45 minutes. Split those talks, and make the workshops longer, since 75 minutes is not enough for a decent workshop. I’d like to see more TED-like talks, short and precise. Experiment with that at least. Inspect and adapt.
In short, the Agile Conference demands more invention, more real-life reports, more science, and a different format. The conference organization is just perfect, it really is; I can’t imagine better. The content, however, is below average, and that is embarrassing for an agile-minded community. We can do better.
The final thing is a slogan I saw yesterday. It is just unbearable to me: “Allow agile and waterfall work together”. WTF?
I thought we were working on replacing waterfall and changing the way organizations work. Do we, as a community, still think that is a good idea? Or are we starting to accept the status quo? I believe we are fucking not. There is no limit to perfection.
“Pirates are bold not safe” — These guys are doing something good
I generally like it because it provides a reasonable structure in a collaborative, canvas style.
However, to make it more appealing to me, I'd like to adjust it in two ways: generalise it beyond a UX-designer perspective, and reflect some slightly different assumptions about what I consider important for developing oneself and others. Specifically, I prefer a job-crafting approach.
I've created a template on Google Drive: