Marcus Blankenship and I wrote a follow-up piece to our first article, Discovery Projects Work for Agile Contracts. That article was about what to do when your client wants the benefits of agile but asks you to estimate everything in advance and commit to a fixed-price/fixed-scope (and possibly fixed-date) project. Fixing all of that is nuts.
The next article is Use Demos to Build Trust.
That post prompted much Twitter discussion about the purpose of estimates and trust. I’ll write a whole post on that because it deserves a thoughtful answer.
I remember the first time I heard the concept of done done. When a developer finishes coding, the story is done, but not done done. There are a number of other steps that have to be completed before it is really done, that is, done done. The team should come up with a definition of done at the start of a project. Typically it includes things like: coding is finished, unit testing was completed successfully, and the story was accepted by its author.
But how do you know when something is ready to start, when it's ready ready? Do you have a definition? I've seen work get delayed because a story was brought into an iteration before it was actually ready to be started. To avoid problems getting the work done in each iteration, the team should have a definition of when a story is ready to start.
So here are some ways to make sure your story is Ready-Ready:
- Ensure the story and acceptance criteria are clear. The story should be in the proper format: As a <role>, I want <goal> so that <benefit>.
- The size of the story is small enough to be completed within the iteration. If the story is too large, the team should look at breaking it up into smaller stories.
- There is obvious user value.
- The story is immediately actionable, meaning it has no unresolved dependencies. If your story requires some yet-to-be-completed action from another team, you don't want to include it in your iteration until that action is complete.
Taking this approach is a good way to ensure you are ready-ready.
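As an illustrative sketch only (the fields, thresholds, and names here are invented for this example, not a standard), a team could even encode its ready-ready checklist as a small script that flags stories that shouldn't enter the iteration yet:

```python
# Illustrative sketch: encoding a team's ready-ready checklist.
# All field names and the size threshold are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    acceptance_criteria: list = field(default_factory=list)
    estimate_points: int = 0          # 0 means "not yet sized"
    user_value: str = ""              # the "so that" benefit
    open_dependencies: list = field(default_factory=list)

def is_ready(story, max_points=8):
    """Return (ready, reasons): reasons explain why a story is not ready-ready."""
    reasons = []
    if not story.acceptance_criteria:
        reasons.append("no clear acceptance criteria")
    if story.estimate_points == 0 or story.estimate_points > max_points:
        reasons.append("not sized, or too large for one iteration")
    if not story.user_value:
        reasons.append("no obvious user value")
    if story.open_dependencies:
        reasons.append("blocked by: " + ", ".join(story.open_dependencies))
    return (not reasons, reasons)
```

The point isn't the tooling; it's that each check maps directly to one item in the checklist above, so the team can see at planning time exactly why a story isn't ready.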
We’re already familiar with how Story Maps are an excellent way to help development teams visualize the work involved in building a large product. They act as stepping-stones between the initial vision and the resulting deliverable user stories. But Story Maps can serve a similar purpose in Organizational Change, too! If the Story Map (or Organizational Change Map) is created as a collaborative activity between the Executives and the people who will implement the changes, the activity becomes a form of Catchball, as discussed in the previous post. The Story Map then becomes the Strategy.
But collaboration is the key. Both Vision and Strategy should be created with input from the whole organization. Often they're created through the Catchball / Story Map process mentioned above, but there are alternative approaches, such as the X-Matrix.
Some prefer to use the X-Matrix as a Strategy Deployment Tool – basically another way of visualizing the relationship between an objective, a strategy, and the tactics used to implement the strategy. Caveat: most of the examples you will find on the subject are around manufacturing and are heavily focused on metrics. Given the limitations of measurement in the world of software development (i.e. knowledge work), I don’t find that an X-Matrix adds much more than a good Story Map.
If you choose the Story Map method, break down a few items at a time into manageable changes. Changes at this level should be small enough that it’s clear which team(s) needs to be involved in making the change.
Case Study
At the WorldsSmallestOnlineBookStore, there are over one hundred people involved in making the quality changes that were implied by the vision! This is too large for even the most effective facilitator to handle in a single event. So John, the appointed ScrumMaster and Agile Coach, decides to run “Vision to Strategy” sessions in five groups. Each session includes at least one executive and a couple of other people who were participants in creating the original vision.
The five groups discover the following problems:
- Stack Ranking, affectionately known as Rank and Yank. Since its introduction two years ago, it has turned the workplace into a competition to get a bonus and avoid being fired. Team(s) involved: Organizational Improvement Team
- Full Regression Test Cycle is Too Long (four people take three weeks to run it). This means that even after a release is complete there is an extra three weeks before the software can be deployed. Team(s) involved: All Development Teams
- Hard to Make Changes because the Code Base is Brittle. Team(s) involved: All Development Teams
- Pressure to Deliver over Quality. Team(s) involved: Organizational Improvement Team
- Lack of Belief in Management’s Commitment. Team(s) involved: Organizational Improvement Team
- Every Sprint, new work results in net new bugs – i.e. “Bugs that escape the sprint”. Team(s) involved: All Development Teams
- Meaningless Performance Review Goals. Reviews happen too late to provide relevant feedback and improvement opportunities, and the goals they create go out of date so quickly that, if pursued, they might focus people in the wrong places. Team(s) involved: Organizational Improvement Team will work with HR
They reframe them as more positive goals:
Image attributions: Agile Pain Relief, Karl Scotland, Agile Pain Relief with elements from freepik.com
I’m giving the workshop Getting More out of Agile and Lean at the GOTO Berlin 2016 conference on November 16. In this agile workshop, you will learn agile and lean practices for teams and their stakeholders to develop the right products, deliver faster, increase quality, and create happy, high-performing teams.
What will you get out of this workshop?
- Effective practices for planning, daily stand-ups, product reviews and retrospectives
- Ideas for improving collaboration in teams and between teams and stakeholders
- Tips and tricks to improve your agile way of working
- Advice on selecting and applying agile and lean practices effectively
Retrospectives Exercises Toolbox - Design your own valuable Retrospectives
This workshop is intended for:
- Technical (team) leaders and Scrum Masters
- (Senior) Developers and Testers
- Product Owners and Project/Line Managers
- Agile and Lean Coaches
- Anybody involved in agile transformations
This workshop is given in collaboration with Trifork at the GOTO Berlin 2016 conference. You can register for my workshop and the GOTO Berlin Conference. Early bird tickets are available until September 7!
Linda Rising gave a great talk last night at Agile New England. Her topic was problem-solving and decision-making.
One of her points was to discuss the problem, out loud. When you talk, you engage a different part of your brain than when you think. For us extroverts, who speak in order to think, this might not be a surprise. (I often say that my practice for my talks is almost irrelevant. I know what I’m going to say. And, I feed off the energy in the room. Things come out of my mouth that surprise me.)
If you’re an introvert, you might be surprised. In fact, since you think so well inside your head, you might scoff at this. Yes, speech and problem-solving both work in your frontal lobe. And, your brain processes thought and speech differently.
Long ago, I was stuck on a problem. I went to my boss and he told me to talk to the duck.
“The duck?” I asked. I thought he’d lost his mind.
“Yes, this duck.” He pulled a yellow rubber duck off his shelf. “Talk to the duck.”
I looked at him.
“What are you waiting for? Do you want to take the duck back to your office? That’s okay.” He turned back to his computer.
I sat there for a few seconds.
“You don’t pray to the duck. You talk to the duck. Now, either start talking to the duck or take the duck. But, talk to the duck.”
I am happy to say that talking to the duck worked for me. I have used that technique often.
Sometimes, I talk to a person. All they have to do is say, “Oh,” or “Uh huh,” or some other acknowledgement that they still live and breathe. If I use one person too often, I suspect they’d prefer I talked to a duck.
If you are stuck on a problem, don’t do the same thing you did for the past 20 minutes. (That’s my maximum time to be stuck. Yours might be longer.) Talk to the duck.
If you want the Wikipedia reference, here it is: Rubber Duck Debugging. Talk on.
If you’ve ever tried to introduce a new tool, technology or technique into an existing team, you’ve probably been met with resistance. Chances are, you’ve been told “no,” with no real discussion.
It’s a natural reaction for people to resist change. There’s potential risk. There are learning curves. There’s a lot of emotional attachment to the way things are, and more.
If that’s the case, though, how do we bring new tools and technologies into a team?
In this episode of ThoughtsOnCode, I’ll share the technique that I’ve used in multiple companies, with multiple teams.
A common challenge faced by inexperienced Scrum teams is splitting user stories (in Scrum, Product Backlog Items or PBIs) so that they are granular enough for development. The INVEST model is a good way to test whether user stories are well written.
- I – Independent
- N – Negotiable
- V – Valuable
- E – Estimable
- S – Small
- T – Testable
Independent – Each user story should be independent of the others. This prevents overlap between items; moreover, it allows the team to implement them in any order.
Negotiable – The details of the work must be negotiable, both among the stakeholders and the team. Specific requirements and design decisions will be fleshed out during development. Many agile practitioners recommend writing user stories on a note card; this is intentional, so that only a limited amount of detail can be prescribed.
Valuable – Each user story must add business value to the product, the customer and/or the users’ experience.
Estimable – A good user story can be understood well enough by the team that they can estimate it: not accurately, but at a high level they perceive that it has a size. It is helpful to understand the relative effort compared to other user stories.
Small – A user story is not small if the team cannot get it done within a single Sprint. As large user stories are split into smaller items, greater clarity about the size and implementation is achieved, which improves the likelihood that the team will get it done within a Sprint.
Testable – Each user story should be testable; this is a common characteristic of well-written requirements. If the team cannot determine how the user story may be tested, it is an indication that either the desired functionality or the desired business value is not clear enough.
Vertical vs Horizontal Splitting
There are two common ways to split user stories: vertically or horizontally. A horizontal breakdown splits the item at an architectural component level: for example, front-end UI, databases, or backend services. A vertical slice, by contrast, results in working, demonstrable software that adds business value. It is therefore recommended to slice user stories vertically, so as to reduce dependencies and improve the team’s ability to deliver a potentially shippable product increment each sprint.
Splitting User Stories Example
As a customer I can pay for my order so that I receive the products
If the above user story were split in a vertical manner, it might be broken down into the various ways a customer can complete a payment, as follows:
As a customer I can make a credit card payment for my order so that I collect reward points on my credit card.
As a customer I can make a PayPal payment for my order so that I can securely complete my purchase without sharing credit card details with another retailer.
The key point to note in the vertically sliced user stories above is that each story passes the INVEST tests mentioned earlier, and therefore a Product Owner can prioritize these user stories based on customer needs. However, if a horizontal approach were used to split the user story (i.e. split by architectural layers and components), the implementation would result in working functionality only when all horizontal components are eventually integrated.
Breaking down by Workflow
Another approach commonly used to break down user stories focuses on the individual steps a user takes to achieve their end goal. That is, a user story that describes a long narrative or “user flow” through a system may be sliced into steps that represent portions of that flow. Continuing the example above of a customer making a purchase online, the user story can be broken down into the following:
As a customer I can review the items being purchased for my order so that I can be confident I’m paying for the correct items.
As a customer I can provide my banking information for my order so that I can receive the products I ordered.
As a customer I can receive a confirmation ID for my purchase so that I can keep track and keep a record of my purchase.
Other Methods
There are many other methods that can be used to break down larger user stories, such as:
- Breaking down by business rules
- Breaking down by happy / unhappy flow
- Breaking down by input options / platform
- Breaking down by data types or parameters
- Breaking down by operations (CRUD)
- Breaking down by test scenarios / test case
- Breaking down by roles
- Breaking down by ‘optimize now’ vs ‘optimize later’
- Breaking down by browser compatibility
Kudos to this article for inspiring the list above: blog.agilistic.nl.
From “What is a service (2016 edition)”, a sampling of what the term “Service” does not imply:
- Svc Fabric
- etc. pp.
We can apply a similar list to Microservices, where the term does not imply any technology. That’s difficult these days because so much marketecture conflates “Microservices” with some specific tool or product: “Simplify microservice-based application development and lifecycle management with Azure Service Fabric.” Well, you certainly don’t need PaaS to do microservices. And small is small enough when it’s not too big to manage, and no more – not pizza metrics or lines of code.
So microservices does not imply:
- Feature flags
- No more than 20 lines of code in deployed service
- Service Fabric
- AWS Lambda
Instead, focus more on the characteristics of a microservice:
- Focused around a business domain
- Technology agnostic API
Most of the other descriptions or prescriptions around microservices are really just side-effects of autonomy, and the prescribed technologies certainly aren’t a requirement for building a robust, scalable service.
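As a hedged sketch of what “focused around a business domain” with a “technology agnostic API” can look like (the service, function names, and pricing rule below are invented purely for illustration), the core idea is that the domain logic knows nothing about HTTP, queues, Docker, or any particular fabric; a thin adapter maps whatever transport you choose onto it:

```python
# Illustrative sketch only: a microservice's core is its business domain,
# exposed through a transport-agnostic boundary. All names are invented.

def price_order(order):
    """Pure domain logic for a hypothetical 'ordering' service."""
    subtotal = sum(item["qty"] * item["unit_price"] for item in order["items"])
    # Hypothetical business rule: 10% discount over a 100-unit subtotal.
    discount = 0.1 * subtotal if subtotal > 100 else 0.0
    return {"subtotal": subtotal, "discount": discount,
            "total": subtotal - discount}

def handle_request(payload):
    """Thin adapter: any transport (HTTP handler, queue consumer, CLI)
    can call this; swapping transports never touches price_order."""
    try:
        return {"status": 200, "body": price_order(payload)}
    except (KeyError, TypeError) as exc:
        return {"status": 400, "body": {"error": str(exc)}}
```

The technology-agnostic part is the shape of the boundary, not the language: you could put Service Fabric, Lambda, or a plain web server in front of `handle_request` without changing the domain code at all.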
My suggestion: go back to the DDD book, and read the Building Microservices book. Just as DDD wasn’t about entities and repositories, microservices aren’t about Docker. Once you do get the concepts, come back to the practitioners to see how they’re building applications with microservices, and see whether those tools might be a great fit. Just don’t cargo-cult microservices like so many did before with DDD and SOA.
How do you work with professional services, consulting, field engineers, etc. to make your product better? Do you just treat their inputs as yet another channel for feature requests, or do you engage them as an incredibly potent market-sensing capability?
Conversation Starter
I received an excellent and insightful question from one of my former students in DIT’s product management degree program (enrollment for the next cohort closes in a month). This student is now a VP of product, and kicked off a conversation with me about best practices for establishing a workflow in which product managers collaborate with professional services teams to improve the product. I’ve seen several companies try different ways to make this work, with one consistent attribute describing all of the approaches: not-visibly-expensive.
Two nights ago I was chatting with another colleague about how his team has been tasked with delivering a set of features, and not a solution to the underlying problem. As a result, he’s concerned about potential mis-investment of resources and the possibility of not genuinely solving the problem once the team is done with their tasks.
Combining the two conversations, I realized that there’s a common theme. When I look at how I’ve engaged with professional services folks, I found I’ve had success with a particular approach (which would also help my colleague).
First, let’s unpack a couple typical ways I’ve seen companies engage “the field” to get market data, and think through why a different approach could be better.
Just Ingest
One team I worked with managed their product creation process (discover, design, develop) within Atlassian’s Confluence (wiki) and JIRA (ticketing) systems. Product managers and owners would manage the backlog items as JIRA tickets. Bugs were submitted as JIRA tickets, and triaged alongside feature requests. There was a place where anyone (deployment engineers, for example) could submit feature requests based on what they were seeing on-site at customers. Product managers would then “go fishing” within that pool of tickets looking for the next big idea. This process did not have a lot of visible overhead, but suffered from a “throw it over the wall” dynamic, a lack of collaboration, and a well-established pattern (not just a risk) of good ideas lying fallow in the “pool” waiting to be discovered, evaluated, and implemented.
From the product team’s perspective, going fishing meant looking for needles in a haystack. The cognitive effort required to parse low-value tickets and duplicates makes it hard to apply critical thinking to any given idea. So in addition to good ideas that were never discovered, many were touched but passed over.
This is certainly better than “no information from the field,” but it emphasizes data and minimizes insight.
High Fidelity Connections
One team I worked with had a product owner who formerly worked as a field support engineer. This product owner reached out to her colleagues in the field regularly both socially (cultivating her network, and maintaining genuine connections with friends) and professionally – asking about trends, keeping her experience “current by proxy” as she realized her direct experience would grow stale with time.
This narrow-aperture channel was very high fidelity, but low in volume and limited in breadth of coverage.
Each idea that came in received thoughtful consideration, and the good ones informed product decisions. The weakness of this approach was lack of scale; it suffered from the danger of extrapolating “market sensing” from a narrow view of a subset of the market. Because this “just happened” within the way the product owner did her work, it appeared to accounting to be “free.” Many good ideas were missed, presumably, because they didn’t happen to come to the attention of this product owner’s network.
I put this in the bucket of good (and better than just ingesting), but still falling short of the objective of a product manager.
A product manager’s goal is to develop market insights, not collect market data.
- The first approach, while easy to institutionalize, had so much noise that you couldn’t find the signals.
- The second approach had a great (data) signal-to-noise ratio, but the signal was constrained by limited bandwidth, and only worked because of the product manager’s unique background, approach, and interpersonal skills.
There is another approach I’ve used with a few teams to effectively generate product insights based on real-world observations from professional services team members. The challenge with this approach is that the expense is visible – you’re pulling people out of the field and off their accounts for a day or two. On one sufficiently high-profile project, a workshop date was scheduled (~5 weeks in advance) and people were “told” to come. They planned travel, managed customer commitments, etc. We booked a large room for two days and rolled up our sleeves. On another project, we opportunistically scheduled a half-day session the day after an all-hands quarterly meeting that brought everyone into the office anyway. The cost of the “extra day” was a lot lower than the cost of a standalone event.

I’ve run two types of workshops that were very effective for this. The first frames problems in a broader context, and the second explores alternatives and opportunities in a more targeted exercise. Ironically, the tighter targeting leverages divergent thinking as well as convergent thinking, while the broader framing is purely convergent.

The first workshop is a co-opted customer journey mapping exercise. I say co-opted because while I go through very many of the same steps, I am not attempting to improve the experience; I’m attempting to understand the nature of, and the relative importance of solving, the problems a customer faces in the course of doing what they do while interacting with our product. Without going into the specifics of running the workshop, the high level looks like the following:
- Start out with a straw-man of what you believe the customer’s journey looks like – a storyboard is a good tool for making a visceral, engaging touchpoint for each step in the journey. Review with the team and update the steps (add missing steps, re-order as appropriate, remove irrelevant and tag optional steps).
- [Might not be needed, but worked when I did it] Start out with key personas identified, representing the customers for whom we are building product. Workshop participants will be capturing their perspectives on the relative importance of problems from the point of view of those personas.
- Within each step, elicit from the field all of the problems a customer faces within each of those steps.
- Have the participants in the workshop prioritize the relative importance of each problem within each step (the 20/20 innovation game works great for this)
- Have the participants prioritize the relative importance of “improving any particular step” relative to improving any other step. (Fibonacci story-pointing works well for this)
- Record / take notes of the conversations – particularly the discussions where the participants are arguing about relative priority / relative importance. Those conversations will uncover significant learnings that influence your thinking, and establish focused questions to which you will want answers later. Before the workshop, you didn’t know which questions you needed to ask.
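The two prioritization steps above can be sketched in code. This is purely illustrative (the voting scale, participant names, and items are hypothetical, and this is only the tallying, not the 20/20 innovation game or Fibonacci pointing themselves): each participant assigns points to each problem or step, and the facilitator aggregates the votes into a ranked list.

```python
# Illustrative sketch: tallying participants' relative-importance votes
# into a single ranked list. The point values here are hypothetical.
from collections import defaultdict

def rank_items(votes):
    """votes: iterable of (participant, item, points) tuples.
    Returns (item, total_points) pairs, highest total first."""
    totals = defaultdict(int)
    for _participant, item, points in votes:
        totals[item] += points
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

In practice the ranking matters far less than the arguments participants have while producing it; that is why the note-taking step above is where the real insight comes from.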
Professional services folks have massive amounts of customer data and insight – they only lack the (product management) skills to transform that insight into something usable by a product team.
The best way I’ve found to get value from that insight, in a repeatable way across teams and individuals, is to incorporate running workshops that force teams to articulate what the customers are doing with the product (what are their goals and challenges).
When asking the questions this way, you get the answers you need. By doing it in a collaborative workshop, you get more and better contributions from each of the team members than you would get through a series of interviews.
In this guest blog post on BenLinders.com Andrew Mawson from Advanced Workplace Associates talks about their ongoing research on cognition. The aim of that research is to provide guidelines that help knowledge workers do the right things to maximise their cognitive performance.
Cognition is just a scientific term for the functioning of the brain. Until recently, measuring the effectiveness of the brain was a difficult challenge requiring laboratory conditions and very expensive equipment. However, over the last few years, researchers have designed software that reliably measures the performance of different parts of the brain.
The brain has many different functions, known as ‘domains’. Five of these domains seem to be most important, and they can be measured using software.
The primary domains are:
- Attention: the ability to focus one’s perception on target visual or auditory stimuli and filter out unwanted distractions.
- Executive functioning: the ability to strategically plan one’s actions, abstraction, and cognitive flexibility – the ability to change strategy as needed.
- Psychomotor Speed and Accuracy: reaction time / processing speed – related functions that deal with how quickly a person can react to stimuli and process information.
- Episodic Memory: the ability to encode, store, and recall information. In most studies memory is further divided into recognition, recall, verbal, visual, episodic, and working memory. Each type of memory has specific tasks associated with that memory function.
- Working Memory: the system responsible for the transient holding and processing of new and already-stored information; an important process for reasoning, comprehension, learning and memory updating.
You can probably see that cognition and these domains matter hugely if you are involved in knowledge work. If, for example, your ‘Attention’ domain is not as effective as it could be, your ability to concentrate in meetings and whilst reading could be lower than somebody with a better-performing ‘Attention’ domain. Imagine you were in a meeting with 4 other people and you missed a vital piece of the discussion. Regardless of how good your memory might be, if the information isn’t getting as far as your memory, you won’t be able to recall it. So at some future date, when you are dealing with one of those four colleagues, they may well assume you absorbed the same information in the meeting as they did… but in fact you didn’t. You can probably see how this can lead to confusion: ‘Was that guy in the same meeting as you and I?’
Also, the environment in which you work may matter more to you if your ‘Attention’ domain is weaker than someone else’s. You may, for instance, have a greater need for a distraction-free environment than someone whose concentration allows them to block everything else out.
You can probably see that, depending on your role, different domains have greater significance to your effectiveness. If you are in a role where accuracy and speed are vital – perhaps you are an airline pilot or an accountant – then the ‘Psychomotor Speed and Accuracy’ domain may be critical to you. If you are a senior leader involved in determining strategy or planning, then ‘Executive functioning’ will be more critical.
You can see pretty quickly that, in a world where the brains of your people are your key tools for generating value, the effectiveness of these domains matters enormously.
What we wanted to do in our research was to examine all the academic studies on ‘cognition’ from around the world, establish what makes the most difference to the performance of different parts of the brain, identify what advice could be provided to make improvements, and then come up with guidelines that help people do the right things to maximise their cognitive performance.
So over the next 6 months we’re going to reveal the results of our study on our Cognitive Fitness webpage, providing guidelines that you can use to improve your own cognitive performance and that of your people.
On numerous occasions I’ve observed long-time members of the Agile community complain about misinterpretations of what Agile means and how it is performed. Frequently this is precipitated by yet another blog post about how terrible Agile is, and how it damaged the life of the blogger. Sometimes it’s triggered by a new pronouncement of THE way to practice Agile software development, often in ways that are hardly recognizable as Agile. Or THE way to practice software development is declared post-Agile, as if Agile is now obsolete and ready to be tossed in the trash bin.
The responses are both predictable and understandable. “If they’d only listen to what we actually said.” “Of course we don’t mean that.” “That’s not the Agile I know.” “Let’s take back Agile from those who misrepresent it!” It’s frustrating when people take terms you use and mangle the ideas that they represent to you into something unrecognizable and undesirable. I empathize with people making that response.
Recently I observed a similar situation where the shoe was on the other foot. I observed a long-time member of the Agile community describe a concept from another community where I’ve spent a lot of time and effort. The description was so far off the mark that I never would have guessed the concept that was being described.
As nearly as I can tell, they had formed a working definition from context, and from that definition had rejected the concept out of hand. They rejected it so forcefully that it seemed to taint their opinion of everyone connected with that concept. Indeed, “taint” may be insufficient, as they expressed their opinion not as “this person believes…” or “this person behaves…,” but as “this person is….”
I found this profoundly sad from several angles.
The concept is one that I’ve studied and used for years, and have developed layers of understanding that grow deeper over time. The person being dismissed is one I consider a friend and a mentor. It’s someone from whom I’ve learned a great deal over time, and that learning has significantly enriched my life.
The person making these comments is also someone I consider a friend. It distresses me greatly to watch one friend disparage another. I will generally defend the friend who is not there, as I did in this case and have done with regard to the other friend in other situations. This, of course, makes me a surrogate target representing the friend who is not there, and that increases the emotional magnitude of my distress.
The behavior of the friend who was present was strikingly similar to the behavior I’ve observed that same friend rail against. In the situations where they were defending the concepts of Agile software development from what I considered unwarranted attack, I had felt a close affinity with what they were saying.
Now, seeing the same person taking the opposite role in a different context, I was having a hard time reconciling the difference in their behavior in the two situations. “They should know better than to dismiss a concept they don’t understand!”
Flashback to the year 2000. I was taking a coffee-break with a couple of colleagues at work. We were making jokes, being witty as software developers are wont to do. At one point in the conversation I used the phrase “Extreme Programming” as the punchline to a joke.
One colleague asked, “Extreme Programming? What’s that?”
“I don’t really know.” I had been researching Design Patterns on the Portland Pattern Repository and had seen the term being heatedly discussed, but had considered it noise in the way of my study of Design Patterns. The fact that there were obvious arguments about it, and that the term seemed silly on its face, had led me to dismiss it out of hand. “I guess I should find out.”
This was the start of my study of Agile software development. I don’t know why my reaction, when confronted with my ignorance, was to enquire more deeply rather than defend my ignorance. I doubt that I react in that manner all the time. That particular reaction, though, has been hugely valuable for me. In many ways, it changed the direction of my life.
It’s not the term used, the name of the concept, that counts. It’s learning the nuances of the concept, starting with “Why would someone advocate this concept?” Assuming the answer to that question is that they’re an idiot leads nowhere productive. Investigating with curiosity often does.
What could I do about people who dismiss valuable concepts out of ignorance? I don’t have a good answer for that. Perhaps ignoring the situation is the easiest non-negative response I can take. Arguing never seems to help, in my experience.
But when the shoe is on the other foot and someone suggests something that seems ridiculous at first glance, asking “Hmmm… What does that mean?” has served me better than rejection. At worst it goes nowhere and I’m left with “I don’t know.” Sometimes, however, it has opened my eyes to possibilities that I’d not yet imagined.
Earlier this week, I ran a workshop at the first ever Agile Europe conference organised by the Agile Alliance in Gdansk, Poland. As described in the abstract:
Architects and architecture are often considered dirty words in the agile world, yet the Architect role and architectural thinking are essential amplifiers for technical excellence, which enable software agility.
In this workshop, we will explore different ways that teams achieve Technical Excellence and explore different tools and approaches that Architects use to successfully influence Technical Excellence.
During the workshop, the participants explored:
- Examples of Technical Excellence
- How to define Technical Excellence
- The role of the Architect in agile environments
- The broader responsibilities of an Architect working in agile environments
- The specific behaviours and responsibilities of an Architect that help or hinder Technical Excellence
What follows are the results of the collective experiences of the workshop participants during Agile Europe 2016.
- A set of coding conventions and standards that are shared, discussed, and abided by within the team
- Introducing more formal code reviews worked wonders; code quality enabled by code reviews, user testing, and coding standards; a peer code review process
- Software modeling with UML
- The first time we used an in-memory search index to solve severe RDBMS performance problems
- If Scrum is used, a good technical Definition of Done (DoD) is visible and applied
- Shared APIs for internal and external consumers
- Introducing a ‘no estimates’ approach and delivering software/features well enough to be allowed to continue with it
- Microservice architecture with Docker
- Team spirit
- Listening to others (not “my idea is the best”)
- Keeping a project/software alive and used in production through excellent customer support (almost exclusively)
- “The art must not suffer” as attitude in the team
- Thinking wide!
- Dev engineering into requirements
- Problems clearly and explicitly reported (e.g. Toyota)
- Using the most recent libraries and the ability to upgrade
- Right tools for the job
- Frequent availability of “something” working (like a daily build that may have incomplete functionality but works in principle)
- Specification by example
- Setting up the technical environment for new software; new team members quickly introduced to the project (a clean, straightforward set-up)
- Conscious pursuit of Technical Excellence by the team through this being discussed in retros and elsewhere
- Driver for a device executed on the device
- Continuous learning (discover new tech), methodologies
- Automatic deployment; DevOps tools; use of CI, CD, and unit testing with a TDD methodology; the first implementation of CD, in 2011, on a project I worked on; a multi-layered CI grid; a CI environment for all services; Continuous Integration and Delivery (with tools used daily to support them)
- Measuring quality (static analysis, test coverage); static code analysis integrated into the IDE
- Fail fast approach, feedback loop
- Shader stats (a statistical approach to compiler efficiency)
- Lock-free multithreaded scheduling algorithm
- Heuristic algorithm for multithreaded attribute deduction
- It is easy to extend the product without modifying everything, modularity of codebase
- Learn how to use something complex (in depth)
- Reuse over reinvention/reengineering
- Ability to predict how a given solution will work/consequences
- Good work with small effort (efficiency)
- Simple design over all-in-one; it’s simple to understand what the technology really does; the architecture of the product fits on a whiteboard
More than 30 years ago Hirotaka Takeuchi and Ikujiro Nonaka wrote an article titled ‘The New New Product Development Game,’ which compares product development to rugby. This year’s NFL draft inspired me to make a similar analogy to American football. This is the first in a series of articles comparing agile and football around the major events:
- Draft Day
- Training Camp
- Kickoff Game
- Super Bowl
A football organization has a team of coaches with different experiences and strengths who are experts at football. Not only do they know the rules, but they specialize in one area (offense, defense, or special teams). Some are very inspirational, some are great with the owners of the team, and some thrive on being in the trenches with the players. To be successful, they have to know what makes their players tick.
- Agile coaches are experts in agile and gravitate to various areas of focus (enterprise agile, new transformations, team level coaching, engineering best practices, etc.)
Team management and coaches are given a budget to spend. On draft day, they make their top college picks and trades based on position, experience, potential, and cost. The players receive a salary for the season, so the cost of their salaries is fixed.
- Agile teams are fixed and therefore the cost of labor doesn’t change.
Coaches recruit a cross-functional team for the different positions needed for the game (quarterback, receiver, guard, tackle, kicker, etc.).
- Agile teams are made up of individuals with all the skill sets needed to deliver value (developer, tester, user experience, customer representation, DevOps, etc.)
The team is made up of multiple individuals that play the same position so the coaches have options when deciding who should participate in each play.
- Agile teams can also have individuals with the same skill set. However, the team, not the coach or ScrumMaster, decides who should work on what.
Players are designated as first string or second string to indicate the stronger player. Even so, teams don’t want a single point of failure, so they make sure all strings are as prepared and as skilled as they can be.
- The agile team succeeds together and fails together so it’s in their best interest to build up the weakest link by pairing the stronger member with the weaker.
Each player has a main position and a secondary position so players can help out in times of need.
- A team member may primarily be a developer, but can help out by executing test cases (not for their code of course) or with documentation.
The team captain designation is a team appointed position indicating the player is a leader on and off the field.
- Some agile teams may also designate a member as a team lead. That individual performs work alongside the other teammates and provides guidance both within and outside the team.
Stay tuned for the rest of the articles in this series:
- Training Camp – July
- Kickoff Game – September
- Super Bowl – February
Find out how VersionOne can partner with you to build winning agile teams for successful agile transformations.
In SonarQube 5.5 we adopted an evolved quality model, the SonarQube Quality Model, that takes the best from SQALE and adds what was missing. In doing so, we’ve highlighted project risks while retaining technical debt.
Why? Well, SQALE is good as far as it goes, but it’s primarily about maintainability, with no concept of risk. For instance, if a new, blocker security issue cropped up in your application tomorrow, under strict adherence to the SQALE methodology you’d have to ignore it until you had fixed all the Testability, Reliability, Changeability, etc. issues. In reality, new issues (i.e. leak-period issues) of any type are more important than time-tested ones, and new bugs and security vulnerabilities are the most important of all.
Further, SQALE is primarily about maintainability, but the SQALE quality model also encompasses bugs and vulnerabilities. So those important issues get lost in the crowd. The result is that a project can have blocker-level bugs, but still get an A SQALE rating. For us, that was kinda like seeing a green light at the intersection while cross-traffic is still flowing. Yes, it’s recoverable if you’re paying attention, but still dangerous.
So for the SonarQube Quality Model, we took a step back to re-evaluate what’s important. For us it was these things:
- The quality model should be dead simple to use
- Bugs and security vulnerabilities shouldn’t be lost in the crowd of maintainability issues
- The presence of serious bugs or vulnerabilities in a project should raise a red flag
- Maintainability issues are still important and shouldn’t be ignored
- The calculation of remediation cost (the use of the SQALE analysis model) is still important and should still be done
To meet those criteria, we started by pulling Reliability and Security issues (bugs and vulnerabilities) out into their own categories. They’ll never be lost in the crowd again. Then we consolidated what was left into Maintainability issues, a.k.a. code smells. Now there are three simple categories, and prioritization is easy.
We gave bugs and vulnerabilities their own risk-based ratings, so the presence of a serious Security or Reliability issue in a project will raise that red flag we wanted. Then we renamed the SQALE rating to the Maintainability rating. It’s calculated based on the SQALE analysis model (technical debt) the same way it always was, except that it no longer includes the remediation time for bugs and vulnerabilities.
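To make the calculation concrete, here is a minimal sketch of how a debt ratio maps to a letter rating. It assumes SonarQube’s default rating grid (5%, 10%, 20%, 50%); the function name and parameters are illustrative, not part of any SonarQube API.

```python
def maintainability_rating(remediation_cost: float, development_cost: float) -> str:
    """Map the technical debt ratio to a letter rating.

    Both costs are in the same unit (e.g. minutes of effort); thresholds
    follow SonarQube's default rating grid of 0.05, 0.1, 0.2, 0.5.
    """
    ratio = remediation_cost / development_cost
    if ratio <= 0.05:
        return "A"
    if ratio <= 0.10:
        return "B"
    if ratio <= 0.20:
        return "C"
    if ratio <= 0.50:
        return "D"
    return "E"

# A project with 4 hours of code-smell debt on 100 hours of development
# effort has a 4% debt ratio, which rates an A.
print(maintainability_rating(4, 100))
```

Note that under this model only code smells feed the ratio; bugs and vulnerabilities are rated separately.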
To help enforce the new quality model, we updated the default Quality Gate:
- 0 New Bugs
- 0 New Vulnerabilities
- New Code Maintainability rating = A
- Coverage on New Code >= 80%
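The four conditions above can be sketched as a single check over leak-period metrics. The dictionary keys and function name below are illustrative placeholders, not SonarQube metric keys or API calls.

```python
def gate_passes(new_code: dict) -> bool:
    """Evaluate the default Quality Gate against new-code (leak period) metrics.

    Keys are illustrative names chosen for this sketch.
    """
    return (new_code["new_bugs"] == 0
            and new_code["new_vulnerabilities"] == 0
            and new_code["new_maintainability_rating"] == "A"
            and new_code["new_coverage"] >= 80.0)

# A leak period with no new bugs or vulnerabilities, an A maintainability
# rating, and 85% coverage on new code passes the gate.
leak = {"new_bugs": 0, "new_vulnerabilities": 0,
        "new_maintainability_rating": "A", "new_coverage": 85.0}
print(gate_passes(leak))
```

Because every condition looks only at the leak period, the gate stays green on legacy debt but trips the moment new code introduces a bug, a vulnerability, or poorly covered changes.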
The end result is an understandable, actionable quality model you can master out of the box; quality model 2.0, if you will. Because managing code quality should be fun and simple.