
Blogs

Agile Workshop at GOTO Berlin

Ben Linders - Sat, 06/04/2016 - 10:20

I’m giving the workshop Getting More out of Agile and Lean at the GOTO Berlin 2016 conference on November 16. In this agile workshop you will learn Agile and Lean practices that help teams and their stakeholders develop the right products, deliver faster, increase quality, and create happy, high-performing teams.

What will you get out of this workshop?

  • Effective practices for planning, daily stand-ups, product reviews and retrospectives
  • Ideas for improving collaboration in teams and between teams and stakeholders
  • Tips and tricks to improve your agile way of working
  • Advice on selecting and applying agile and lean practices effectively



This workshop is intended for:

  • Technical (team) leaders and Scrum masters
  • (Senior) Developers and Testers
  • Product Owners and Project/Line Managers
  • Agile and Lean Coaches
  • Anybody involved in agile transformations

This workshop is given in collaboration with Trifork at the GOTO Berlin 2016 conference. You can register for my workshop and the GOTO Berlin Conference. Early bird tickets are available until September 7!

Categories: Blogs

Tell Your Problems to the Duck

Johanna Rothman - Fri, 06/03/2016 - 18:51

Linda Rising gave a great talk last night at Agile New England. Her topic was problem-solving and decision-making.

One of her points was to discuss the problem, out loud. When you talk, you engage a different part of your brain than when you think. For us extroverts, who speak in order to think, this might not be a surprise. (I often say that my practice for my talks is almost irrelevant. I know what I’m going to say. And, I feed off the energy in the room. Things come out of my mouth that surprise me.)

If you’re an introvert, you might be surprised. In fact, since you think so well inside your head, you might scoff at this. Yes, speech and problem-solving both work in your frontal lobe. And, your brain processes thought and speech differently.

Rubber ducks

Long ago, I was stuck on a problem. I went to my boss and he told me to talk to the duck.

“The duck?” I asked. I thought he’d lost his mind.

“Yes, this duck.” He pulled a yellow rubber duck off his shelf. “Talk to the duck.”

I looked at him.

“What are you waiting for? Do you want to take the duck back to your office? That’s okay.”  He turned back to his computer.

I sat there for a few seconds.

“You don’t pray to the duck. You talk to the duck. Now, either start talking to the duck or take the duck. But, talk to the duck.”

I am happy to say that talking to the duck worked for me. I have used that technique often.

Sometimes, I talk to a person. All they have to do is say, “Oh,” or “Uh huh,” or some other acknowledgement that they still live and breathe. If I use one person too often, I suspect they would prefer that I talked to a duck.

If you are stuck on a problem, don’t do the same thing you did for the past 20 minutes. (That’s my maximum time to be stuck. Yours might be longer.) Talk to the duck.

If you want the Wikipedia reference, here it is: Rubber Duck Debugging. Talk on.

Categories: Blogs

How To Bring New Tech Into Your Team

Derick Bailey - new ThoughtStream - Fri, 06/03/2016 - 17:34

If you’ve ever tried to introduce a new tool, technology or technique into an existing team, you’ve probably been met with resistance. Chances are, you’ve been told “no,” with no real discussion.

It’s a natural reaction for people to not want change. There’s potential risk. There are learning curves. There’s a lot of emotional attachment to the way things are, and more.

If that’s the case, though, how do we bring new tools and technologies into a team?

In this episode of ThoughtsOnCode, I’ll share the technique that I’ve used in multiple companies, with multiple teams.

Categories: Blogs

Splitting User Stories

Learn more about transforming people, process and culture with the Real Agility Program

A common challenge faced by inexperienced Scrum teams is splitting user stories (in Scrum, Product Backlog Items or PBIs) so that they are granular enough for development. The INVEST model is a good way to test whether user stories are well written.

  • I – Independent
  • N – Negotiable
  • V – Valuable
  • E – Estimable
  • S – Small
  • T – Testable

Independent – Each user story must be independent of each other. This prevents any overlap between the items; moreover, it allows the team to implement them in any order.

Negotiable – The details of the work must be negotiable, both among the stakeholders and within the team. Specific requirements and design decisions will be fleshed out during development. Many agile practitioners recommend writing user stories on a note card — this is intentional, so that only a limited amount of detail can be prescribed.

Valuable – Each user story must add business value to the product, the customer and/or the users’ experience.

Estimable – A good user story can be understood well enough by the team that they can estimate it — not accurately, but at a high level they can perceive that it has a size. It is helpful to understand the relative effort as compared to other user stories.

Small – A user story is not small if the team cannot get it done within a single Sprint. As large user stories are split into smaller items, greater clarity about the size and implementation is achieved, which improves the likelihood that the team will get it done within a Sprint.

Testable – Each user story should be testable; this is a common characteristic of well written requirements. If the team cannot determine how the user story may be tested, it is an indication that either desired functionality or the desired business value is not clear enough.

Vertical vs Horizontal Splitting

There are two common ways to split user stories: vertically or horizontally. A horizontal breakdown splits the item at an architectural component level: for example, front-end UI, databases, or backend services. A vertical slice, by contrast, results in working, demonstrable software which adds business value. It is therefore recommended to slice user stories vertically, so as to reduce dependencies and improve the team’s ability to deliver a potentially shippable product increment each Sprint.

Splitting User Stories Example

As a customer I can pay for my order so that I receive the products

If the above user story were split vertically, it might be broken down into the various ways a customer can complete a payment, as follows…

As a customer I can make a credit card payment for my order so that I collect reward points on my credit card.

And/or

As a customer I can make a PayPal payment for my order so that I can securely complete my purchase without sharing credit card details with the retailer.

The key point to note in the vertically sliced user stories above is that each story passes the INVEST tests mentioned earlier, and therefore a Product Owner can prioritize these user stories based on customer needs. However, if a horizontal approach were used to split the user story (i.e. split by architectural layers and components), then the implementation of such requirements would result in working functionality only when all of the horizontal components are eventually integrated.

Breaking down by Workflow

Another approach that is commonly used to break down user stories focuses on the individual steps a user takes in order to achieve their end goal — that is, a user story which describes a long narrative or “user flow” through a system may be sliced into steps which represent portions of that flow. Continuing from the example above of a customer making a purchase online, the user story can be broken down into the following:

As a customer I can review the items being purchased for my order so that I can be confident I’m paying for the correct items.

As a customer I can provide my banking information for my order so that I can receive the products I ordered.

As a customer I can receive a confirmation ID for my purchase so that I can keep track and keep a record of my purchase.

Other Methods

There are many other methods that can be used to break down larger user stories, such as:

  • Breaking down by business rules
  • Breaking down by happy / unhappy flow
  • Breaking down by input options / platform
  • Breaking down by data types or parameters
  • Breaking down by operations (CRUD)
  • Breaking down by test scenarios / test case
  • Breaking down by roles
  • Breaking down by ‘optimize now’ vs ‘optimize later’
  • Breaking down by browser compatibility

Kudos to this article for inspiring the list above: blog.agilistic.nl.

Other Helpful Resources

The Hamburger Method
User Stories and Story Splitting at AgileAdvice.com

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post Splitting User Stories appeared first on Agile Advice.

Categories: Blogs

What Microservices Is Not

Jimmy Bogard - Fri, 06/03/2016 - 16:52

From “What is a service (2016 edition)”, a list of what the term “Service” does not imply:

  • “Cloud”
  • “Server”
  • “ESB”
  • “API”
  • XML
  • JSON
  • REST
  • HTTP
  • SOAP
  • WSDL
  • Swagger
  • Docker
  • Mesos
  • Svc Fabric
  • Zookeeper
  • Kubernetes
  • SQL
  • NoSQL
  • MQTT
  • AMQP
  • Scale
  • Reliability
  • “Stateless”
  • “Stateful”
  • OAuth2
  • OpenID
  • X509
  • Java
  • Node
  • C#
  • OOP
  • DDD
  • etc. pp.

We can apply a similar list to Microservices, where the term does not imply any particular technology. That’s difficult these days because so much marketecture conflates “Microservices” with some specific tool or product: “Simplify microservice-based application development and lifecycle management with Azure Service Fabric”. Well, you certainly don’t need PaaS to do microservices. And small means small enough to manage, and no more: not pizza metrics or lines of code.

So microservices does not imply:

  • Docker/containers
  • Azure/AWS
  • Serverless
  • Feature flags
  • Gitflow
  • NoSQL
  • Node.js
  • No more than 20 lines of code in deployed service
  • Service Fabric
  • AWS Lambda

Instead, focus more on the characteristics of a microservice:

  • Focused around a business domain
  • Technology agnostic API
  • Small
  • Autonomous
  • Autonomous
  • Autonomous

Most of the other descriptions or prescriptions around microservices are really just a side-effect of autonomy, but those technologies prescribed certainly aren’t a requirement to build a robust, scalable service.

My suggestion – go back to the DDD book, read the Building Microservices book. Just like DDD wasn’t about entities and repositories, microservices isn’t about Docker. And once you do get the concepts, then come back to the practitioners to see how they’re building applications with microservices, and see if those tools might be a great fit. Just don’t cargo-cult microservices like so many did before with DDD and SOA.


Categories: Blogs

Professional Services and Improving Your Product

Tyner Blain - Scott Sehlhorst - Fri, 06/03/2016 - 14:14

Prioritization at whiteboard

How do you work with professional services, consulting, field engineers, etc. to make your product better? Do you just treat their inputs as yet another channel for feature requests, or do you engage them as an incredibly potent market-sensing capability?

Conversation Starter

I received an excellent and insightful question from one of my former students in DIT’s product management degree program (enrollment for the next cohort closes in a month).  This student is now a VP of product, and kicked off a conversation with me about best practices for establishing a workflow for product managers to collaborate with professional services teams to improve the product.  I’ve seen several companies try different ways to make this work, with one consistent attribute that described all of the approaches – not-visibly-expensive.

Two nights ago I was chatting with another colleague about how his team has been tasked with delivering a set of features, and not a solution to the underlying problem.  As a result, he’s concerned about potential mis-investment of resources and the possibility of not genuinely solving the problem once the team is done with their tasks.

Combining the two conversations, I realized that there’s a common theme.  When I look at how I’ve engaged with professional services folks, I found I’ve had success with a particular approach (which would also help my colleague).

First, let’s unpack a couple typical ways I’ve seen companies engage “the field” to get market data, and think through why a different approach could be better.

Just Ingest

tickets for a short order cook

One team I worked with managed their product creation process (discover, design, develop) within Atlassian’s Confluence (wiki) and JIRA (ticketing) systems.  Product managers and owners would manage the backlog items as JIRA tickets.  Bugs were submitted as JIRA tickets, and triaged alongside feature requests.  There was a place where anyone (deployment engineers, for example) could submit feature requests based on what they were seeing on-site at customers.  Product managers would then “go fishing” within that pool of tickets looking for the next big idea.  This process did not have a lot of visible overhead, but suffered from a “throw it over the wall” dynamic, a lack of collaboration, and a well-established pattern (not just a risk) of good ideas lying fallow in the “pool” waiting to be discovered, evaluated, and implemented.

stack of tickets that all look the same

From the product team’s perspective, going fishing was like looking for needles in a haystack. The cognitive effort required to parse through low-value tickets and duplicates shifts your thinking to a point where it is challenging to apply critical thinking to any given idea. So in addition to the good ideas that were never discovered, many were touched but passed over.

This is certainly better than “no information from the field” but it emphasizes data and minimizes insight.

High Fidelity Connections

cook with ticket

One team I worked with had a product owner who formerly worked as a field support engineer.  This product owner reached out to her colleagues in the field regularly both socially (cultivating her network, and maintaining genuine connections with friends) and professionally – asking about trends, keeping her experience “current by proxy” as she realized her direct experience would grow stale with time.

This narrow-aperture channel was very high fidelity, but low in volume and limited in breadth of coverage.

Each idea that came in received thoughtful consideration, and the good ones informed product decisions. The weakness of this approach was lack of scale; it suffered from the danger of extrapolating “market sensing” from a narrow view of a subset of the market. Because this “just happened” within the way the product owner did her work, it appeared to accounting to be “free.” Many good ideas were presumably missed because they didn’t happen to come to the attention of this product owner’s network.

I put this in the bucket of good (and better than just ingesting), but still falling short of the objective of a product manager.

A product manager’s goal is to develop market insights, not collect market data.
  • The first approach, while easy to institutionalize, had so much noise that you couldn’t find the signals.
  • The second approach had a great (data) signal-to-noise ratio, but the signal was constrained by limited bandwidth, and only worked because of the product manager’s unique background, approach, and interpersonal skills.
Manifestation Shows Its Face Again

Another truism in product management is that people tell you about how problems manifest, and ask you to address those manifestations.  They very rarely tell you which problem needs to be solved – because they don’t think about it that way.  Product people think about underlying problems.

woman blowing her nose

When your nose is runny, you reach for a tissue to clean up the mess.  You’re treating the lowest-level symptom – a manifestation of the problem.  Some people will also reach for a decongestant, to stop their nose from running.  This too is treating the manifestation of the problem.  The underlying problem is illness, or allergies, or “something medical.”

Software problems are experienced the same way. “I need to be able to see more issues on the screen at one time, because it is time-consuming to move through page after page of issues, and go back and forth to reference other related issues.” This is the software version of asking for a tissue.  If you dig into the problem, you will discover “the user needs to address groups of related issues simultaneously, and the UI does not help to collect and process them together.” Suddenly, you have different items in your backlog.

“I need a way to see which problems are urgent so that I can address them first – please add an icon to the display of each urgent issue in the issue list.  Then, when I scan through the pages of issues, I can find the most urgent ones and address them first.” Another tissue issue. When you delve into the problem and find “the user needs to be able to address the urgent issues first, even though other non-urgent issues are treated first-in, first-out,” you have an opportunity to re-sort the list to make the urgent issues come first.  You have the opportunity to understand if there is a team of people working against a queue of issues – and to incorporate urgency into how those issues are assigned to individual users.

When inputs are coming from the field, in my experience, a large portion of them are passed on “verbatim” as customer requests, without parsing by the services professionals who captured them.  And most of the remainder are augmented by well-meaning team members who incorporate proposed solutions into the feedback.  Which is great.  Except they consistently ask for tissues, and perhaps helpfully suggest specifically how the tissue might best be implemented in our product.  Problem solving is a character trait that makes great professional services people great.  Problem discovery and abstraction is not often a hiring criterion for folks in the field.

Collaborative Workshops

collaborating to understand problems

There is another approach I’ve used with a few teams to effectively generate product insights based on the real-world observations of professional services team members.  The challenge with this approach is that the expense is visible – you’re pulling people out of the field, taking them off their accounts for a day or two.  On one sufficiently high-profile project, a workshop date was scheduled (~5 weeks in advance) and people were “told” to come.  They planned travel, managed customer commitments, etc.  We booked a large room for two days and rolled up our sleeves.  On another project, we opportunistically scheduled a half-day session the day after an all-hands quarterly meeting that brought everyone into the office anyway.  The cost of the “extra day” was a lot lower than the cost of a standalone event.

I’ve run two types of workshops that were very effective for this.  The first one frames problems in a broader context, and the second one explores alternatives and opportunities in a more targeted exercise.  Ironically, the tighter targeting leverages divergent thinking as well as convergent thinking, and the broader framing is purely convergent.

The first workshop is a co-opted customer journey mapping exercise.  I say co-opted because while I go through very many of the same steps, I am not attempting to improve the experience; I’m attempting to understand the nature of, and relative importance of solving, the problems a customer faces through the course of doing what they do while interacting with our product.  Without going into the specifics of running the workshop, the high level looks like the following:
  • Start out with a straw-man of what you believe the customer’s journey looks like – a storyboard is a good tool for making a visceral, engaging touchpoint for each step in the journey.  Review with the team and update the steps (add missing steps, re-order as appropriate, remove irrelevant and tag optional steps).
  • [Might not be needed, but worked when I did it] Start out with key personas identified, representing the customers for whom we are building product.  Workshop participants will be capturing their perspectives on the relative importance of problems from the point of view of those personas.
  • Within each step, elicit from the field all of the problems a customer faces within each of those steps.
  • Have the participants in the workshop prioritize the relative importance of each problem within each step (the 20/20 innovation game works great for this)
  • Have the participants prioritize the relative importance of “improving any particular step” relative to improving any other step. (Fibonacci story-pointing works well for this)
  • Record / take notes of the conversations – particularly the discussions where the participants are arguing about relative priority / relative importance.  Those conversations will uncover significant learnings that influence your thinking, and establish focused questions to which you will want answers later.  Before the workshop, you didn’t know which questions you needed to ask.
The heavy lifting comes later, in processing all of this information into multiple market hypotheses.  What is important is that you are gathering insights about the problems from the best-informed people, not simply processing a stack of tickets (or tissues).

The second workshop is an impact mapping workshop: focusing on a specific task that users are performing, and really diving into why they are doing it.  This activity applies both convergent and divergent thinking exercises to understand not only what people do (when using your product), but why they are doing it and how they measure success at their task.  From there you can discover alternative ways to solve the same problem, define measures of success for your product, and determine how to instrument and what to measure about your product.  If you haven’t already bought Gojko Adzic’s book on Impact Mapping, just do it now.

Conclusion

Professional services folks have massive amounts of customer data and insight – they only lack the (product management) skills to transform that insight into something usable by a product team.

The best way I’ve found to get value from that insight,  in a repeatable way across teams and individuals, is to incorporate running workshops that force teams to articulate what the customers are doing with the product (what are their goals and challenges).

When asking the questions this way, you get the answers you need.  By doing it in a collaborative workshop, you get more and better contributions from each of the team members than you would get through a series of interviews.

Categories: Blogs

Guest Blog: Cognition … what’s it all about?

Ben Linders - Fri, 06/03/2016 - 09:37

In this guest blog post on BenLinders.com, Andrew Mawson from Advanced Workplace Associates talks about their ongoing research on cognition. The aim of that research is to provide guidelines that help knowledge workers do the right things to maximise their cognitive performance.

Cognition is just a scientific term for the functioning of the brain. Until recently, measuring the effectiveness of the brain was a difficult challenge requiring laboratory conditions and very expensive equipment. Over the last few years, however, researchers have designed software that reliably measures the performance of different parts of the brain.



The brain has many different functions, known as ‘domains’, but there are 5 of these domains that seem to be most important and which can be measured using software.

Five Key Cognitive Domains

The primary domains are:

  • Attention: the ability to focus one’s perception on target visual or auditory stimuli and filter out unwanted distractions.
  • Executive functioning: the ability to strategically plan one’s actions, abstraction, and cognitive flexibility – the ability to change strategy as needed.
  • Psychomotor speed and accuracy (reaction time / processing speed): related functions that deal with how quickly a person can react to stimuli and process information.
  • Episodic Memory: the ability to encode, store, and recall information. In most studies memory is further divided into recognition, recall, verbal, visual, episodic, and working memory. Each type of memory has specific tasks associated with that memory function.
  • Working Memory: is the system responsible for the transient holding and processing of new and already-stored information, and is an important process for reasoning, comprehension, learning and memory updating.

You can probably see that cognition and these domains matter hugely if you are involved in knowledge work. If, for example, your ‘Attention’ domain is not as effective as it could be, your ability to concentrate in meetings and whilst reading could be lower than that of somebody with a better performing ‘Attention’ domain. Imagine you were in a meeting with 4 other people and you missed a vital piece of the discussion. Regardless of how good your memory might be, if the information isn’t getting as far as your memory, then you won’t be able to recall it. So at some future date, when you are dealing with one of those four colleagues, they may well assume that you absorbed the same information in the meeting as they did… but in fact you didn’t. You can probably see how this can lead to confusion: ‘Was that guy in the same meeting as you and me?’

Also, the environment in which you work may matter more to you if your ‘Attention’ domain is weaker than someone else’s. You may, for instance, have a greater need for a distraction-free environment than someone whose concentration allows them to block everything else out.

You can probably see that, depending on your role, different domains have greater significance to your effectiveness in delivering that role. If you are in a role where accuracy and speed are vital, perhaps as an airline pilot or an accountant, then the ‘Psychomotor speed and accuracy’ domain may be critical to you. If you are a senior leader involved in determining strategy or planning, then ‘Executive functioning’ will be more critical.

You can see pretty quickly that, in a world where the brains of your people are your key tools in generating value, the effectiveness of these domains matters enormously.

What we wanted to do in our research was to examine all the academic studies on ‘cognition’ from around the world, establish what makes the most difference to the performance of different parts of the brain, identify what advice could be provided to make improvements, and then come up with guidelines that help people do the right things to maximise their cognitive performance.

So over the next 6 months we’re going to reveal the results from our study at our Cognitive Fitness webpage, providing guidelines that you can use to improve your own cognitive performance and that of your people.

Andrew Mawson is the owner and founder of Advanced Workplace Associates. He eats, sleeps and breathes all things workplace; you can connect with Andrew at LinkedIn.

Categories: Blogs

Hmmm… What does that mean?

George Dinwiddie’s blog - Thu, 06/02/2016 - 21:46

On numerous occasions I’ve observed long-time members of the Agile community complain about misinterpretations of what Agile means, and how it is practiced. Frequently this is precipitated by yet another blog post about how terrible Agile is, and how it damaged the life of the blogger. Sometimes it’s triggered by a new pronouncement of THE way to practice Agile software development, often in ways that are hardly recognizable as Agile. Or THE way to practice software development is declared post-Agile, as if Agile is now obsolete and ready to be tossed in the trash bin.

The responses are both predictable and understandable. “If they’d only listen to what we actually said.” “Of course we don’t mean that.” “That’s not the Agile I know.” “Let’s take back Agile from those who misrepresent it!” It’s frustrating when people take terms you use and mangle the ideas that they represent to you into something unrecognizable and undesirable. I empathize with people making that response.

Recently I observed a similar situation where the shoe was on the other foot. I observed a long-time member of the Agile community describe a concept from another community where I’ve spent a lot of time and effort. The description was so far off the mark that I never would have guessed the concept that was being described.

As nearly as I can tell, they had formed a working definition from context, and from that definition had rejected the concept out of hand. They rejected it so forcefully that it seemed to taint their opinion of everyone connected with that concept. Indeed, “taint” may be insufficient, as they expressed their opinion not as “this person believes…” or “this person behaves…,” but as “this person is….”

I found this profoundly sad from several angles.

The concept is one that I’ve studied and used for years, and have developed layers of understanding that grow deeper over time. The person being dismissed is one I consider a friend and a mentor. It’s someone from whom I’ve learned a great deal over time, and that learning has significantly enriched my life.

The person making these comments is also someone I consider a friend. It distresses me greatly to watch one friend disparage another. I will generally defend the friend who is not there, as I did in this case and have done with regard to the other friend in other situations. This, of course, makes me a surrogate target representing the friend who is not there, and that increases the emotional magnitude of my distress.

The behavior of the friend who was present was strikingly similar to the behavior I’ve observed that same friend rail against. In the situations where they were defending the concepts of Agile software development from what I considered unwarranted attack, I had felt a close affinity with what they were saying.

Now, seeing the same person taking the opposite role in a different context, I was having a hard time reconciling the difference in their behavior in the two situations. “They should know better than to dismiss a concept they don’t understand!”

Flashback to the year 2000. I was taking a coffee-break with a couple of colleagues at work. We were making jokes, being witty as software developers are wont to do. At one point in the conversation I used the phrase “Extreme Programming” as the punchline to a joke.

One colleague asked, “Extreme Programming? What’s that?”

“I don’t really know.” I had been researching Design Patterns on the Portland Pattern Repository and had seen the term being heatedly discussed, but had considered it noise in the way of my study of Design Patterns. The fact that there were obvious arguments about it, and that the term seemed silly on its face, had led me to dismiss it out of hand. “I guess I should find out.”

This was the start of my study of Agile software development. I don’t know why my reaction, when confronted with my ignorance, was to enquire more deeply rather than defend my ignorance. I doubt that I react in that manner all the time. That particular reaction, though, has been hugely valuable for me. In many ways, it changed the direction of my life.

It’s not the term used, the name of the concept, that counts. It’s learning the nuances of the concept, starting with “Why would someone advocate this concept?” Assuming the answer to that question is that they’re an idiot leads nowhere productive. Investigating with curiosity often does.

What could I do about people who dismiss valuable concepts out of ignorance? I don’t have a good answer for that. Perhaps ignoring the situation is the easiest non-negative response I can take. Arguing never seems to help, in my experience.

But when the shoe is on the other foot and someone suggests something that seems ridiculous at first glance, asking “Hmmm… What does that mean?” has served me better than rejection. At worst it goes nowhere and I’m left with “I don’t know.” Sometimes, however, it has opened my eyes to possibilities that I’d not yet imagined.

Categories: Blogs

Workshop outputs from “How Architects nurture Technical Excellence”

thekua.com@work - Thu, 06/02/2016 - 15:45

Workshop background

Earlier this week, I ran a workshop at the first ever Agile Europe conference organised by the Agile Alliance in Gdansk, Poland. As described in the abstract:

Architects and architecture are often considered dirty words in the agile world, yet the Architect role and architectural thinking are essential amplifiers for technical excellence, which enable software agility.

In this workshop, we will explore different ways that teams achieve Technical Excellence and explore different tools and approaches that Architects use to successfully influence Technical Excellence.

During the workshop, the participants explored:

  • Examples of Technical Excellence
  • How to define Technical Excellence
  • The role of the Architect in agile environments
  • The broader responsibilities of an Architect working in agile environments
  • The specific behaviours and responsibilities of an Architect that help or hinder Technical Excellence

What follows are the results of the collective experiences of the workshop participants during Agile Europe 2016.

Slides: How Architects nurture Technical Excellence, by Patrick Kua.

Examples of Technical Excellence

  • A set of coding conventions & standards that are shared, discussed, abided by by the team
  • Introducing more formal code reviews worked wonders, code quality enabled by code reviews, user testing and coding standards, Peer code review process
  • Software modeling with UML
  • First time we’ve used in memory search index to solve severe performance RDBMS problems
  • If scrum is used, a good technical Definition of Done (DoD) is visible and applied
  • Shared APIs for internal and external consumers
  • Introducing ‘no estimates’ approach and delivering software/features well enough to be allowed to continue with it
  • Microservice architecture with docker
  • Team spirit
  • Listening to others (not! my idea is the best)
  • Keeping a project/software alive and used in prod through excellent customer support (most exclusively)
  • “The art must not suffer” as attitude in the team
  • Thinking wide!
  • Dev engineering into requirements
  • Problems clearly and explicitly reported (e.g. Toyota)
  • Using most recent libraries and ability to upgrade
  • Right tools for the job
  • Frequent availability of “something” working (like a daily build that may be incomplete functionality, but in principle works)
  • Specification by example
  • Setting up technical environment for new software, new team members quickly introduced to the project (clean, straightforward set up)
  • Conscious pursuit of Technical Excellence by the team through this being discussed in retros and elsewhere
  • Driver for a device executed on the device
  • Continuous learning (discover new tech), methodologies
  • Automatic deployment, DevOps tools use CI, CD, UT with TDD methodology, First implementation of CD in 2011 in the project I worked on, Multi-layered CI grid, CI env for all services, Continuous Integration and Delivery (daily use tools to support them), Continuous Integration, great CI
  • Measure quality (static analysis, test coverage), static code analysis integrated into IDE
  • Fail fast approach, feedback loop
  • Shader stats (statistical approach to compiler efficiency)
  • Lock less multithreaded scheduling algorithm
  • Heuristic algorithm for multi threaded attributes deduction
  • It is easy to extend the product without modifying everything, modularity of codebase
  • Learn how to use something complex (in depth)
  • Reuse over reinvention/reengineering
  • Ability to predict how a given solution will work/consequences
  • Good work with small effort (efficiency)
  • Simple design over all in one, it’s simple to understand what that technology really does, architecture of the product fits on whiteboard
Categories: Blogs

CQRS and REST: the perfect match

Jimmy Bogard - Wed, 06/01/2016 - 22:02

In many of my applications, the UI and API gravitate towards task-oriented UIs. Instead of “editing an invoice”, I “approve an invoice”, with specialized models, behaviors and screens just for accomplishing that task. But what happens when we move from a server-side application to one more distributed, to be accessed via an API?

In a previous post, I talked about the difference between entities, resources, and representations. It turns out that by removing the constraint around entities and resources, it opens the door to REST APIs that more closely match how we’d build the UI if it were a completely server-side application.

With a server side application, taking the example of invoices, I’d likely have a page to view invoices:

GET /invoices

This page would return the table of invoices, with links to view invoice details (or perhaps buttons to approve them). If I viewed invoice details, I’d click a link to view a page of invoice details:

GET /invoices/684

Because I prefer task-based UIs, this page would include links to specific activities you could request to perform. You might have an Approve link, a Deny link, comments, modifications etc. All of these are different actions one could take with an invoice. To approve an invoice, I’d click the link to see a page or modal:

GET /invoices/684/approve

The URLs aren’t important here, I could be on some crazy CMS that makes my URLs “GET /fizzbuzzcms/action.aspx?actionName=approve&entityId=684”, the important thing is it’s a distinct URL, therefore a distinct resource and a specific representation.

To actually approve the invoice, I fill in some information (perhaps some comments or something) and click “Approve” to submit the form:

POST /invoices/684/approve

The server will examine my form post, validate it, authorize the action, and if successful, will return a 3xx response:

HTTP/1.1 303 See Other
Location: /invoices/684

The POST, instead of creating a new resource, returned back with a response of “yeah I got it, see this other resource over here”. This is called the “Post-Redirect-Get” pattern. And it’s REST.

CQRS and REST

Not surprisingly, we can model our REST API exactly as we did our HTML-based web app. Though technically, our web app was already RESTful, it just served HTML as its representation.

Back to our API, let’s design a CQRS-centric set of resources. First, the collection resource:

GET /invoices

HTTP/1.1 200 OK
[
  {
    "id": 684,
    "invoiceNumber": "38042-L-275-684",
    "customerName": "Jon Smith",
    "orderTotal": 58.85,
    "href": "/invoices/684"
  },
  {
    "id": 688,
    "invoiceNumber": "33453-L-275-688",
    "customerName": "Maggie Smith",
    "orderTotal": 863.88,
    "href": "/invoices/688"
  }
]

I’m intentionally not using any established media type, just to illustrate the basics. No HAL or Siren or JSON-API etc.

Just like the HTML page, my collection resource could join in 20 tables to build out this representation, since we’ve already established there’s no connection between entities/tables and resources.

In my client, I can then follow the link to see more details about the invoice (or, alternatively, included links directly to actions). Following the details link:

GET /invoices/684

HTTP/1.1 200 OK
{
  "id": 684,
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "shippingAddress": "123 Anywhere"
  "lineItems": [ ]
  "href": "/invoices/684",
  "links": [
    { "rel": "approve", "prompt": "Approve", "href": "invoices/684/approve" },
    { "rel": "reject", "prompt": "Reject", "href": "invoices/684/reject" }
  ]
}

I now include links to additional resources, which in the CQRS world, those additional resources are commands. And just like our HTML version of things, these resources can return hypermedia controls, or, in the case of a modal dialog, I could have embedded the hypermedia controls inside the original response. Let’s go with the non-modal example:

GET /invoices/684/approve

HTTP/1.1 200 OK
{
  "invoiceNumber": "38042-L-275-684",
  "customerName": "Jon Smith",
  "orderTotal": 58.85,
  "href": "/invoices/684/approve",
  "fields": [
    { "type": "textarea", "optional": true, "name": "comments" }
  ],
  "prompt": "Approve"
}

In my command resource, I include enough information to instruct clients how to build a response (given they have SOME knowledge of our protocol). I even include some display information, as I would have in my HTML version. I have an array of fields, only one in my case, with enough information to instruct something to render it if necessary. I could then POST information up, perhaps with my JSON structure or form encoded if I liked, then get a response:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 303 See Other
Location: /invoices/684

Or, I could have my command return an immediate response and have its own data, because maybe approving an invoice kicks off its own workflow:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve/3506
{
  "id": 3506,
  "href": "/invoices/684/approve/3506",
  "status": "pending"
}

In that example I could follow the location or the body to the approve resource. Or maybe this is an asynchronous command, and approval acceptance doesn’t happen immediately and I want to model that explicitly:

POST /invoices/684/approve
comments=I love lamp

HTTP/1.1 202 Accepted
Location: /invoices/684/approve/3506
Retry-After: 120

I’ve received your approval request, and I’ve accepted it, but it’s not created yet so try this URL after 2 minutes. Or maybe approval is its own dedicated resource under an invoice, therefore I can only have one approval at a time, and my operation is idempotent. Then I can use PUT:

PUT /invoices/684/approve
comments=I love lamp

HTTP/1.1 201 Created
Location: /invoices/684/approve

If I do this, my resource is stored in that URL so I can then do a GET on that URL to see the status of the approval, and an invoice only gets one approval. Remember, PUT is idempotent and I’m operating under the resource identified by the URL. So PUT is only reserved for when the client can apply the request to that resource, not to some other one.
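For completeness, here’s a sketch of what that follow-up GET might return once the approval exists (the body shape is my assumption, following the earlier examples):

GET /invoices/684/approve

HTTP/1.1 200 OK
{
  "comments": "I love lamp",
  "status": "approved",
  "href": "/invoices/684/approve"
}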

In a nutshell, because I can create a CQRS application with plain HTML, it’s trivial to create a CQRS-based REST API. All I need to do is follow the same design guidelines on responses, pay attention to the HTTP protocol semantics, and I’ve created an API that’s both RESTful and CQRSful.


Categories: Blogs

New SAFe 4.0 Introduction Whitepaper

Agile Product Owner - Wed, 06/01/2016 - 20:56

Greetings everyone!

We told you it’s coming, and many of you have been asking about it, so I’m happy to announce that the first whitepaper on SAFe 4.0 has been published, and is available for you to download at scaledagile.com/safe-whitepaper.

The official title is SAFe 4.0 Introduction: Overview of the Scaled Agile Framework for Lean Software and Systems Engineering.

The white paper distills the essence of SAFe from hundreds of website pages to just twenty-five, and provides a high-level overview of the Framework, including its values, principles, practices, and strategy for implementation.

The white paper should be helpful in several ways:

  • Educate leaders, managers, and executives about how to apply SAFe for enterprise-class Lean-Agile software and systems development
  • Prepare students in the basics of SAFe prior to taking a course (e.g. Implementing SAFe 4.0 with SPC4 certification, Leading SAFe 4.0, etc.)
  • Help people understand SAFe 4.0 in greater detail, especially those who haven’t taken a SAFe 4.0 class

We look forward to your comments about the white paper and how the community might leverage it to support their SAFe transformations.

Stay tuned for updates about our SAFe Distilled book, which I’m co-authoring with Dean Leffingwell and which will be published by Pearson. I’m working with our publisher to see if we can share draft chapters for community feedback.

Always be SAFe,
–Richard Knaster, SAFe Fellow
@richardknaster

Categories: Blogs

The Agile Chip

Leading Agile - Mike Cottmeyer - Wed, 06/01/2016 - 15:04

I’m a Detroit guy and I like cars, especially fast cars, and nowadays they have computer chips you can install that make your car go faster. Imagine that, just plug in a chip and press on the accelerator and vroooom!

I’m also an agile coach and I see patterns. With almost every agile transformation I see the same pattern when we start to talk about velocity and sustainable pace. Some folks really zero in on just the velocity part. It feels like they have just found a new chip for their car and, with visions of caffeine-infused programmers, they are thinking about hammering their right foot on the accelerator.

Indy Cars

Going fast can be a beautiful thing. With these new action cams we can watch an Indy Car Champion take a car through its paces and it’s like art in motion. It is also a beautiful thing when an agile team starts getting great gains in velocity. There is art to that as well, and the teams that are achieving great gains in velocity travelled a long hard road to get there.

Teams that test-drive and produce both Clean and SOLID code from the outset, and that have fully automated test suites, are rare. These teams have well-groomed backlogs and they manage their commitments to the organization based on both previous velocity and sustainable pace. They are craftsmanship focused and continuously improving both their practices and their craft. But it doesn’t happen overnight, and it certainly isn’t as easy as installing “An Agile Chip”.

Teams need an opportunity to build up their skills and reach a stable velocity before they start looking at going faster. And they need help too. They need well-groomed backlogs, dependency-free Epics, a clear understanding of what “done” is, and a clear roadmap for wherever they are going. All three of these – backlogs, working tested software, and the strong teams that produce them – take some time to develop.

The backlog is like the roadmap for the journey, and all of the epics and stories are like gas in the tank. Working tested software is very much like the car we are driving, and of course the team is the driver. Yes, the team is the driver, and the team presses the accelerator.

So where does that leave a manager?  Especially a hands-on manager who is used to driving the team or committing to a specific velocity?

The team still needs you to be vested and committed to their success, but the role changes from being hands on, in the car, to more of an owner in the pits. Teams need someone to clear the way for them to go fast. They need someone to help alleviate organizational impediments that can slow them down. They need the commitment, but to a new cause.

Race tracks are designed for speed. There are very few rules around how you drive; you just need to concentrate on going fast. The roadway is clear of impediments and it’s kept clear. Pit crews help, spotters, telemetry, everything around the team basically, the whole environment, is set up to go fast.

A lot of organizations are set up like a busy City Street. There may be some architecture that can be simplified or modularized.  Process overhead can often be streamlined if not eliminated.  Build and Release Management may be ripe for automation.  Testing automation can be another opportunity, and the list goes on.

A vested owner, helping clear the way for the team, can produce incredible opportunity for the team to go faster because otherwise, the roadways are basically full of impediments and organizational stop lights that slow everything to a crawl. If the environment is not set up for speed you can have the fastest of Indy Race Cars and it’s not going to get anywhere. The team is just going to sit there revving its engine and wasting energy.

Today’s agile teams are brimming with brilliant knowledge workers who don’t need traditional management.  They have pride in what they do and they love seeing what they create getting used out there in the world.  They care about their customers, their teams and their craft, and they can be trusted with the race car.  Give them the keys, clear the roadway ahead and you’ll be pleasantly surprised at where they take you.

The post The Agile Chip appeared first on LeadingAgile.

Categories: Blogs

Getting Started with Agile Retrospectives

Ben Linders - Wed, 06/01/2016 - 11:01

Agile teams do Agile Retrospectives to learn and to adapt their way of working so they can improve continuously. The article "leren en continue verbeteren in agile" (learning and continuous improvement in agile) gives an introduction to retrospectives and describes how you can create the safety that lets people be open and honest. This article describes why you do retrospectives and how you can get started with them.

Why do retrospectives?

‘Insanity is doing the same things and expecting different results.’ If you want to solve the problems you run into, and deliver more value to your customers, then you have to change the way you do your work. That is why Agile recommends the use of retrospectives: to help teams solve problems themselves and improve!

What makes retrospectives different, and what benefits do they bring? One benefit is that retrospectives give power to the team: they empower the team. Since the team members run the retrospective themselves and decide together which improvement actions they will take, there will be little resistance to the changes that are needed.

Another benefit is that the actions agreed upon in a retrospective are carried out by the members of the team themselves: there is no hand-over! The team analyzes what happened, defines the actions, and the team members do them. This is much more effective, and also faster and cheaper :-).

These benefits make Agile retrospectives a better way to implement improvements. Retrospectives are one of the success factors for using Scrum effectively. You can use various retrospective exercises to add value for the business. Retrospectives are also a great tool for creating and maintaining stable teams, helping them work flexibly and effectively, and thereby becoming truly Agile and Lean.

How do you get started with retrospectives?

There are several ways to introduce retrospectives in organizations. You can train Scrum masters and other people who will facilitate retrospectives (for example with a Valuable Agile Retrospectives workshop) to teach them how to do retrospectives well. In the training they learn how to run retrospectives in Agile teams using the various exercises.

I started doing Agile retrospectives in ‘stealth mode’ in my own projects. I didn’t call them retrospectives, but used the term ‘evaluations’. Instead of waiting until the end of the project, I proposed to the teams that we do them every iteration (the project used RUP, with iterations of four to six weeks). The actions that came out of an evaluation were picked up right away in the next iteration. This felt very natural to the teams.

Whichever way you choose, make sure you keep doing retrospectives frequently and that the actions that come out of them actually get done. Even when everything seems to be going well, there are always ways to improve!

Get started!

Do you want to know more about retrospectives and get going with them in your teams? Luis Gonçalves and I wrote the book Getting Value out of Agile Retrospectives. This book has been translated into Dutch by a team of volunteers: Waardevolle Agile Retrospectives. The book helps you get benefits out of doing retrospectives. There is also an online Retrospective Exercises Toolbox that you can use to design your own valuable agile retrospectives.

On June 21 I’m giving the workshop Valuable Agile Retrospectives in Utrecht. In this successful workshop you will learn the why, what, and how of retrospectives, and you will practice various ways of doing retrospectives in teams.

Note: this article is based on an article published earlier on Computable.nl: Leren en verbeteren in Agile.

Categories: Blogs

Unix parallel: Populating all the USB sticks

Mark Needham - Wed, 06/01/2016 - 07:53

The day before Graph Connect Europe 2016 we needed to create a bunch of USB sticks containing Neo4j and the training materials. We eventually iterated our way to a half-decent approach which made use of the GNU parallel command, which I’ve always wanted to use!


But first I needed to get a USB hub so I could do lots of them at the same time. I bought the EasyAcc USB 3.0 but there are lots of other ones that do the same job.

Next I mounted all the USB sticks and renamed the volumes to NEO4J1 -> NEO4J7:

for i in 1 2 3 4 5 6 7; do diskutil renameVolume "USB DISK" NEO4J${i}; done

I then created a bash function called ‘duplicate’ to do the copying work:

function duplicate() {
  i=${1}
  echo ${i}
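  # rsync flags: -avP copies recursively with progress output, --size-only
  # skips files already on the stick with the same size (so re-runs are fast),
  # --delete removes leftovers from earlier attempts, and --exclude '.*'
  # keeps hidden files like .DS_Store off the sticks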
  time rsync -avP --size-only --delete --exclude '.*' --omit-dir-times /Users/markneedham/Downloads/graph-connect-europe-2016/ /Volumes/NEO4J${i}/
}

We can now call this function in parallel like so:

seq 1 7 | parallel duplicate
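One gotcha: parallel runs each job in a separate shell, so if it complains that ‘duplicate’ is not found, the function needs to be exported first. The optional --joblog flag records per-job timings, which is handy for spotting a slow stick:

export -f duplicate
seq 1 7 | parallel --joblog copy.log duplicate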

And that’s it. We didn’t get a 7x improvement in the throughput of USB creation from doing 7 in parallel: it took ~9 minutes to complete all 7, compared to 5 minutes each (so ~35 minutes) sequentially, roughly a 4x speedup. Presumably there’s still some part of the copying that is sequential further down – Amdahl’s law #ftw.

I want to go and find other things that I can pipe into parallel now!

Categories: Blogs

On Learning and Information

lizkeogh.com - Elizabeth Keogh - Tue, 05/31/2016 - 17:32

This has been an interesting year for me. At the end of March I came out of one of the largest Agile transformations ever attempted (still going, surprisingly well), and learned way more than I ever thought possible about how adoption works at scale (or doesn’t… making it safe-to-fail turns out to be important).

The learning keeps going. I’ve just done Sharon L. Bowman’s amazing “Training from the Back of the Room” course, and following the Enterprise Services Planning Executive Summit, I’ve signed up for the five-day course for that, too.

That last one’s exciting for me. I’ve been doing Agile for long enough now that I’m finding it hard to spot new learning opportunities within the Agile space. Sure, there’s still plenty for me to learn about psychology, we’re still getting that BDD message out and learning more all the time, and there are occasional gems like Paul Goddard’s “Improving Agile Teams” that go to places I hadn’t thought of.

It’s been a fair few years since I experienced something of a paradigm shift in thinking, though. The ESP Summit gave that to me and more.

Starting from Where You Are Now

Getting 50+ managers of MD level and up in a room together, with relatively few coaches, changes the dynamic of the conversations. It becomes far less about how our particular toolboxes can help, and more about what problems are still outstanding that we haven’t solved yet.

Of course, they’re all human problems. The thing is that it isn’t necessarily the current culture that’s the problem; it’s often self-supporting structures and systems that have been in place for a long time. Removing one can often lead to a lack of support for another, which cascades. Someone once referred to an Agile transformation at a client as “the worst implementation of Agile I’ve ever seen”, and they were right; except it wasn’t an implementation, but an adoption. Of course it’s hard to do Agile when you can’t get a server, you’ve got regulatory requirements to consider, you’ve got five main stakeholders for every project, nobody understands the new roles they’ve been asked to play and you’re still running a yearly budgeting cycle – just some of the common problems that I’ve come across in a number of large clients.

Unless you’ve got a sense of urgency so powerful that you’re willing to risk throwing the baby out with the bathwater, incremental change is the way to go, but where do you start, and what do you change first?

The thing I like most about Kanban, and about ESP, is that “start from where you are now” mentality. Sure, it would be fantastic if we could start creating cross-functional teams immediately. But even if we do that, in a large organization it still takes weeks or months to put together any group that can execute on the proposed ideas and get them live, and it’s hard to see the benefits without doing that.

There’s been a bit of a shift in the Agile space away from the notion that cross-functional teams are necessarily where we start, which means we’re shifting away from some of the core concepts of Agile itself.

Dan North and Chris Matts, my long-time friends and mentors, have been busy creating a thing called Business Mapping, in which they help organizations match their investments and budgets to the capacity they actually have to deliver, while slowly growing “staff liquidity” that allows for more flexible delivery.

Enterprise Services Planning achieves much the same result, with a focus on disciplined, data-driven change that I found challenging but exciting: firstly because I realise I haven’t done enough data collection in the past, and secondly because it directs leaders to trust maths, rather than instincts. This is still Kanban, but on steroids: not just people working together in a team, but teams working together; not just leadership at every level, but people using the information at their disposal to drive change and experiment.

The Advent of Adhocracy

Professor Julian Birkinshaw’s keynote was the biggest paradigm shift I’ve experienced since Dave Snowden introduced me to Cynefin, and those of you who know how much I love that little framework understand that I’m not using the phrase lightly.

Julian talks about three different ages:

The Industrial Age: Capital and labour are scarce resources. Creates a bureaucracy in which position is privileged, co-ordination achieved by rules, decisions made through hierarchy, and people motivated by extrinsic rewards.

The Information Age: Capital and labour are no longer scarce, but knowledge and information are. Creates a meritocracy in which knowledge is privileged, co-ordination achieved by mutual adjustment, decisions made through logical argument and people motivated by personal mastery.

The Post-Information Age: Knowledge and information are no longer scarce, but action and conviction are. Creates an adhocracy in which action is privileged, co-ordination is achieved around opportunity, decisions are made through experimentation and people are motivated by achievement.

As Julian talked about this, I found myself thinking about the difference between the start-ups I’ve worked with and the large, global organizations.

I wondered – could making the right kind of information more freely available, and helping people within those organizations achieve personal mastery, give an organization the ability to move into that “adhocracy”? There are still plenty of places which worry about cost per head, when the value is actually in the relationships between people – the value stream – and not the people as individuals. If we had better measurements of that value, would it help us improve those relationships? Would we, as coaches and consultants, develop more of an adhocracy ourselves, and be able to seize opportunities for change as and when they become available?

I keep hearing people within those large organizations talk about a “start-up mindset” and the ability to react to the market, but without Dan and Chris’s “staff liquidity”, knowledge still becomes the constraint; and without quick information about what’s working and what isn’t, small adjustments based on long-term plans, rather than routine experimentation around opportunity, become the norm.

So I’m going off to get myself more tools, so that I can help organizations to get that information, make sense of it, and create that flexibility; not just in their products and services, but in their changes and adoptions and transformations too.

And I’ll be thinking about this new pattern all the time. It feels like it fits into a bunch of other stuff, but I don’t know how yet.

Julian Birkinshaw says he has a book out next year. I can’t wait.


Categories: Blogs

Empathy and the Sponsored User

Empathy is an important attribute in Design Thinking. In order to solve our customer's problems, we really need to understand them. We need to walk in their shoes. But there's a limit to how far we can take this. We can spend hours talking to an astronaut, but we will never truly understand what it's like to walk in space.

Agile has the idea of the Product Owner, but you don’t see much written in agile about empathy. One approach that can help you get past the empathy hurdle mentioned above is to find a key user (or users) to be part of your team. Some organizations call this a Sponsored User: the organization leading the project sponsors this person’s participation in order to get their direct input into the product.

The sponsored user becomes one part of the multi-disciplinary team. Their input is important, but it isn’t the only input. While they may understand the customer perspective, you need to balance all the project constraints, especially the time and cost it may take to implement some of the sponsored user’s ideas. Don’t lose sight of your minimum viable product (MVP) in trying to make the sponsored user happy.

You may also have more than one sponsored user, depending on the breadth of the solution you are trying to provide. If your current release has two or three major themes or epics, you could have a different sponsored user for each epic. Contrast this to the idea of having a single product owner responsible for the overall solution.

So when empathy isn't enough, make the user part of the team in order to keep the direction of your product moving the right way.
Categories: Blogs

Making Mad Men More Agile

TV Agile - Mon, 05/30/2016 - 18:04
The Mad Men era in advertising agencies is truly on the way out, and collaborative agency models are spreading faster than the traditional top-down approach. Agile principles can be used to great effect in advertising agencies, not only for digital projects but also to increase productivity and foster collaboration, creativity and innovation. In […]
Categories: Blogs

Using RabbitMQ To Share Resources Between Resource-Intensive Requests

Derick Bailey - new ThoughtStream - Mon, 05/30/2016 - 13:30

A question was asked on StackOverflow about managing long-running, resource intensive processes in a way that does not hog up all resources for a given request. That is, in a scenario where a lot of work must be done and it will take a long time, how can you have multiple users served and handled at the same time, without one of them having to wait for their work to begin?

This can be a difficult question to answer at times. Sometimes a set of requests must be run serially, and there’s nothing you can do about it. In the case of the StackOverflow question, however, there was a specific scenario listed that can be managed in a “fair” way, with limited resources for handling the requests.

The Scenario: Emptying Trashcans

The original question and scenario are as follows:

I’m looking to solve a problem that I have with the FIFO nature of messaging servers and queues. In some cases, I’d like to distribute the messages in a queue to the pool of consumers on a criteria other than the message order it was delivered in. Ideally, this would prevent users from hogging shared resources in the system. Take this overly simplified scenario:

  • There is a feature within an application where a user can empty their trash can
  • This event dispatches a DELETE message for each item in trash can
  • The consumers for this queue invoke a web service that has a rate limited API

Given that each user can have very large volumes of messages in their trash can, what options do we have to allow concurrent processing of each trash can without regard to the enqueue time? It seems to me that there are a few obvious solutions:

  • Create a separate queue and pool of consumers for each user
  • Randomize the message delivery from a single queue to a single pool of consumers

In our case, creating a separate queue and managing the consumers for each user really isn’t practical. It can be done but I think I really prefer the second option if it’s reasonable. We’re using RabbitMQ but not necessarily tied to it if there is a technology more suited to this task.

Message Priorities And Timeouts?

As a first suggestion or idea to consider, the person asking the question talks about using message priorities and TTL (time-to-live) settings:

I’m entertaining the idea of using Rabbit’s message priorities to help randomize delivery. By randomly assigning a message a priority between 1 and 10, this should help distribute the messages. The problem with this method is that the messages with the lowest priority may be stuck in the queue forever if the queue is never completely emptied. I thought I could use a TTL on the message and then re-queue the message with an escalated priority but I noticed this in the docs:

Messages which should expire will still only expire from the head of the queue. This means that unlike with normal queues, even per-queue TTL can lead to expired lower-priority messages getting stuck behind non-expired higher priority ones. These messages will never be delivered, but they will appear in queue statistics.

This should generally be avoided, for the reasons mentioned in the docs. There’s too much potential for problems with messages never being delivered.

A timeout (TTL) is meant to tell RabbitMQ that a message no longer needs to be processed – or that it should be routed to a dead-letter queue so it can be processed by some other code.

Priorities may solve part of the problem, but they would introduce a scenario where some messages never get processed. If you have a priority 1 message sitting at the back of the queue, and you keep putting priority 2, 3, 5, 10, etc. messages into the queue, the priority 1 message might never be processed.
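
To make that pitfall concrete, here is roughly what the randomized-priority approach would look like with the pika Python client. The queue name and message body are illustrative assumptions, not part of the original question:

import random
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# priorities are only honoured on a queue declared with x-max-priority
channel.queue_declare(queue="trash", arguments={"x-max-priority": 10})

# randomly assigned priorities spread the load, but a priority-1 message
# can still starve behind a steady stream of higher-priority ones
channel.basic_publish(
    exchange="",
    routing_key="trash",
    body=b'{"user": "u1", "file": "a.txt"}',
    properties=pika.BasicProperties(priority=random.randint(1, 10)),
)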

For my money, I would suggest a different approach: sending the delete requests serially, one file at a time.

Single Delete-File Requests

In this scenario, I would suggest taking a multi-step approach to the problem using the idea of a “saga” (aka a long-running workflow object).

When a user wants to delete their trashcan, you send a single message through RabbitMQ to a service that can handle the delete process. That service would create an instance of the DeleteTrashcan saga for that user’s trashcan.

The DeleteTrashcan saga would gather a list of all files in the trashcan that need to be deleted. Then it would start doing the real work by sending a single request to delete the first file in the list.

With each request to delete a single file, the saga waits for a response to say the file was deleted.

When the saga receives the response message to say the previous file has been deleted, it sends out the next request to delete the next file.

Once all the files are deleted, the saga updates itself (and any other part of the system, as needed) to say the trash can is empty.
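
Here’s a minimal sketch of what a DeleteTrashcan saga could look like in Python with pika. The queue name and message shape are assumptions for illustration, the wiring that feeds delete confirmations back into on_file_deleted (a consumer on a response queue) is omitted, and a production saga would persist its state rather than keep it in memory:

import json
import pika

class DeleteTrashcanSaga:
    """Empties one trashcan by requesting one file deletion at a time."""

    def __init__(self, channel, user_id, files):
        self.channel = channel
        self.user_id = user_id
        self.remaining = list(files)  # files still waiting to be deleted

    def start(self):
        self._request_next_delete()

    def on_file_deleted(self, file_name):
        # called when the delete confirmation for file_name arrives
        self.remaining.remove(file_name)
        self._request_next_delete()

    def _request_next_delete(self):
        if not self.remaining:
            print(f"trashcan emptied for user {self.user_id}")
            return
        # exactly one outstanding delete-file request per trashcan
        message = {"user": self.user_id, "file": self.remaining[0]}
        self.channel.basic_publish(
            exchange="",
            routing_key="delete-file",
            body=json.dumps(message).encode(),
        )

# illustrative setup: one saga instance per trashcan-delete request
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="delete-file", durable=True)

saga = DeleteTrashcanSaga(channel, user_id="u1", files=["a.txt", "b.txt"])
saga.start()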

How This Handles Multiple Users

When you have a single user requesting a delete, things will happen fairly quickly for them. They will get their trash emptied soon, as all of the delete-file requests in the queue will belong to that user:

u1 = User 1 Trashcan Delete Request
|u1|u1|u1|u1|u1|u1|u1|u1|u1|u1done|

When there are multiple users requesting a delete, the process of sending one delete-file request at a time means each user will have an equal chance of having their request handled next.

For example, two users could have their requests intermingled, like this:

u1 = User 1 Trashcan Delete Request

u2 = User 2 Trashcan Delete Request
|u1|u1|u1|u1|u1|u1|u2|u2|u1|u2|u2|u1|u2|u1|u1done|u2|u2|u2|u2|u2done|

Note that the requests for the 2nd user don’t start until some time after the requests from the first user. As soon as the second user makes the request to delete the trashcan, though, the individual delete-file requests for that user start to show up in the queue. This happens because u1 is only sending 1 delete-file request at a time, allowing u2 to have requests show up as needed.

Overall, it will take a little longer for each person’s trashcan to be emptied, but they will see progress sooner rather than later. That’s an important part of making the system feel fast and responsive to their request.

But, this setup isn’t without potential problems.

Optimizing: Small File Set vs Large File Set

In a scenario where you have a small number of users with a small number of files, the above solution may prove to be slower than if you deleted all the files at once.

After all, there will be more messages sent across RabbitMQ – at least two for every file that needs to be deleted (one delete request, one delete-confirmation response).

To optimize this, you could do a couple of things:

  • Have a minimum trashcan size before you split up the work like this; below that minimum, just delete everything at once
  • Chunk the work into groups of files instead of one at a time (groups of 10 or 100 files may work better than a single file per message)

Either (or both) of these changes would help improve the overall performance of the process by batching the work and reducing the number of messages being sent. You would need to do some testing in your actual scenario to see which of them (or maybe both) helps, and at what settings.
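
As a rough sketch of the chunking option, reusing the channel from the saga sketch above (the group size of 100 is an arbitrary starting point to tune through testing, and the file list is illustrative):

import json

def chunks(items, size):
    # yield successive groups of `size` items from `items`
    for i in range(0, len(items), size):
        yield items[i:i + size]

# each message now carries a batch of files instead of a single one,
# cutting the number of request/response round-trips roughly 100-fold
files_to_delete = ["a.txt", "b.txt", "c.txt"]  # ... and so on
for batch in chunks(files_to_delete, size=100):
    message = {"user": "u1", "files": batch}
    channel.basic_publish(
        exchange="",
        routing_key="delete-files",
        body=json.dumps(message).encode(),
    )

The minimum-size rule then falls out naturally: if the whole trashcan fits in one batch, it gets deleted with a single message.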

Beyond the batching optimization, there is at least one additional problem you may face – too many users.

The Too-Many-Users Problem

If you have 2 or 3 users requesting deletes, it won’t be a big deal. Each will have their files deleted in a fairly short amount of time.

But if you have 100 or 1000 users requesting deletes, it could take a very long time for an individual to get their trashcan emptied – or even started! 

In this situation, you may need to introduce a higher-level controlling process that manages all requests to empty trashcans.

This would be yet another Saga to rate-limit the number of active DeleteTrashcan sagas.

For example, if you have 100 active requests for deleting trashcans, the rate-limiting saga may only start 10 of them. With 10 running, it would wait for one to finish before starting the next one.

This would introduce some delay in processing the requests, but it has the potential to reduce the overall time it takes to delete an individual trashcan.
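
A minimal sketch of that rate-limiting layer (the limit of 10 and the in-memory bookkeeping are illustrative assumptions; real saga state would live somewhere durable):

from collections import deque

class TrashcanRateLimiter:
    """Caps how many DeleteTrashcan sagas run at the same time."""

    def __init__(self, max_active=10):
        self.max_active = max_active
        self.active = set()      # user ids with a running saga
        self.waiting = deque()   # sagas queued for a free slot

    def request(self, saga):
        self.waiting.append(saga)
        self._fill_free_slots()

    def on_saga_finished(self, saga):
        # called when a DeleteTrashcan saga reports completion
        self.active.discard(saga.user_id)
        self._fill_free_slots()

    def _fill_free_slots(self):
        # start queued sagas while there is capacity
        while self.waiting and len(self.active) < self.max_active:
            saga = self.waiting.popleft()
            self.active.add(saga.user_id)
            saga.start()

On completion, each saga would call back into on_saga_finished so the next waiting trashcan can start.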

Again, you would need to test your actual scenario to see if this is needed and see what the limits should be, for performance reasons.

Scaling Up and Out

One additional consideration in the performance of these resource-intensive requests is that of scaling up and out.

Scaling up is the idea of buying “larger” (more resources) servers to handle the requests. Instead of having a server with 64GB of memory and 8 CPU cores, perhaps a box with 256GB of memory and 32 CPU cores would perform better and allow you to increase the number of concurrent requests.

While scaling up can increase the effectiveness of an individual process instance, it becomes expensive and has limitations in the amount of memory, CPU, etc.

Scaling out may be a preferable method in some cases, then.

This is the idea that you buy many smaller boxes and load balance between all of them, instead of relying on one box to do all the work. 

For example, instead of buying one box with 256GB of memory and 32 cores, you could buy 10 boxes with 32GB of memory and 4 cores each – 320GB and 40 cores in total. That gives you slightly more capacity than the single big box, while letting you add more capacity later simply by adding more boxes.

Scaling your system up and/or out may help to alleviate the resource-sharing problem, but it likely won’t eliminate it. If you’re dealing with a situation like this, a good setup that divides and batches the processing between users will likely reap rewards in the long run.

Many Considerations, Many Possible Solutions

There are many things that need to be considered in resource intensive requests.

  • The number of users
  • The number of steps that need to be taken
  • The resources used by each of those steps
  • Whether or not individual steps can be done in parallel or must be done serially
  • … and so much more

Ultimately, the solution I describe here was not suitable for the person who asked, due to some additional constraints. They did, however, find a good solution using networking “fair queueing” patterns.

You can read more about fair queueing with this post and this video.

Because each scenario is going to have different needs and different challenges, you will likely end up with a different solution again. That’s OK.

Design patterns, such as Sagas and fair queueing, are not meant to be all-in-one solutions. They are meant to be options that should be considered.

Having tools in your toolbox, like Sagas and batch processing, however, will give you options to consider when looking at these situations. 

Categories: Blogs

Have impediments solved by Scrum team members

Ben Linders - Mon, 05/30/2016 - 12:52

In my workshops I often play the impediment game with teams. In that game team members discuss impediments that happen during Scrum sprints or Kanban flows, and decide how to deal with them and who should solve each impediment. The “who” is most often “any team member”. But in practice I see that teams struggle with impediments, and often expect the Scrum master or a manager to solve them. Let’s explore what can be done to have impediments solved by Scrum team members.

What I have seen in all of my workshops where I played the impediment game is that, for 90% or more of the actions needed, team members agree that any team member can do the action. Occasionally there’s an action which they think should be done by the Scrum master. Assigning the action to somebody outside the team is rare. Even if the team needs a manager’s decision, they will allocate the action to a team member who will arrange for support and ensure follow-up.



What stops team members from taking action

Given that I hear team members saying in my workshops that they are willing and able to solve impediments, why does it still happen so often in practice that Scrum masters are the ones taking action? Or that actions are assigned to or picked up by managers outside the team?

What I hear when I’m coaching people is that it has to do with habits: it’s what people are used to. In the old days of waterfall projects, their manager would take action when there was a problem, or would tell them what to do. So team members start off in agile expecting the same from the Scrum master or a manager.

Another reason I hear is that team members are scared to take responsibility. They think they are not allowed to decide and take action. When they did so before, they were questioned or overruled, so they stopped taking action themselves. Now they behave as they feel the organization expects them to.

This is a cultural problem, not a skills or capability issue. In my workshops I don’t have to train people how to take action; they know that very well! I create a safe setting using the impediment game, and when they play it they fall back on their natural behavior, which is to solve problems themselves. They solve problems at home, in their sports teams, and when practicing their hobbies. Somehow they switch it off when they walk into the office. Let’s remove that switch!

Anything that slows down a team needs to be dealt with. Solving impediments matters! Handling impediments is a key capability for teams and organizations that want to increase their agility. And yes, you need to have everybody on board to deal with problems effectively, including team members.

Have impediments solved by Scrum team members

Earlier I wrote about why Scrum masters shouldn’t be the ones solving all impediments. There’s value in the Scrum master helping team members find their own ways and being an example, but it should be about teaching people how to fish instead of feeding them. You do not want the Scrum master to become a bottleneck for the team. Self-organizing means that the team as a whole is capable of dealing with impediments.

The solution to having impediments solved by team members is actually quite simple: stop Scrum masters and managers from solving them! Get rid of the mindset that it’s a Scrum master’s or manager’s task to solve problems; any professional can and should do that. Give team members the space so that they dare to take action.

There are many things that can help you enable and inspire team members to take action.

If team members are taking action, then you don’t need full-time Scrum masters. I’ve worked in very mature teams where we didn’t need a Scrum master, since everybody could and would do the things that Scrum masters do. When something needed to be done, we quickly decided who would do it, e.g. whoever was most suited, or by taking turns.

If you are still not convinced that impediments can and should be solved by team members after reading this article, then I invite you to come to one of my workshops to play the impediment game and find out.

Categories: Blogs