
Feed aggregator

Targetprocess v.3.10.6: minor bug fixes

TargetProcess - Edge of Chaos Blog - Mon, 12/12/2016 - 18:36
Improved Project and Team assignments from Person, Team, and Release views

This should have been fixed years ago. If you run dozens of Projects and Programs, then you know it's a pain to add a new user or a cross-project release to the system and assign them to all the necessary Projects/Programs. You would have to select them one by one from a list that was sorted alphabetically but had no grouping by Programs.

Now it's easy - all Projects appear grouped under their Programs, so you can search for a Project and assign users/releases to all Projects in a Program straight away.


Fixed Bugs
  • Follow plugin: fixed a problem where emails would not be sent if an inactive user was following the changed entity
  • Fixed audit history failures on entity views when process-independent custom fields were changed
  • Fixed a problem where the auto-complete function for tags was interfering with selection from a tags cloud
  • Fixed the POP email integration plugin, which skipped rules for a subject keyword that contained an apostrophe

 

Categories: Companies

Changing gears in how we think about Agile Transformation

Leading Agile - Mike Cottmeyer - Mon, 12/12/2016 - 15:00

Changing the way we think about Agile Transformation

Addressing culture, practices, and systems is essential to successfully navigate an Agile transformation. All three work together, but where you begin the transformation process is the difference between success and frustration.

agile transformation infographic

Download a high-res version of the infographic to share.

The post Changing gears in how we think about Agile Transformation appeared first on LeadingAgile.

Categories: Blogs

Making the Quantum Leap from Node.js to Microservices

Derick Bailey - new ThoughtStream - Mon, 12/12/2016 - 14:30

I recently received a question, via email, about how to get up and running with microservices in Node.js.

The person asking was interested in my Microservices w/ RabbitMQ Guide over at WatchMeCode, but was having a hard time wrapping their head around some of the aspects of microservices, discovery, shared resources, etc.

From the person asking:

I do have a question about something. Curious if it is covered in your rabbitMQ/Services section.

 

How do you make that leap to node micro-services. That seems so foreign to me. I have no idea what they are… and how to set up multiple micro-services that may need to “share resources”.

 

Like, having a node-sdk for your application etc..  then each of your micro-services registers with it… then other microservices can ‘request’ those other micro services.. by bringing in the sdk, then just calling of the registrations you want. How is that even possible? Is that even the right way to do it?

Your thoughts?

Honestly, there’s not much of a secret or trick to this.

Most of the hype and hand waving around microservice SDKs, service discovery, etc. is just that – hype and hand waving.

Talking about microservices and the surrounding tools, SDKs and services makes people look good, because it sounds “big” or “at scale”.

Microservices are a mostly simple thing.

They typically don’t need anything special, or complex.

And chances are, you already know how to build microservices.


But our industry is so fixated on companies like Netflix, AWS, Heroku, Docker, etc. that we can’t see past the hype of “web scale” and Docker (which is a great tool – I love it, but it’s way overhyped).

When it comes down to doing the work, the idea of a “microservice” is just a renaming of existing concepts – concepts that have been around for a very long time.

Call it “Service Oriented Architecture” or whatever you want, the purpose is to keep an individual service small and focused, allowing it to be used by larger systems.

A microservice is small, focused.

It will do one thing, and do it well.


If you’ve read much about the unix philosophy, this is just an extension of that idea.

A single unix / linux command line tool will do one thing, taking text as input and producing text as output. A microservice will do one thing, typically taking a message as input and producing a message as output.

The exact format and protocol of those messages can vary greatly – HTTP, SOAP, AMQP (rabbitmq) and other protocols, with JSON, XML or other common (or proprietary) data formats.

There’s nothing new or different, here. These are tools and technologies that you’ve been using for years, as a developer.

When it comes to “shared resources”, there are a number of different aspects to consider.

But once again, it’s usually easier than most people want to admit.

Do you have a database server that multiple apps use? That’s a shared resource, already.

Did you build a back-end service that handles all of your authentication needs, and it’s just an HTTP call away? That’s a shared resource.

The notion of shared resources goes well beyond the infrastructure and plumbing, though. And frankly, a database is a terrible integration layer. 

But when it comes down to it, sharing resources amongst services is, once again, just passing messages around like we have always done.

Where it gets interesting / complicated is the point where we start talking about scaling services and making them discoverable.

There are a lot of very fancy “secret management” and “service discovery” systems around.

Again, this isn’t a new problem. We’ve had “service discovery” in one form or another for many, many years now.


Remember WSDL? WCF-Discovery? Aspects of CORBA, DCOM, and countless other distributed systems architectures are all built around the idea of service discovery.

These tools make scalable microservices much easier – and there are a lot of new players in this realm, taking advantage of the current technology trends. 

Things like Consul or Eureka – and plenty of others – will help your applications “discover” the services they need.

There’s no magic in this, other than the “discovery service” providing the correct connection string for you (basically – it’s a bit more complex than that).

There are also “secret stores” like Vault and Torus that centralize the storage and retrieval of authentication / authorization credentials for services.

Instead of using JSON documents or environment variables to store API keys for a service, you connect to the “secret store” and ask for the API keys that you need.
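
To make that concrete, here is a minimal sketch of the idea, not the code of any particular client library. It assumes a Vault-style KV store, and the hostname, secret path, and token handling are hypothetical placeholders.

// Rough sketch only - assumes a Vault-style secret store with a KV v2
// secrets engine and a hypothetical path of secret/data/myapp/api-keys.
// The hostname and the environment variable for the token are placeholders.
var https = require("https");

function getApiKeys(callback) {
  var options = {
    hostname: "vault.example.com",           // hypothetical secret store host
    path: "/v1/secret/data/myapp/api-keys",  // hypothetical secret path
    headers: { "X-Vault-Token": process.env.VAULT_TOKEN }
  };

  https.get(options, function (res) {
    var body = "";
    res.on("data", function (chunk) { body += chunk; });
    res.on("end", function () {
      // KV v2 responses nest the secret under data.data
      callback(null, JSON.parse(body).data.data);
    });
  }).on("error", callback);
}

Functionally it is the same move as reading a config file - ask one place for the keys - with central management and auditing layered on top.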

These are great tools to use when you need to centrally manage services and API keys.

Most of the time, you won’t need these things.


I run a few systems built with a microservice architecture, in a very large healthcare network. These are critical systems that run nightly batch processes for the financial data warehouse.

I don’t bother with these large scale system architectures and scalability problems because I don’t need to, at this point.

I point my web app, my individual services, my scheduling system, and everything else at a known RabbitMQ and MongoDB instance, and that’s it.

I use JSON configuration files to store connection strings, and I let RabbitMQ be the central hub of communication for me.
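
As an illustration (a sketch, not the actual production config), that can be as boring as a JSON file of connection strings pulled in with require; the file name and keys below are made up.

// config.json (hypothetical file name and keys):
// {
//   "rabbitmq": "amqp://user:pass@rabbit.internal:5672",
//   "mongodb": "mongodb://mongo.internal:27017/reports"
// }

// Anywhere a service needs a connection string, it just loads the file:
var config = require("./config.json");

console.log("Connecting to RabbitMQ at", config.rabbitmq);
console.log("Connecting to MongoDB at", config.mongodb);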

It may not be “web scale” and won’t win me any awards, but I sleep well at night, knowing my system works.

The examples I show in the Microservices w/ RabbitMQ Guide are what I use in my production apps.

I built a simple Express web app that sends a message across RabbitMQ.
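
A minimal sketch of that shape, assuming the amqplib client; the "work.requests" queue name and the route are made-up placeholders rather than the code from the guide.

// Illustrative sketch, not the guide's code. The queue name and route are
// hypothetical; the RabbitMQ URL would normally come from configuration.
var express = require("express");
var amqp = require("amqplib"); // promise-based RabbitMQ client

var app = express();

// Connect once at startup and reuse the channel for every request.
var channelReady = amqp.connect("amqp://localhost")
  .then(function (conn) { return conn.createChannel(); })
  .then(function (ch) {
    return ch.assertQueue("work.requests").then(function () { return ch; });
  });

// The web app doesn't do the work itself - it just drops a message on the queue.
app.post("/reports/nightly", function (req, res) {
  channelReady
    .then(function (ch) {
      ch.sendToQueue(
        "work.requests",
        Buffer.from(JSON.stringify({ job: "nightly-report", requestedAt: Date.now() }))
      );
      res.status(202).json({ queued: true });
    })
    .catch(function (err) {
      res.status(500).json({ error: err.message });
    });
});

app.listen(3000);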


I built a simple Node.js “back-end” service that receives the message from RabbitMQ, and uses the data in the message to run some other process.
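
And the receiving side is just another small Node process consuming from that same hypothetical queue - again a sketch, not the code from the guide.

// Illustrative sketch of the receiving side, consuming from the same
// hypothetical "work.requests" queue as the publisher sketch above.
var amqp = require("amqplib");

amqp.connect("amqp://localhost")
  .then(function (conn) { return conn.createChannel(); })
  .then(function (ch) {
    return ch.assertQueue("work.requests").then(function () {
      return ch.consume("work.requests", function (msg) {
        var job = JSON.parse(msg.content.toString());

        // "run some other process" - whatever work this small service owns
        console.log("handling job:", job.job);

        ch.ack(msg); // tell RabbitMQ the message has been handled
      });
    });
  })
  .catch(function (err) {
    console.error("consumer failed to start", err);
    process.exit(1);
  });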

Sometimes there is a return message, sometimes there’s an update to the database.

Sometimes the database in question is “shared” across multiple apps (though I try to avoid this).

And to run all of these services, I use simple tools like forever and pm2.

I don’t get into fancy “orchestration” and “service discovery” because I don’t need it. I just need to make sure my “node web/bin/www” command line is run when my server reboots, or when the node process crashes.

Building a system with a microservice architecture is not a quantum leap.

It’s just building small apps that do something useful, and coordinating them with HTTP, AMQP, etc.

The “trick” is figuring out which parts of your apps can and should be put into a separate node executable process.

That knowledge only comes from experience… a.k.a. trial and error.

For example, stop sending email from your web app. Instead, push a message across RabbitMQ and have another node process read that message and send the actual email.

Find simple places where the web app can stop being responsible for “simple” things, like sending email, and you’ll be well on your way to building a proper microservices architecture.
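
Sketched out (again illustrative, not the guide's code), that email split is just the consumer pattern with a mail library at the end; the "email.send" queue, SMTP host, and addresses are placeholders, and nodemailer is only one option.

// Illustrative email worker. Queue name, SMTP settings, and addresses are
// placeholders; nodemailer is just one common choice for sending the mail.
var amqp = require("amqplib");
var nodemailer = require("nodemailer");

var transporter = nodemailer.createTransport({
  host: "smtp.example.com", // placeholder SMTP server
  port: 587
});

amqp.connect("amqp://localhost")
  .then(function (conn) { return conn.createChannel(); })
  .then(function (ch) {
    return ch.assertQueue("email.send").then(function () {
      return ch.consume("email.send", function (msg) {
        var email = JSON.parse(msg.content.toString());

        transporter.sendMail(
          {
            from: "no-reply@example.com",
            to: email.to,
            subject: email.subject,
            text: email.body
          },
          function (err) {
            if (err) {
              return ch.nack(msg); // send failed; requeue (naive retry policy)
            }
            ch.ack(msg);
          }
        );
      });
    });
  });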

Getting Started Is As Simple As Learning RabbitMQ

A part of WatchMeCode, the Guide to Building Microservices w/ Node.js and RabbitMQ will get you up and running fast.


You’ll start with an introduction to RabbitMQ, and how to install and configure it. You’ll learn the basics of messaging with RabbitMQ, and see the most common application design patterns for messaging.

You’ll learn how to integrate RabbitMQ into your own services, your web application, and more. And you’ll do it with simplicity and ease, reducing lines of code while increasing features and capabilities in your applications.

From there, this guide will show you how to build a simple, production-ready microservice using Express as a front-end API and RabbitMQ as the communication protocol to a back-end service.

It’s code that comes straight from my production needs, used by thousands of developers around the world as part of my WatchMeCode services infrastructure.

There’s no better way to get started, and no better resource than real-world code that exemplifies simplicity, running in a production environment.

The post Making the Quantum Leap from Node.js to Microservices appeared first on DerickBailey.com.

Categories: Blogs

Microservices, not so much news after all?

Xebia Blog - Sun, 12/11/2016 - 19:07
A while ago at Xebia we tried to streamline our microservices effort. In a kick-off session, we got quite badly sidetracked (as is often the case) by a meta discussion about what would be the appropriate context and process to develop microservices. After an hour of back-and-forth, we reached consensus that might be helpful
Categories: Companies

The Simple Leader: Gratitude

Evolving Excellence - Sun, 12/11/2016 - 11:40

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen

Be thankful for what you have; you’ll end up having more.
If you concentrate on what you don’t have, you will never, ever have enough.
– Oprah Winfrey

Oftentimes we become so focused on fixing problems and resolving issues that our entire sense of reality shifts. We begin to live in a bubble that encompasses the negative and blocks the positive. Because they demand our attention, the negative aspects of work and life consume a disproportionate amount of our thinking and eventually distort our perceived reality.

You can re-center your perspective by grounding yourself in thanks for what is good with you or your team. What are you thankful for? Think about your health, your relationships, your business success. There will be more to be thankful for than you realize. Use a few minutes in the shower each morning, the first few minutes of your meditation, or even the first few minutes of each staff meeting to identify specific people and situations to be thankful for. Try to say thanks to at least one person each day, meaningfully and mindfully. Even better, write someone a thank-you note by hand. Make it a self-sustaining habit, a routine.

I have much to be thankful for: my parents teaching me the joy of learning, which eventually led me to discover Lean and Zen; my wife teaching me how to be more compassionate, which has completely changed my perspective on life; and business partners and associates that have put up with some of my wild ideas.

Reflecting on gratitude at the beginning and end of each day creates calm bookends to what can be chaos for me. As problem solvers, we are naturally predisposed to focus on the negative, taking for granted the positive to the extent that we often become oblivious and unaware of just how much positive there is in our lives. Intentionally focusing on gratitude brings that perspective back to reality. Expressing gratitude in daily life, complimenting and helping others, or just smiling, reinforces the power of being thankful. Intentionally finding gratitude every day has changed my perspective on life more than any other personal or professional leadership habit. I've discovered I have a lot to be thankful for, which helps me be more generous, sympathetic, and empathetic.

Categories: Blogs

Links for 2016-12-10 [del.icio.us]

Zachariah Young - Sun, 12/11/2016 - 10:00
Categories: Blogs

Announcement: We are Hiring a Training Sales Person

Learn more about transforming people, process and culture with the Real Agility Program

We are looking for a highly-motivated person to help us take our training business to the next level! This position is focused on sales, but includes other business development activities. The successful candidate for our training sales position will help us in several areas of growth including:

  • direct sales of our existing training offerings
  • expansion of our training loyalty program
  • launch and expansion of new training offerings
  • expansion of the market to new locations outside the GTA: Vancouver, Montreal, Ottawa, etc.
  • expansion of our partner/reseller network

Please check out the full job posting for a Training Sales Person here.  You can send that link to others who might be interested!


Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Announcement: We are Hiring a Training Sales Person appeared first on Agile Advice.

Categories: Blogs

Cost Accounting is a Problem for Agile (and Knowledge Work)

Johanna Rothman - Fri, 12/09/2016 - 16:30

The more I work with project portfolio teams and program managers, the more I understand one thing: Cost accounting makes little sense in the small for agile, maybe for all knowledge work. I should say that I often see cost accounting in the form of activity-based accounting. Each function contributes to some of the cost of the thing you’re producing.

Here’s why. Cost accounting is about breaking down a thing into its component parts, costing all of them, and then adding up the costs as you build it. Cost accounting makes sense for manufacturing, where you build the same thing over and over. Taking costs out of components makes sense. Taking costs out of the systems you use to create the product makes sense, too.

Cost accounting makes some sense for construction because the value is in the finished road or building. Yes, there is certainly knowledge work in parts of construction, but the value is in the final finished deliverable.

Software is neither construction nor manufacturing. Software is learning, where we deliver learning as pieces of finished product. When we have finished learning enough for now, we stop working on the deliverable.

Agile allows us to deliver pieces of value as we proceed. Other life cycles, such as incremental life cycles or even releasing your product in short waterfalls, allow us to see value as we proceed before the whole darn thing is “done.”

We might need to know about costs as we proceed. That's why we can calculate the run rate for a team and see how our feature throughput is working. If we start looking at feature throughput and the cost per feature, we can put pressure on ourselves to reduce the size of each feature. That would reduce the cost per feature. Smaller features allow us to learn faster and see value faster.

Cost accounting has a big problem. It does not account for Cost of Delay, which is a huge cost in many organizations. (See Diving for Hidden Treasures: Uncovering the Cost of Delay in Your Project Portfolio to read all about Cost of Delay.)

This is one of the reasons I like feature sizes of one day or less. You count features for throughput and use run rate as a way to estimate cost.

I don’t know enough about accounting to say more. As I learn with my clients and especially learn about the pressures on them, I’ll post more. I do know this: the more we talk about the cost of a feature without talking about its value, the less we understand about how our software can help us. The more we talk about velocity instead of feature throughput, the less we know about what it really costs for us to develop our software.

Cost accounting is about surrogate measures (earned value, the cost of tasks, etc.) instead of valuing the whole. Agile gives us a way to see and use value on a daily basis. It’s opposed to the way cost accounting works. Cost accounting is a problem. Not insurmountable, but a problem nevertheless.

There are alternatives to cost accounting: throughput accounting and lean accounting. To use those ideas, you need to enlist the finance people in an agile transformation. As always, it depends on your goals.

Categories: Blogs


Scrum Immersion workshop at GameStop - Case Study

Agile Complexification Inverter - Thu, 12/08/2016 - 21:10
Here's an overview of a Scrum Immersion workshop done at GameStop this month, as a case study example.

Normally these workshops start with the leadership (the stakeholders or shareholders), who have a vision for a product (or project). This time we skipped this activity.

The purpose of the workshop is to ensure alignment between the leadership team and the Agile Coaches with regard to the upcoming Scrum workshop for the team(s): set expectations for a transition from current (ad-hoc) practices to Scrum, and explain and educate on the role of the Product Owner.

Expected Outcomes:
  • Create a transition plan/schedule
  • Set realistic expectations for transition and next release
  • Overview of Scrum & leadership in an Agile environment
  • Identify a Scrum Product Owner – review role expectations
  • Alignment on Project/Program purpose or vision
  • Release goal (within context of Project/Program & Scrum transition)

Once we have alignment on the Product Owner role and the Project Vision we typically do a second workshop for the PO to elaborate the Product Vision into a Backlog. This time we skipped this activity.

The purpose of the workshop is to educate the Product Owner (one person) and prepare a product backlog for the Scrum immersion workshop, and also to include the various consultants, SMEs, BAs, developers, etc. in the backlog grooming process. Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Overview of Scrum & Product Owner role (and how the team supports this role)
  • Set PO role responsibilities and expectations
  • Alignment of Release goal (within context of Project/Program & Scrum transition)
  • Product Backlog ordered (prioritized) for the first 2 sprints
  • Agreement to Scrum cadence for planning meetings and grooming backlog and sprint review meetings

Once we have a PO engaged and we have a Product Backlog it is time to launch the team with a workshop - this activity typically requires from 2 to 5 days. This is the activity we did at GameStop this week.
The primary purpose of the workshop is to teach just enough of the Scrum process framework and the Agile mindset to get the team functioning as a Scrum team and working on the product backlog immediately after the workshop ends (begin Sprint One). Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Basic mechanics of Scrum process framework
  • Understanding of additional engineering practices required to be an effective Scrum team
  • A groomed / refined product backlog for 1–3 iterations
  • A backlog that is estimated for 1–3 iterations
  • A Release plan, and expectations of its fidelity – plans to re-plan
  • Ability to start the very next day with Sprint Planning

Images from the workshop

The team brainstormed and then prioritized the objectives and activities of the workshop.


Purpose and Objectives of the Workshop

The team then prioritized the Meta backlog (a list of both work items and learning items and activities) for the workshop.

Meta Backlog of workshop teams - ordered by participants
Possible PBI for Next Meta Sprint
Possible PBI for Later Sprints
Possible PBI for Some Day
Possible PBI for Another Month or Never
A few examples of work products (outcomes) from the workshop.

Affinity grouping of Persona for the user role in stories
Project Success Sliders activity
Team Roster (# of teams person is on)
A few team members working hard
Three stories written during elaboration activity
A few stories after Affinity Estimation
Release Planning: Using the concept of deriving duration based upon the estimated effort. We made some assumptions about the business's desired outcome: to finish the complete product backlog by a fixed date.
The 1st iteration of a Release Plan
That didn't feel good to the team, so we tried a different approach: fix the scope and cost, but allow a variable timeframe.
The 2nd iteration of a Release Plan
That didn't feel good to the PO, so we tried again. This time we fixed the cost and time, but varied the features, and broke the product backlog into milestones of releasable, valuable software.
The 3rd iteration of a Release Plan
This initial release plan feels better to both the team and the PO, so we start here. Ready for Sprint Planning tomorrow.



Categories: Blogs

Announcement: Additional Writing Workshop

Johanna Rothman - Thu, 12/08/2016 - 19:47

I have enough people in the Writing Workshop 1: Write Non-Fiction to Enhance Your Business and Reputation to add a second section. You are right for this workshop if:

  • You are thinking about writing more
  • You want to improve your writing
  • You want to develop a regular habit of writing

If a blank piece of paper scares you, this is the right workshop for you.

If you are an experienced writer and want to bring your skills to the next level, you want Writing Workshop 2: Secrets of Successful Non-Fiction Writers.

If you’re not sure which workshop is right for you, email me. I can help you decide.

Oh, and if you’re a product owner, please do consider my Practical Product Owner workshop. I still have room in that workshop.

The early bird registration ends December 16, 2016. If I fill both sections earlier, I will stop registration then.

Categories: Blogs

Our 5 year birthday!

Growing Agile - Thu, 12/08/2016 - 10:27
Growing Agile turned 5 in November 2016! Here is a fun infographic on what we have done in those 5 years. Thanks to those who could join us for our birthday celebration in Cape Town!  
Categories: Companies

Why Visual Management Techniques are so Powerful

Agile Complexification Inverter - Wed, 12/07/2016 - 23:58


How does the brain process visual cues from the environment and synthesize meaning about an ever-changing landscape? Tom Wujec explains the creation of mental models and why Autodesk invests in visual management techniques to plan their strategic roadmaps.




Also, in one of Tom Wujec's talks, How to Make Toast, he explains another important point of visual management - systems thinking and group work.

Don't worry... the mind will do all the work.  It will fill in the missing details, and abstract the patterns into the concept.  Here's an exercise, Squiggle Birds by David Gray, to experience this.




See Also:
Your Brain on Scrum - Michael de la Maza on InfoQ

Visual Management Blog

Visual Thinking - Wikipedia

David Gray on Visual Thinking

Ultimate Wallboard Challenge 2010 - time-lapse of Vodafone Web Team's board

iPad Interactive Whiteboard Remote

Multitasking: This is your brain on Media - Infographic by Column Five Media.
Categories: Blogs

Want to Engage Managers in Agile? Stop Using Agile Jargon

Bruno Collet - Agility and Governance - Wed, 12/07/2016 - 23:15
If you want to engage managers and executives in Agile, avoid agile jargon from existing agile frameworks and models such as: sprint (Scrum), servant-leader, release train (SAFe), or made-up words such as anticipaction (Agile Profile).

Not only do managers and executives not have a common understanding of agile terminology, but more importantly they do not necessarily have an interest in "becoming agile" in the first place. At first this might seem like an intractable obstacle, but it's actually quite refreshing. Indeed, it helps to talk in terms of real business and management challenges instead of focusing on agile concepts and methodology. And guess what? Agile values and practices can be expressed quite well in common management language.
"Agile values and practices can be expressed quite well in common management language."
Actually, the broader and higher my interventions for agile transformation, the less I talk about agile.

In my opinion, promoters of agile methods, tools and frameworks have packaged agile commercially, creating a layer of opacity that backfires when we address management and organizational agility.

"The higher my level of intervention for agile transformation, the less I talk about agile".
Agile jargon originated from two broad sources that have shaped the general (mis)understanding of agile today. First, the software development grassroots, which became popular more than a decade ago with frameworks such as Scrum and Extreme Programming (XP) and is now going more enterprise-level with SAFe and many other siblings. And second, strategy and management firms and business schools such as Gartner and Harvard, as well as leading management authors such as Steve Denning (Radical Management) and Jurgen Appelo (Management 3.0), who joined the trend a few years ago and developed their own agile terminology.

Let's have a look at a list of agile wordings that have created confusion in many, many discussions (I've been there…) and find an equivalent in common management language.

  • VUCA = volatile (leave UCA for long-form explanation)
  • Bimodal = organized both for efficiency and innovation (or for predictability and exploration)
  • Velocity = speed of value delivery
  • User story, epic = product feature
  • Sprint = x-week iteration (where x is a number)
  • MVP, minimum viable product = first partial solution
  • Increment, release = new features delivered to users
  • Product owner = client representative
  • Scrum Master = team facilitator
  • Synchronize = fast decision-making across hierarchy and functions
  • Servant-leader = coach-style manager
  • Product backlog = prioritized list of features
  • Whole team, squad = end-to-end team
  • Budget-boxed = fixed budget
  • Time-boxed = fixed delivery date
  • Variable scope = dynamic prioritized list of features
  • Delivered value = must refer to context-specific metrics (that's a tricky one - left unexplained, many see it as ROI, which it is not)
Purists will find that these are not entirely equivalent. I can live with that. The goal is to get a message through, not to write a glossary.

Share your experience!

ABOUT THE AUTHOR:

Bruno Collet helps organizations benefit from agility, mainly in the area of digital transformation. He develops the Metamorphose framework to accelerate transformation results. His career has led him to collaborate with organizations in Montreal, Belgium and elsewhere, including the Société de Transport de Montréal (STM), National Bank of Canada, Loto-Québec and Proximus (formerly Belgacom). Holder of MBA and MScIT degrees and of PMP and PMI-ACP certifications, Bruno Collet is also the author of a blog on agile transformation and a speaker at PMI, Agile Tour Montréal and Agile China.

Disclaimer: trademarks used in this article are used for illustrative purpose only. The article is not sponsored by, or associated with, the trademark owners.
Categories: Blogs

Using Flow Metrics to Deliver Faster

The best way to optimize the delivery of value to your customers is to optimize flow. Dan Vacanti explains what you need to begin using flow metrics.

The post Using Flow Metrics to Deliver Faster appeared first on Blog | LeanKit.

Categories: Companies

Kanban vs. Scrum, take two

Kanbanery - Wed, 12/07/2016 - 15:55

Software development is not manufacturing. You can’t take a system designed for building cars and use it to manage software projects. Building a car is a series of identical tasks which don’t change as long as things are going well. There’s one best way to perform each task. Software development is knowledge work. Every step is different, every time. It’s about discovering the best way to do things, and then discovering better ways. It’s creative work, not subject to the rules of manufacturing process management.

Yes. Of course. I agree. But I’m getting really tired of hearing it.

This post is going to be in the form of a rant. Not just because I need it; that would be just selfish. I’m choosing the format of a rant because this needs to be said once and for all in a way that will make an impression.

Because the Kanban Method is not the Toyota Production System. No more than Scrum is rugby.

You never hear anyone saying that software development is not a game. You can’t take a tactic designed for deciding which team gets control of a leather ball and use it to manage software projects. Forcing your way through a concentrated group of blockers is a physical challenge while software development is knowledge work.

I can imagine that the names of two tools frequently applied (often at the same time) to improving value delivery by software companies had similar origins.

Jeff Sutherland and Ken Schwaber, the inventors of Scrum, might have been rugby fans, or at least familiar with the sport. Seeing eight people gather every day into a tight circle to work together (the daily standup) might have reminded them of something, like a scrum in rugby. Beyond that, the analogy makes no sense. In rugby, a scrum is used to restart the play after it's been stopped by an official for some minor infraction of the rules. No one ever says, "we can't use Scrum in our software team because we aren't restarting work after someone got in trouble for breaking company policy." Everyone knows that's not the kind of scrum we're talking about.

David Anderson, inventor of the Kanban Method, might have seen that cards in a stack or slots on a board make visible things that are normally invisible, like demand or capacity. That's not unlike the mechanism they had to find to make things like capacity in a software development system visible and, therefore, manageable. What are those things called? In Japanese, the word kanban means a visual signal. In Chinese, it means a signpost or a board. So what to call a new tool which incorporates a board full of visual signals?


Source: Scrum  & Kanban

Or Scrum is related to rugby in the same way that the Kanban Method is related to the Toyota Production System. Both borrowed one word based on a loose association. That’s all.

So indeed, Toyota does use boards on the wall full of visual signals of invisible things that they want to manage. Rugby teams do sometimes put their heads together and work as a group to get something done. But there the similarities end. Can we just leave it at that, please?

The article Kanban vs. Scrum, take two originally appeared on Kanbanery.

Categories: Companies

Stable Teams – Predictability Edition

Leading Agile - Mike Cottmeyer - Wed, 12/07/2016 - 15:00

Today I am addressing a key component that helps teams succeed: Stable Teams.

Stable Teams Defined

I find that many people interpret those two words in many different ways. Here’s my definition.

A stable team is a team that stays together, ideally for multiple releases or quarters. They have a few characteristics:

  1. Individuals on the team only belong to one team.
  2. The Team has one backlog.
  3. The backlog is formed by a consistent entity.
  4. The team stays together for multiple releases or quarters barring life events like illness, HR issues, etc.
Context

Stable teams can be applied at any level in an organization. Communities of practice, agile delivery teams, programs and portfolios all have a backlog.

Results

Plenty of studies have been done on the appropriate size of the team or how productive the teams can be if they are stable. Here are the key points:

  1. Teams that stay together have a higher throughput.
  2. Teams that stay together are more predictable.
  3. Teams that stay together are happier.
The Softer Side

Our community is multi-dimensional in how agile is implemented.  I implement a structure first approach. Some implement a cultural approach so that they change the mindset of the organization. Still, others implement a practices approach where practices are taught and the teams are responsible for the outcome.

The reason I choose a structure approach is fairly straightforward. It's not that I don't care about culture and practices, but it's out of respect for the longevity of the team, the culture, and the practices. I am intentionally creating a container that protects the team. Protects them from what, you might ask?

Culture

With a cultural approach, the worst thing I can do is teach accountability, self-organization, and how to be a generalist, and then have that subverted by team members being swapped off of teams. Teams that stay together figure out how to use each other's unique strengths over time. They can be responsible for their own outcome and held accountable for it too. They can figure out how to best do the job at hand. If I pull someone out of the team, that kills accountability and their self-organization is blown to bits. Go ahead and try to hold them accountable. They will hate you for the long hours they pull trying to get the work done. Respect the structure, and seek to change culture after protecting the container.

Practices

With a practices approach, the container for the team to take those practices and run with them isn't created. The team may not be able to effectively create a working agreement or retrospect to come up with new ways to improve or implement the practice. They can attempt to self-organize, but it's beyond their control much of the time.

Structure

So I turn to structure. If I structure the team, protect the team, and support them, they have the best chance for success. Not all teams will succeed. Some will improve slower than others. But they have a shot at changing their own culture. We can hold them accountable for their outcome because we have enabled them to do so. They can become generalists over time so when life events happen, the team can cover and prop up their team member. When attrition occurs, the team can absorb the change more readily.

Assuming that all made sense, what the heck is preventing stabilization of teams?

To list a few:

1.  Focus on individual utilization.

Some organizations focus on maximizing individual utilization; they even overbook. In large PMO organizations that focus on utilization of individuals, there is a sense that team members aren't always working. This just isn't accurate. While in the short term a team member or two can be underutilized, that is a maturity-of-practice issue that can be overcome. Utilization can indeed be improved. Not only that, but the people managing how much individuals are used across the organization can focus on something else, because utilization is stable too. Knowing how much capital investment you have vs. opex is much more valuable than scheduling someone's percentage of work on a feature.

2. Side Projects

Kill them with fire. Actually no… some side projects make great candidates for the backlog of the organization. Providing a single source of truth gives clarity to teams. Tech Debt can be prioritized alongside features. Teams need to be able to quantify the value. That's generally easy to teach them. In the end, they will find that they get much more done. On another note, if the team is receiving work from the side, on the down low, that needs to stop. It's an impediment to Team Stability. Find the source, figure out why it's happening, and eradicate it. That's not simple; people have side projects for a reason. They are trying to solve a problem. Figure out if it's an education issue, an alignment issue, or something else.

3. Dependencies on other teams

Dependencies are always a truth and a cost of doing business, in my opinion. There are some really cool things we can do to get rid of them, but in the meantime, we need to shoot for the most capable team with the fewest dependencies on other teams that we can. Capability modeling can be a life-raft to help with this. Structuring around your existing capabilities and enabling the teams by giving them what they need to take care of those capabilities is critical to predictability. Dependencies still need to be managed, but not as much if we are smart about how we staff the team to enable them and figure out capability ownership by the team.

Stable Teams are a non-negotiable part of a predictable system.  Sure, there are outliers, but by and large, stable teams are one of the biggest ways you can help yourself and your organization.  If you can’t figure out how to do it or are not empowered, get help.  And remember, this isn’t just about Scrum teams, it’s about teams.

 

The post Stable Teams – Predictability Edition appeared first on LeadingAgile.

Categories: Blogs

Cognitive Complexity, Because Testability != Understandability

Sonar - Wed, 12/07/2016 - 13:35

Thomas J. McCabe introduced Cyclomatic Complexity in 1976 as a way to guide programmers in writing methods that “are both testable and maintainable”. At SonarSource, we believe Cyclomatic Complexity works very well for measuring testability, but not for maintainability. That’s why we’re introducing Cognitive Complexity, which you’ll begin seeing in upcoming versions of our language analyzers. We’ve designed it to give you a good relative measure of how difficult the control flow of a method is to understand.

Cyclomatic Complexity doesn’t measure maintainability

To get started let’s look at a couple of methods:

int sumOfPrimes(int max) {              // +1
  int total = 0;
  OUT: for (int i = 1; i <= max; ++i) { // +1
    for (int j = 2; j < i; ++j) {       // +1
      if (i % j == 0) {                 // +1
        continue OUT;
      }
    }
    total += i;
  }
  return total;
}                  // Cyclomatic Complexity 4
 
  String getWords(int number) { // +1
    switch (number) {
      case 1:                   // +1
        return "one";
      case 2:                   // +1
        return "a couple";
      default:                  // +1
        return "lots";
    }
  }        // Cyclomatic Complexity 4

These two methods share the same Cyclomatic Complexity, but clearly not the same maintainability. Of course, this comparison might not be entirely fair; even McCabe acknowledged in his original paper that the treatment of case statements in a switch didn't seem quite right:

The only situation in which this limit [of 10 per method] has seemed unreasonable is when a large number of independent cases followed a selection function (a large case statement)...

On the other hand, that's exactly the problem with Cyclomatic Complexity. The scores certainly tell you how many test cases are needed to cover a given method, but they aren't always fair from a maintainability standpoint. Further, because even the simplest method gets a Cyclomatic Complexity score of 1, a large domain class can have the same Cyclomatic Complexity as a small class full of intense logic. And at the application level, studies have shown that Cyclomatic Complexity correlates to lines of code, so it really doesn't tell you anything new.

Cognitive Complexity to the rescue!

That's why we've formulated Cognitive Complexity, which attempts to put a number on how difficult the control flow of a method is to understand, and therefore to maintain.

I'll get to some details in a minute, but first I'd like to talk a little more about the motivations. Obviously, the primary goal is to calculate a score that's an intuitively "fair" representation of maintainability. In doing so, however, we were very aware that if we measure it, you will try to improve it. And because of that, we want Cognitive Complexity to incent good, clean coding practices by incrementing for code constructs that take extra effort to understand, and by ignoring structures that make code easier to read.

Basic criteria

We boiled that guiding principle down into three simple rules:

  • Increment when there is a break in the linear (top-to-bottom, left-to-right) flow of the code
  • Increment when structures that break the flow are nested
  • Ignore "shorthand" structures that readably condense multiple lines of code into one
Examples revisited

With those rules in mind, let's take another look at those first two methods:

                                // Cyclomatic Complexity    Cognitive Complexity
  String getWords(int number) { //          +1
    switch (number) {           //                                  +1
      case 1:                   //          +1
        return "one";
      case 2:                   //          +1
        return "a couple";
      default:                  //          +1
        return "lots";
    }
  }                             //          =4                      =1

As I mentioned, one of the biggest beefs with Cyclomatic Complexity has been its treatment of switch statements. Cognitive Complexity, on the other hand, only increments once for the entire switch structure, cases and all. Why? In short, because switches are easy, and Cognitive Complexity is about estimating how hard or easy control flow is to understand.

On the other hand, Cognitive Complexity increments in a familiar way for the other control flow structures: for, while, do while, ternary operators, if/#if/#ifdef/..., else if/elsif/elif/..., and else, as well as for catch statements. Additionally, it increments for jumps to labels (goto, break, and continue) and for each level of control flow nesting:

                                // Cyclomatic Complexity    Cognitive Complexity
int sumOfPrimes(int max) {              // +1
  int total = 0;
  OUT: for (int i = 1; i <= max; ++i) { // +1                       +1
    for (int j = 2; j < i; ++j) {       // +1                       +2 (nesting=1)
      if (i % j == 0) {                 // +1                       +3 (nesting=2)
        continue OUT;                   //                          +1
      }
    }
    total += i;
  }
  return total;
}                               //         =4                       =7

As you can see, Cognitive Complexity takes into account the things that make this method harder to understand than getWords - the nesting and the continue to a label. So while the two methods have equal Cyclomatic Complexity scores, their Cognitive Complexity scores clearly reflect the dramatic difference between them in understandability.

In looking at these examples, you may have noticed that Cognitive Complexity doesn't increment for the method itself. That means that simple domain classes have a Cognitive Complexity of zero:

                              // Cyclomatic Complexity       Cognitive Complexity
public class Fruit {

  private String name;

  public Fruit(String name) { //        +1                          +0
    this.name = name;
  }

  public void setName(String name) { // +1                          +0
    this.name = name;
  }

  public String getName() {   //        +1                          +0
    return this.name;
  }
}                             //        =3                          =0

So now class-level metrics become meaningful. You can look at a list of classes and their Cognitive Complexity scores and know that when you see a high number, it really means there's a lot of logic in the class, not just a lot of methods.

Getting started with Cognitive Complexity

At this point, you know most of what you need to get started with Cognitive Complexity. There are some differences in how boolean operators are counted, but I'll let you read the white paper for those details. Hopefully, you're eager to start using Cognitive Complexity, and wondering when tools to measure it will become available.

We'll start by adding method-level Cognitive Complexity rules in each language, similar to the existing ones for Cyclomatic Complexity. You'll see this first in the mainline languages: Java, JavaScript, C#, and C/C++/Objective-C. At the same time, we'll correct the implementations of the existing method level "Cyclomatic Complexity" rules to truly measure Cyclomatic Complexity (right now, they're a combination of Cyclomatic and Essential Complexity.)

Eventually, we'll probably add class/file-level Cognitive Complexity rules and metrics. But we're starting with Baby Steps.

Categories: Open Source

Darwin Perspective on Agile Architecture

TV Agile - Wed, 12/07/2016 - 11:59
Through the comparison of Darwin's theory of evolution to software development, we will try to find an answer to how to build sustainable, agile and antifragile software systems, whose software architecture is responsive and adaptable to the challenges of volatile business environments. Video producer: http://aceconf.com/
Categories: Blogs
