
Feed aggregator

All people are awesome. Always. #sgza

Growing Agile - Thu, 11/10/2016 - 10:00
Danie Roux gave the opening keynote at Scrum Gathering South Africa 2016 (#SGZA). You can listen to the talk here: https://t.co/yoMup5vyye. He said: All people are always awesome. In all ways. And always. One of the turning points for me as a coach was a session with Lyssa Adkins when I realized I thought there were […]
Categories: Companies

Links for 2016-11-09 [del.icio.us]

Zachariah Young - Thu, 11/10/2016 - 10:00
Categories: Blogs

Thank You for Making the Inaugural SAFe Summit a Success!

Agile Product Owner - Thu, 11/10/2016 - 05:38

Two short weeks ago, over 400 attendees from 17 countries trekked to Colorado for the first SAFe Summit, making it the largest gathering of its kind to focus exclusively on SAFe and its community of practice. It was an inspiring moment in SAFe’s history, humbling for those of us who have a hand in building the Framework, and one that we hope will have a long-lasting positive impact on those who attended.

Summit ballroom

Given SAFe’s maturity in the marketplace—now with 80,000 practitioners worldwide—it was important for us to take this major step to support the community and the enterprises they serve, and to provide a platform where people charged with making SAFe succeed in the field could provide in-depth feedback to its handlers. Our thought was to hold a ‘big tent’ event for everyone engaged with SAFe—partners, practitioners, instructors, consultants, and enterprise business leaders—because it literally takes a village to support a successful implementation, and that village can only remain healthy and vibrant as long as it collaborates and allows for new ideas and knowledge to flow through it.

Open space: this is what community contribution looks like.

As we know from PI Planning, there’s no substitute for coming together under one roof, and the Summit was a stellar example of that. We saw high-energy engagement across the board, new connections forged, and we gained substantive insights from attendees and speakers that wouldn’t have been possible otherwise.

A brief wrap-up can’t do justice to what took place during the four days of the Summit, but here are some themes and topics that we’ll be thinking about and working on in 2017:

Highlights & Themes

Ryan Martens

Empathy. Our friend Ryan Martens (former CTO of Rally Software) took the stage in a super-hero cape and channeled the great Jean Tabaka when he challenged us to consider the idea that empathy is the missing element in many workplaces. Through this lens, he suggested that you can make technical agility stick by affecting the heart and DNA of the overall business. How do we engage leadership and executives? Through empathy and understanding their context.

“Change the heart to change the system.” —Ryan Martens, ‘Agile Hippie’ and Beekeeper

SAFe thrives in a strong learning culture. The four enterprise adopters who shared their SAFe journeys had fascinating stories to tell. Their organizations couldn’t have been more different from each other in terms of size, industry, and development challenges (insurance, pharmaceutical, defense, telecommunications), but they all had something in common: a strong commitment to supporting a learning culture where practitioners are encouraged to continually acquire new knowledge and skills relevant to their role in the implementation.

“Long-term coaching is essential, especially dealing with non-software teams who are new to the terminology.” —Yael Man, Elbit

SAFe Fellows Jennifer Fawcett and Inbar Oren, happy to get rapid feedback on Essential SAFe.

Essential SAFe. Affectionately called ‘Little-Big SAFe,’ Essential SAFe debuted as a concept earlier this year and was a hot session topic at the Summit. It’s a set of minimal practices without which SAFe might no longer be ‘safe.’ It can provide guidance for organizations that are customizing SAFe but don’t want to stray so far that they lose its advantages, and it can act as an easy entry point for organizations that aren’t ready for full-on SAFe but want to start practicing and getting the benefits as soon as possible. We appreciate all the feedback we received at the Summit, and will be introducing a fully-supported version of Essential SAFe in the coming months. In the meantime, you can read more about it here.

“A powerful but more accessible perspective to achieving enterprise agility.” —Aspire Consulting

Value Streams—can’t live without ’em. There was much furious note-taking in the two sessions that had to do with value streams. That’s because it’s no secret that the flow of value is entirely dependent on how well you map, analyze, and apply your value streams. Without that broader context, your planning, execution, and I&A will always fall short of their potential. As Ryan Martens said, “This little tab changes everything.”

SPCTs and SPCT candidates at the SAFe Summit

The Making of SPCTs. SPCTs are the only ones who can train and certify SPCs, making them key drivers of the health and well-being of the entire SAFe community. We were happy to see so many SPCTs and SPCT candidates join us for working sessions dedicated to ensuring that the SPCT certification program maintains the most rigorous and meaningful standards for improving the quality of SPCs and SAFe.

 

New Course: SAFe 4.0 Scrum Master

Scrum Masters are critical players in a Lean-Agile enterprise and can make all the difference when it comes to an effective SAFe implementation. To that end, we released a new course at the Summit, SAFe 4.0 Scrum Master. This course rounds out a full team training curriculum, including our popular SAFe 4.0 for Teams and SAFe 4.0 Product Manager/Product Owner courses. In addition, for those looking to significantly advance their Scrum Master skills, we recently introduced the SAFe 4.0 Advanced Scrum Master course. The entry-level SAFe 4.0 Scrum Master serves as a prerequisite for that advanced course. View public classes and learn more about the course at scaledagile.com/scrum-master.

Two New Books from SAFe Authors

The Rollout, by SAFe Fellow Alex Yakyma. Based on dozens of real examples, this novel about leadership and building a Lean-Agile Enterprise with SAFe provides a war chest of tools and techniques every change agent and Lean-Agile leader can use to succeed with their SAFe implementation. Learn more at therolloutbook.com.

Tribal Unity, by SPCT Em Campbell-Pretty. Em does a great job of describing how a Lean-Agile culture can be fostered directly with the ‘team-of-agile-teams,’ the essential building block of enterprise agility. Learn more here.

Downloads, Videos, and 2017 Summit

Go to scaledagile.com/safe-summit to find many of the presentations from our Summit speakers, as well as an inspiring collection of on-site video interviews provided by our Summit Media Partner, AgileAmped.

To all of our attendees, speakers, exhibitors, partners, volunteers, and facilitators who made this first event so successful, we want to extend our deepest appreciation—especially to the four enterprise adopters, for sharing their stories of challenges and growth with SAFe.

We’ll be coming together under one roof again next year, so stay tuned for an announcement on dates and location by joining the Summit email list.

Stay SAFe!
—Dean and the entire team from Scaled Agile

Categories: Blogs

What to Do When Scrum Doesn’t Work

Scrum Expert - Wed, 11/09/2016 - 20:55
Henrik Kniberg goes through a handful of concrete steps for diagnosing and debugging Scrum problems. He talks about using the process wrong, blaming the messenger, being impatient, not adapting the process or using the wrong process. Henrik Kniberg also introduces some new Scrum terminology such as Scrumdamentalism, Sadoscrumism, and Scrumbutophobia. Video producer: http://agileindia.org
Categories: Communities

Continuous Delivery is More Than Tools: It is a Culture

TV Agile - Wed, 11/09/2016 - 19:47
Some enterprise IT organisations are adopting Continuous Delivery and DevOps thinking that tools are enough to do the job and that, all of a sudden, they will go faster to market and build quality in because they are automating their existing delivery process. Just throwing tools at the problem is not enough; to be successful, organisations […]
Categories: Blogs

Play4Agile, February 16-19, 2017, Johannesberg, Germany

Scrum Expert - Wed, 11/09/2016 - 08:00
The Play4Agile conference is a four-day event taking place in Germany. This conference is for Agile, Scrum and Lean coaches, facilitators, game and innovation experts who want to exchange questions, ideas and experiences on using games in Agile project management teams and organizations. Play4Agile follows an unconference format where participants create their own conference by proposing sessions on the Agile topics they want to discuss, games they want to play, game ideas they want to develop, and evening activities. The Play4Agile conference provides an open playground to inspire each other and to learn how using serious games can help us achieve our goals. Play4Agile is a gathering of experienced peers from all over the world to create and play games in an inspiring environment. Web site: http://play4agile.org/ Location for the Play4Agile conference: Seminar Center Rückersbach, Kolpingstraße 1, 63867 Johannesberg, Germany
Categories: Communities

Agile Teams Coaching in Methods & Tools Fall 2016 issue

DevAgile.com - Tue, 11/08/2016 - 22:45
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Fall 2016 issue that discusses alternatives to acceptance tests, Agile transformation, software project estimation, Agile coaching and the following free software tools ...
Categories: Communities

Week-long Sprints Work for Weekly Newsletters


What I like most about Agile thinking is the principle of taking action with very little planning. This philosophy of learn-as-you-go creates space and time for the team to experiment with ideas to create a successful product.

For the past year, I have participated in an agile experiment of sorts. Basically, the goal was to write a weekly newsletter. But more specifically, the intention was to create meaningful content for the newsletter’s readers that would empower them to continue making positive change in their organizations by applying Agile methods.

Six weeks after starting the newsletter, I attended my first Certified ScrumMaster (CSM) training in Toronto, Ontario. At first, I thought I could manage the newsletter content and delivery using Scrum. I quickly realized I couldn’t. Even if I viewed myself as a ScrumMaster, I wasn’t working on a Scrum team. There was no Product Owner. It couldn’t be run using Scrum.

However, I realized there was something essential I could glean from Scrum: the idea of Sprints. I realized right away that if I viewed the creation and delivery of the newsletter in one-week Sprints, I could likely be successful. And indeed, this application of a Scrum method was extremely useful.

Thinking about delivering a newsletter in one-week Sprints helped me to think about the smallest amount of content that could be easily and predictably delivered each week. As my capacity and the team’s capacity improved, the level of complexity could increase as well.

As the level of complexity increased, the newsletter itself improved in quality.

I would like to write more about how a newsletter can be created and distributed using Sprints and other Agile methods because doing it this way helped me to stay adaptive & flexible as the newsletter was refined.

5 keys for using Sprints to create & distribute a newsletter

  1. Understand “Done Done!” – Before CSM training, the newsletter was “done” when I pressed ‘send’ on my computer. When I better understood the meaning of “Done Done” in a Sprint, I changed my thinking and behaviour. When I sent the first draft to be proofread, it was “Done”; when it came back edited and I had made the final revisions, it was ready for scheduling. When I pressed “Schedule,” the newsletter was “Done Done.” I would plan to schedule the newsletter three days before it was expected to be released. That gave me three days of ‘buffer’ to accommodate last-minute changes, if necessary. I was learning to become more Agile.
  2. Learn to Accommodate Last-Minute Changes – If last-minute changes cannot be easily accommodated, then the product delivery is likely not Agile. When I started creating and distributing a weekly product with the expectation that things could change at any time, I learned to establish a “bare minimum” which could be produced even if changes occurred. This gave me the ability to be flexible and adaptive and much more Agile.
  3. Be Agile; Don’t Do Agile – When I went to CSM training, I thought I would learn how to do Agile things on my team. When I completed the training and started applying Sprints to the weekly distribution of a newsletter, then I realized I must “Be” Agile in my approach, in my communications, and in my creation of the product. I learned that Agile is really a state of mind and not a “thing” at all. Agile is about continuous action, reflection and planning with an open-mind and a readiness to always learn and grow and change.
  4. Action, Reflection, Planning – Before using one-week Sprints, I didn’t give myself enough time to reflect and plan the next Sprint. I had a backlog with enough items to keep me busy for 6 months. My work-in-progress was a nightmare and unmanageable. I had four weeks’ worth of drafts saved and often got confused about which content was going out when. Establishing a regular weekly cadence helped me take control of this “mess” by taking small action steps, reflecting on them weekly, and using the learning to plan the next steps. This revolutionized my work.
  5. Prepare For Growth – When a product is delivered successfully with Sprints, it keeps getting better and better. This leads to goals being met and growth happening on the team. In this case, it led to increasing numbers of subscribers and the establishment of a collaborative team approach to creating and distributing the newsletter. Without Sprints, without an Agile mindset, I’m absolutely certain the goals would not have been achieved and growth wouldn’t have occurred. But with Sprints, things just keep getting better and better every week. I love it!

******************************************************

If you’d like to subscribe to the weekly newsletter I mentioned here, you can do so at this link.

Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Week-long Sprints Work for Weekly Newsletters appeared first on Agile Advice.

Categories: Blogs

Why Tailored Agile Transformation Solutions Are More Effective, Less Expensive and Less Risky

NetObjectives - Tue, 11/08/2016 - 18:36
Our contention and experience is that solutions tailored to an organization’s current situation, challenges, and culture can be more effective and less costly than predefined ones that are applied out of the box. While there are risks to the former, these can be avoided. The different set of risks that comes with predefined solutions, ironically, can only be avoided by tailoring them. This article...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Registration Open for January 2017 Writing Workshops

Johanna Rothman - Tue, 11/08/2016 - 15:23

If you are thinking about writing more or better for next year, take a look at my writing workshops.

I am offering Writing Workshop 1: Write Non-Fiction to Enhance Your Business and Reputation again, so you can learn how to create a daily writing habit, write in small chunks, and start to publish.

I am offering a new writing workshop for people who want to publish more (and be paid for their writing): Writing Workshop 2: Secrets of Successful Non-Fiction Writers.

Take Workshop 1 if you are unsure of your writing. It’s a terrific overview and will help you start with a regular writing habit.

Take Workshop 2 if you are ready to take your writing to the next level. This workshop is about getting paid for your writing, and publishing more often and broadly.

If you’re not sure which workshop is right for you, email me and we can discuss what would work for you.

Super early bird registration ends November 18, 2016 for Workshop 1. Super early bird registration ends November 25, 2016 for Workshop 2.

If you are thinking of writing “more” in 2017, commit now. Make it happen for you.

Categories: Blogs

2 great workshops on Nov. 22nd

Agile Ottawa - Tue, 11/08/2016 - 14:33
Agile Ottawa runs a few interesting workshops aside from their monthly Meetup and conference. This year we are happy to announce 2 great workshops on Nov. 22nd. Collaborative Decision Making: This creative collaboration workshop provides tools that address such issues … Continue reading →
Categories: Communities

Usable software over working software – Guest Post

Growing Agile - Tue, 11/08/2016 - 12:42
There are common practices I use every day. As part of my toolkit I spend a lot of time and effort on ‘Trying new ways of doing things’ and ‘Asking questions when I do not understand something’. These practices provide me insight into what works and what doesn’t. Answers become conversations about what is essential […]
Categories: Companies

How Does That Feel?

Agile Tools - Tue, 11/08/2016 - 07:15


“We cannot direct the wind, but we can adjust the sails.”
-Bertha Calloway

I was out racing for the first time in a long time this weekend. I was rusty and sailing on a boat that I was unfamiliar with. Furthermore, I didn’t know anyone on the crew. So I started doing what I like to see others do when racing: I just started talking about what I was seeing happening around me.

“Do you see that boat over there?”

“Hey, look, there’s a puff of wind over there.”

“It looks like the breeze might be filling in from over there.”

I kept that little monologue up, not constantly, but on a fairly regular basis. Just letting others know what I think I’m seeing. At some point during the race, one of the guys looks at me and says, “Tom, I hear you talking about pressure over here, and puffs over there, and I’m not really sure what you are talking about. How do you know there’s really wind over there?”

That’s a great question. And there are a couple of answers. The first answer is that I simply don’t know. I’m really just guessing. It’s the wind that we are talking about after all, and I have no more special insight than the next guy when it comes to divining the nature of the winds. However, I do have a few years of experience, and it turns out that more often than not I tend to get it right. That’s because I’m looking for certain signs on the water that indicate what might be the presence of wind. Something like a telltale pattern of ripples on the surface can indicate a small downdraft…or it could indicate a small school of fish ruffling the water. Now I usually know the difference, but I could be wrong. Trust me, it happens all the time. But I don’t worry about that when I’m racing. I think there is value in sharing all observations about the race course that help to give my team a tactical advantage.

People tend to assume that the person driving the boat, usually a very experienced and capable individual, knows what is best and has a good grip on the situation on the water. Nothing could be further from the truth. It turns out that when you are the skipper, you often have your head stuck in the boat. It’s not the skipper’s fault – it comes with the job. You are trying to steer to the telltales on the sails. You are reacting to pressure on the tiller. You are worried about the next mark rounding. But you can’t look at everything at once. That’s where a crew that can feed you that information is very valuable. It also helps if they share the information with each other. After all, they are no more likely to get it right than anyone else. That’s OK as long as there’s more than one set of eyes looking at the issue. So if I think I see a puff and I call it out, another team member may disagree and point out the school of fish just beneath the surface of the water that I missed. The dialog is self-correcting. It’s a constant patter of conversation where we share our impressions, some false, some true, that help us to confirm or deny our race strategy.

The other thing that I frequently do is ask questions like, “how does that feel?” Again, I have lots of experience sailing, but I’ve never sailed on this boat before. So I make changes to the sail trim and then I ask, “Did that help?” Maybe it does, or maybe not.

So not only am I talking about the physical nature of the race course, but I’m also checking in with my crewmates. Now I don’t do this out of any overabundance of concern about their well-being. It’s much more practical than that. My actions are impacting their performance. Now maybe they will tell me how they are impacted or maybe they won’t. In fact, it’s often the case that people won’t tell you unless you ask. So I ask a lot. I change the sail trim and I check back with the skipper, “How does that feel now? Better? Worse?” I check with the guy trimming the main, “How about you?” Sometimes the answer is just a shrug. That’s fine, that’s good feedback too.

I’ve noticed a curious thing that seems to happen. As you model this behavior, others start to pick it up and do it too. At the start of the race, maybe I’m the only guy who’s talking. Two hours later as we cross the finish line, people are calling out puffs and asking for feedback from each other. People seem to pick up on it pretty quickly if it’s useful. And if not, well, then maybe you don’t get invited back. Like I said, I don’t always get it right.

I wonder if the same sort of communication is useful for our development teams. What sort of things should we be talking about? What kind of observations are useful? Where are the ripples on the water for a software development team? I know they are racing – that much is for sure. Is the boss’s door closed? Is Joe late getting into the office? Does that meeting have an agenda? I don’t know; I’m guessing that some of that is water cooler conversation that probably isn’t worth a whole lot. On the other hand, what if I come into the office and mention that one of our biggest competitors just made a key acquisition? That’s going to send a few ripples through the water. What if there is an issue in production? More ripples. Maybe even some waves.

So there may indeed be some utility in sharing your observations about the business, the technology, the current state of the production system. It’s all wind on the water. It’s tactical information that may or may not be useful. But you are definitely better off talking about it.

So what about asking questions? You know, like, “How does that feel?” Boy, there’s a question that software guys just absolutely love to get asked. How often are we checking in to get feedback on how our actions have affected those around us? Once a sprint? Of course I can’t wait that long in sailing, because the race is long over by then. The feedback would hardly even be relevant if I waited that long. In order for us to fine-tune our performance and work together as a team, we need to be constantly engaging in a dialog that tests our assumptions about the value of the changes we are making. Did that help? How does that feel? It’s a fuzzy sort of qualitative conversation that I’m sure makes some folks uncomfortable. But maybe that’s because we’re using it wrong.

You see, when I ask the helmsman how a change feels, he knows what I’m asking about. He knows I don’t give a damn about his emotional state. I want to know if the boat just got easier to steer. Did the boat speed up? Did it slow down? Perhaps the same should apply to software teams. We need to make sure that we understand how the conversation is intended. When I ask how things feel, it’s not necessarily the touchy feely question you might think. Rather, I might really be interested in how fast you think you are going.

So, how does that feel?


Filed under: Agile Tagged: communication, sailing, talking, wind
Categories: Blogs

Women In Agile

Scrum Expert - Mon, 11/07/2016 - 18:25
Women In Agile is a movement partly supported by the Agile Alliance that aims to get more women involved in the Agile community through blogging, speaking at events and networking. On the Women in Agile web site you will find links to various resources about women in Agile, such as a list of blogs by Agile women. There are also various pointers to the Women In Agile discussions held at the Agile conference and their videos. The purpose of the Women in Agile initiative is to encourage, support, and expand women’s presence in the Agile community. The Agile Alliance organized a Women In Agile Workshop. Get all the information and participate in this community at http://womeninagile.com/
Categories: Communities

The state of mainframe continuous delivery

Leading Agile - Mike Cottmeyer - Mon, 11/07/2016 - 15:00
What’s in this article

Mainframe continuous delivery overview
The literature
Issues with Mainframe-hosted solutions
Observations from the field
A glimpse into the future

Mainframe Continuous Delivery Overview

Continuous delivery is an approach to software delivery that seeks to break down the rigid series of phases through which software normally passes on the journey from a developer’s workstation to a production environment, so that value can be delivered to stakeholders with as little delay as possible. Wikipedia has a nice summary of continuous delivery that includes a sequence diagram showing a simplified continuous delivery process.

Practical continuous delivery for the mainframe environment has long been considered especially challenging. When we need to support applications that cross platforms, from mobile devices to web browsers to mid-tier systems to back-end systems, the challenges become enormous.

Here’s a simplified depiction of a generic continuous delivery process:

Generic continuous delivery process

That picture will be familiar to developers who work on front-end stacks, as it has become relatively straightforward to set up a CD pipeline using (for instance) GitHub, Travis CI, and Heroku (or similar services).
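As a rough illustration of the generic picture above (not drawn from any particular shop or CI product), the stages can be thought of as an ordered script that fails fast. The stage names and shell commands below are hypothetical placeholders; a real pipeline would live in a CI service rather than a hand-rolled script.

    # Minimal sketch of a generic CD pipeline as an ordered set of stages.
    # The commands are illustrative placeholders, not a real project's build.
    import subprocess
    import sys

    STAGES = [
        ("commit build",         "make build"),
        ("unit tests",           "make unit-test"),
        ("integration tests",    "make integration-test"),
        ("deploy to staging",    "./deploy.sh staging"),
        ("acceptance tests",     "make acceptance-test"),
        ("deploy to production", "./deploy.sh production"),
    ]

    def run_pipeline():
        for name, command in STAGES:
            print("=== stage: %s ===" % name)
            result = subprocess.run(command, shell=True)
            if result.returncode != 0:
                # Fail fast: a broken stage stops the pipeline, so nothing
                # later (including the production deploy) is attempted.
                print("stage '%s' failed; stopping pipeline" % name)
                sys.exit(result.returncode)
        print("all stages passed; release candidate delivered")

    if __name__ == "__main__":
        run_pipeline()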

When the “stack” is extended to the heterogeneous technologies commonly found in mainframe shops, here’s where we are, generally speaking:

Mainframe continuous delivery process

Many mainframe shops have mature tooling in place to support the migration of software from one environment to the next in their pipeline, as suggested by the green circles containing checkmarks.

The yellow “warning” triangles show steps in the CD pipeline where mainframe shops seem to have limited support as of this year. Notice that most of these steps are related to automated testing of one kind or another. On the whole, mainframe shops lack automated tests. Almost all testing is performed manually.

The first step in the diagram—version control—is shown with a yellow triangle. Most mainframe shops use version control for mainframe-resident code only. A separate version control system is used for all “distributed” code. The use of multiple version control systems adds a degree of complexity to the CD pipeline.

In addition, mainframe shops tend to use version control products that were originally designed to take snapshots of clean production releases, to be used for rollback after problematic installs. These products may or may not be well-suited to very short feedback cycles, such as the red-green-refactor cycle of test-driven development.

Mainframe shops are far behind in a few key areas of CD. They typically do not create, provision, and launch test environments and production environments on the fly, as part of an automated CD process. Instead, they create and configure static environments, and then migrate code through those environments. They don’t switch traffic from old to new targets because there is only one set of production targets.

The environments are configured manually, and the configurations are tweaked as needed to support new releases of applications. Test environments are rarely configured identically to production environments, and some shops have too few test environments for all development teams to share, causing still more delay in the delivery of value.

Database schemas are typically managed in the same way as execution environments. They are created and modified manually and tweaked individually. Test databases are often defined differently from production ones, particularly with respect to things like triggers and referential integrity settings.

Test data management for all levels of automated tests is another problematic area. Many shops take snapshots of production data and scrub it for testing. This approach makes it difficult, if not impossible, to guarantee that a given test case will be identical every time it runs. The work of copying and scrubbing data is often handled by a dedicated test data management group or team, leading to cross-team dependencies, bottlenecks, and delays.
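One way around the repeatability problem is to generate small, deterministic test datasets from scratch instead of scrubbing production snapshots. The sketch below is hypothetical: it builds fixed-width records (the kind a QSAM or VSAM file might hold) from hard-coded values, so every test run starts from exactly the same data. The record layout and field names are invented for the example.

    # Hypothetical sketch: build a deterministic fixed-width test file
    # instead of scrubbing a production snapshot. The layout is invented.
    RECORD_LAYOUT = [
        ("customer_id", 8),    # numeric, zero-padded
        ("name",        20),   # text, left-justified, space-padded
        ("balance",     9),    # numeric, zero-padded (in cents)
    ]

    TEST_ROWS = [
        {"customer_id": "1", "name": "ALICE", "balance": "150000"},
        {"customer_id": "2", "name": "BOB",   "balance": "25"},
    ]

    def to_record(row):
        fields = []
        for field, width in RECORD_LAYOUT:
            value = row[field]
            if field in ("customer_id", "balance"):
                fields.append(value.rjust(width, "0"))   # numeric fields
            else:
                fields.append(value.ljust(width))        # text fields
        return "".join(fields)

    def write_test_file(path):
        # Same input every run, so every test starts from identical data.
        with open(path, "w", newline="") as f:
            for row in TEST_ROWS:
                f.write(to_record(row) + "\n")

    if __name__ == "__main__":
        write_test_file("customer.test.dat")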

Finally, most mainframe shops have no automated production system monitoring in place. They deal with production issues reactively, after a human notices something is not working and reports it to a help desk, or after a system crashes or hangs. Should they need to roll back a deployment, the effort becomes an “all hands on deck” emergency that temporarily halts other value-add work in progress.

The literature

In reading published material on the subject of agile development / continuous deployment / DevOps for mainframe environments, I find two general types of information:

  1. Fluffy articles that summarize the concepts and admonish mainframe managers and operations to consider the importance of shortening lead times and tightening feedback loops in the delivery pipeline. None of these describes any working implementation currently in place anywhere.
  2. Articles crafted around specific commercial software products that support some subset of a continuous delivery pipeline for mainframe systems. None of these describes any working implementation currently in place anywhere.

As a starting point for learning about the challenges of continuous delivery in a mainframe environment, these types of articles are fine. There are a few shortcomings when it comes down to brass tacks.

Fluffy introductory articles

The limitations in the first type of article are easy to see. It’s important to understand the general concepts and the platform-specific issues at a high level, but after that you really need something more concrete.

Sometimes these very general articles remind me of the “How To Do It” sketch from Monty Python.

Alan: …here’s Jackie to tell you how to rid the world of all known diseases.

Jackie: Well, first of all become a doctor and discover a marvelous cure for something, and then, when the medical world really starts to take notice of you, you can jolly well tell them what to do and make sure they get everything right so there’ll never be diseases any more.

Alan: Thanks Jackie, that was great. […] Now, how to play the flute. (picking up a flute) Well you blow in one end and move your fingers up and down the outside.

All well and good, except you can’t really take that advice forward. There just isn’t enough information. For instance, it makes a difference which end of the flute you blow in. Furthermore, it’s necessary to move your fingers up and down the outside in a specific way. These facts aren’t clear from the presentation. The details only get more and more technical from there.

Articles promoting commercial products

The second type of article provides information about concrete solutions. Companies have used these commercial solutions to make some progress toward continuous delivery. In some cases, the difference between the status quo ante and the degree of automation they’ve been able to achieve is quite dramatic.

Here are a few representative examples.

You may know the name Microfocus due to their excellent Cobol compiler. Microfocus has picked up Serena, a software company with several useful mainframe products, to bolster their ability to support mainframe customers.

It’s possible to combine some of these products to construct a practical continuous delivery pipeline for the mainframe platform:

  • Serena ChangeMan ZMF with the optional Enterprise Release extension
  • Serena Release Control
  • Serena Deployment Automation Tool
  • Microfocus Visual COBOL

Compuware offers a solution that, like Microfocus’ solution, comprises a combination of different products to fill different gaps in mainframe continuous delivery:

  • Compuware ISPW
  • Compuware Topaz Workbench
  • XebiaLabs XL Release

IBM, the source of all things mainframe, can get you part of the way to a continuous delivery pipeline, as well. The “IBM Continuous Integration Solution for System Z” comprises several IBM products:

  • Rational Team Concert
  • Rational Quality Manager
  • Rational Test Workbench
  • Rational Integration Tester (formerly GreenHat)
  • Rational Development and Test Environment (often called RD&T)
  • IBM UrbanCode Deploy

Any of those offerings will get you more than half the pieces of a continuous delivery pipeline; different pieces in each case, but definitely more than half.

The software companies that focus on the mainframe platform are sincere about providing useful products and services to their customers. Even so, articles about products are sales pitches by definition, and a sales pitch naturally emphasizes the positives and glosses over any inconvenient details.

Issues with mainframe-hosted solutions

There are a few issues with solutions that run entirely, or almost entirely, on the mainframe.

Tight coupling of CD tooling with a single target platform

Ideally, a cross-platform CD pipeline ought to be managed independently of any of the production target platforms, build environments, or test environments. Only those components that absolutely must run directly on a target platform should be present on that platform.

For example, to deploy to a Unix or Linux platform it’s almost always possible to copy files to target directories. It’s rarely necessary to run an installer. Similarly, it’s a generally-accepted good practice to avoid running installers on any production Microsoft Windows instances. When Windows is used on production servers, it’s usually stripped of most of the software that comes bundled with it by default.

You don’t want to provide a means for the wrong people to install or build code on servers. At a minimum, code is built in a controlled environment and vetted before being promoted to any target production environment. Even better, the code and the environment that hosts it are both created as part of the build process; there’s no target environment waiting for things to be installed on it.

This means the CD tooling—or at least the orchestration piece—runs on its own platform, separate from any of the development, test, staging, production, or other platforms in the environment. It orchestrates other tools that may have to run on specific platforms, but the process-governing software itself doesn’t live on any platform that is also a deployment target.

An advantage is that the build and deploy process, as well as live production resiliency support, can build, configure, and launch any type of environment as a virtual machine without any need for a target instance to be pre-configured with parts of the CD pipeline installed. For mainframe environments, this approach is not as simple but can extend to launching CICS regions and configuring LPARs and zOS-hosted Linux VMs on the fly.

A further advantage of keeping the CD tooling separate from all production systems is that it’s possible to swap out any component or platform in the environment without breaking the CD pipeline. With the commercial solutions available, the CD tooling lives on one of the target deployment platforms (namely, the mainframe). Should the day come to phase out the mainframe, it would be necessary to replace the entire CD pipeline, a core piece of technical infrastructure. The enterprise may wish to keep that flexibility in reserve.

It isn’t always possible to deploy by copying binaries and configuration files to a target system. There may be various reasons for this. In the case of the mainframe, the main reason is that no off-platform compilers and linkers can prepare executable binaries you can just “drop in” and run.

Mainframe compatibility options in products like Microfocus COBOL and Gnu COBOL don’t produce zOS-ready load modules; they provide source-level compatibility, so you can transfer the source code back and forth without any modifications. A build of the mainframe components of an application has to run on-platform, so at some point in the build-and-deploy sequence the source code has to be copied to the mainframe to be compiled.

This means build tools like compilers and linkers must be installed on production mainframes. That isn’t a problem, as mainframe systems are designed to keep build tools separate from production areas. But the fact that builds must run on-platform doesn’t mean the CD pipeline orchestration tooling itself has to run on-platform (except, maybe, for an agent that interacts with the orchestrator). For historical and cultural reasons, this concept can be difficult for mainframe specialists to accept.
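To make that split concrete, one common pattern is for the off-platform orchestrator (or its agent) to transfer source to a partitioned dataset and submit the compile-and-link job through the z/OS FTP server’s JES interface. The sketch below assumes that interface is enabled; the host name, credentials, dataset names, and JCL are placeholders, and this is only an outline of the idea, not a hardened implementation.

    # Hypothetical sketch: push COBOL source to the mainframe and submit a
    # compile job from an off-platform orchestrator, via the z/OS FTP/JES
    # interface. Host, credentials, and dataset names are placeholders.
    from ftplib import FTP
    import io

    HOST, USER, PASSWORD = "mainframe.example.com", "builduser", "secret"

    def upload_source(ftp, member_path, local_file):
        # member_path e.g. "'DEV.APP.COBOL(PAYCALC)'" -- a PDS member
        with open(local_file, "rb") as f:
            ftp.storlines("STOR %s" % member_path, f)

    def submit_job(ftp, jcl_text):
        # With FILETYPE=JES, a STOR submits the JCL to the job queue
        ftp.sendcmd("SITE FILETYPE=JES")
        ftp.storlines("STOR compile.jcl", io.BytesIO(jcl_text.encode("ascii")))
        ftp.sendcmd("SITE FILETYPE=SEQ")  # back to normal dataset transfers

    if __name__ == "__main__":
        ftp = FTP(HOST)
        ftp.login(USER, PASSWORD)
        upload_source(ftp, "'DEV.APP.COBOL(PAYCALC)'", "src/cobol/paycalc.cbl")
        with open("jcl/compile_paycalc.jcl") as f:
            submit_job(ftp, f.read())
        ftp.quit()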

Multiple version control systems

When you use a mainframe-based source code manager (Serena ChangeMan, CA-Endevor, etc.) for mainframe-hosted code, and some other version control system (Git, Subversion, etc.) for all the “distributed” source code, you have the problem of dual version control systems. Moving all the “distributed” code to the mainframe just for the purpose of version control surely makes no sense.

When your applications cut through multiple architectural layers, spanning mobile devices, web apps, Windows, Linux/Unix, and zOS, having dual version control systems significantly increases the likelihood of version conflicts and incompatible components being packaged together. Rollbacks of partially-completed deployments can be problematic, as well.

It’s preferable for all source code to be managed in the same version control system, and for that system to be independent of any of the target platforms in the environment. One of the key challenges in this approach is cultural, not technical. Mainframe specialists are accustomed to having everything centralized on-platform. The idea of keeping source code off-platform may seem rather odd to them.

But there’s no reason why source code has to live on the same platform where executables will ultimately run, and there are plenty of advantages to keeping it separate. Advantages include:

  • Ability to use off-platform development tools that offer much quicker turnaround of builds and unit tests than any on-platform configuration
  • Ability to keep development and test relational databases absolutely synchronized with production schema by building from the same DDL on the fly (assuming DB2 on all platforms)
  • Ability to keep application configuration files absolutely synchronized across all environments, as all environments use the same copy of configuration files checked out from the same version control system
  • other advantages along the same general lines

If you assume source code management systems are strictly for programming language source code, the above list may strike you as surprising. Actually, any and all types of “source” (in a general sense) ought to be versioned and managed together. This includes, for all target platforms that host components of a cross-platform application:

  • source code
  • application configuration files
  • system-related configuration settings (e.g., batch job scheduler settings, preconfigured CICS CSD files, etc.)
  • database schema definitions (e.g., DDL for relational DBs)
  • automated checks/tests at all levels of abstraction
  • documentation (for all audiences)
  • scripts for configuring/provisioning servers
  • JCL for creating application files (VSAM, etc.)
  • JCL for starting mainframe subsystems (e.g., CICS)
  • scripts and/or JCL for application administration (backup/restore, etc.)
  • scripts and/or JCL for running the application
  • anything else related to a version of the application

All these items can be managed using any version control system hosted on any platform, regardless of what sort of target system they may be copied to, or compiled for.

Limited support for continuous integration

In typical “agile”-style software development work, developers depend on short feedback cycles to minimize the need for formality, keep the work moving forward, and help ensure high quality and good alignment with stakeholder needs.

Mainframe-based development tools tend to induce delay into the developers’ feedback cycle. It’s more difficult to identify and manage dependencies, more time-consuming to build the application, and often more labor-intensive to prepare test data than in the “distributed” world of Java, Ruby, Python, and C#. For historical reasons, this isn’t necessarily obvious to mainframe specialists, as they haven’t seen that sort of work flow before.

In traditional mainframe environments, it’s common for developers to keep code checked out for weeks at a time and to attempt a build only when they are nearly ready to hand off the work to a separate QA group for testing. They are also accustomed to “merge hell.” Many mainframe developers simply assume “merge hell” is part of the job; the nature of the beast, if you will. Given that frame of reference, tooling that enables developers to integrate changes and run a build once a day seems almost magically powerful.

Mainframe-based CI/CD tools do enable developers to build at least once per day. But that’s actually too slow to get the full benefit of short feedback cycles. It’s preferable to be able to turn around a single red-green-refactor TDD cycle in five or ten minutes, if not less, with your changes integrated into the code base every time. That level of turnaround is all but unthinkable to many mainframe specialists.

Mainframe-based version control systems weren’t designed with that sort of work flow in mind. They were spawned in an era when version control was used to take a snapshot of a clean production release, in case there was a need to roll back to a known working version of an application in future. These tools weren’t originally designed for incremental, nearly continuous integration of very small code changes. Despite recent improvements that have inched the products closer to that goal, it’s necessary to manage version control off-platform in order to achieve the feedback cycle times and continuous integration contemporary developers want.

Limited support for automated unit testing

Contemporary development methods generally emphasize test automation at multiple levels of abstraction, and frequent small-scale testing throughout development. Some methods call for executable test cases to be written before writing the production code that makes the tests pass.

These approaches to development require tooling that enables very small subsets of the code to be tested (as small as a single path through a single method in a Java class), and for selected subsets of test cases to be executed on demand, as well as automatically as part of the continuous integration flow.

Mainframe-based tooling to support fine-grained automated checks/tests is very limited. The best example is IBM’s zUnit testing framework, supporting Cobol and PL/I development as part of the Rational suite. But even this product can’t support unit test cases at a fine level of granularity. The smallest “unit” of code it supports is an entire load module.

Some tools are beginning to appear that improve on this, such as the open source cobol-unit-test project for Cobol, and t-rexx for test-driving Rexx scripts, but no such tool is very mature at this time. The cobol-unit-test project can support fine-grained unit testing and test-driving of Cobol code off-platform using a compiler like Microfocus or Gnu COBOL, on a developer’s Windows, OSX, or Linux machine or in a shared development environment. No mainframe-based tools can support this.

Dependencies outside the developer’s control

A constant headache in mainframe development is the fact that it’s difficult to execute a program without access to files, databases, and subroutine libraries the developer doesn’t control. Even the simplest, smallest-scale automated test depends on the availability and proper configuration of a test environment, and these are typically managed by a different group than the development teams.

Every developer doesn’t necessarily have their own dedicated test files, databases, CICS regions, or LPARs. In many organizations, developers don’t even have the administrative privileges necessary to start up a CICS region for development or testing, or to modify CICS tables in a development region to support their own needs; a big step backward as compared with the 1980s. Developers have to take turns, sometimes waiting days or weeks to gain access to a needed resource.

Mainframe-based and server-based CD tooling addresses this issue in a hit-or-miss fashion, but none provides robust stubbing and mocking support for languages like Cobol and PL/I.

Some suites of tools include service virtualization products that can mitigate some of the dependencies. Service virtualization products other than those listed above may be used in conjunction, as well (e.g., Parasoft, HP).

The ability to run automated checks for CICS applications at finer granularity than the full application is very limited short of adding test-aware code to the CICS environment. IBM’s Rational Suite probably does the best job of emulating CICS resources off-platform, but at the cost of requiring multiple servers to be configured. These solutions provide only a partial answer to the problem.

Disconnected and remote development is difficult

One factor that slows developers down is the necessity to connect to various external systems. Even with development tools that run on Microsoft Windows, OSX, or Linux, it’s necessary for developers to connect to a live mainframe system to do much of anything.

To address these issues, IBM’s Rational suite enables developers to work on a Windows workstation. This provides a much richer development environment than the traditional mainframe-based development tools. But developers can’t work entirely isolated from the network. They need an RD&T server and, possibly, a Green Hat server to give them VSAM and CICS emulation and service virtualization for integration and functional testing.

Each of these connections is a potential failure point. One or more servers may be unavailable at a given time. Furthermore, the virtual services or emulated facilities may be configured inappropriately for a developer’s needs.

Keep in mind the very short feedback cycles that characterize contemporary development methods. Developers typically spend as much as 90% of their time at the “unit” level; writing and executing unit checks and building or modifying production code incrementally, to make those checks pass. They spend proportionally less time writing and executing checks at the integration, functional, behavioral, and system levels.

Therefore, an environment that enables developers to work without a connection to the mainframe or to mainframe emulation servers can enable them to work in very quick cycles most of the time.

In addition, the level of granularity provided by zUnit isn’t sufficient to support very short cycles such as Ruby, Python, C#, or Java developers can experience with their usual tool stacks.

In practical terms, to get to the same work flow for Cobol means doing most of the unit-level development on an isolated Windows, OSX, or Linux instance with an independent Cobol compiler such as Microfocus or Gnu COBOL, and a unit testing tool that can isolate individual Cobol paragraphs. Anything short of that offers only a partial path toward continuous delivery.

Observations from the field

Version control

Possibly the most basic element in a continuous delivery pipeline is a version control system for source code, configuration files, scripts, documentation, and whatever else goes into the definition of a working application. Many mainframe shops use a mainframe-based version control system such as CA-Endevor or Serena ChangeMan. Many others have no version control system in place.

The idea of separating source repositories from execution target platforms has not penetrated. In principle there is no barrier to keeping source code and configuration files (and similar artifacts) off-platform so that development and unit-level testing can be done without the need to connect to the mainframe or to additional servers. Yet, it seems most mainframe specialists either don’t think of doing this, or don’t see value in doing it.

Automated testing (checking)

Most mainframe shops have little to no automated testing (or checking or validation, as you prefer). Manual methods are prevalent, and often testing is the purview of a separate group from software development. Almost as if they were trying to maximize delay and miscommunication, some shops use offshore testing teams located as many timezones away as the shape of the Earth allows.

So, what’s all this about “levels” of automated testing? Here’s a depiction of the so-called test automation pyramid. You can find many variations of this diagram online, some simpler and some more complicated than this one.

test automation pyramid

This is all pretty normal for applications written in Java, C#, Python, Ruby, C/C++ and other such languages. It’s very unusual to find these different levels of test automation in a mainframe shop. Yet, it’s feasible to support several of these levels without much additional effort:

Mainframe test automation pyramid

Automation is quite feasible and relatively simple for higher-level functional checking and verifying system qualities (a.k.a. “non-functional” requirements). The IBM Rational suite includes service virtualization (and so do other vendors), making it practical to craft properly-isolated automated checks at the functional and integration levels. Even so, relatively few mainframe shops have any test automation in place at any level. Some mainframe specialists are surprised to learn there is such a thing as different “levels” of automated testing; they can conceive only of end-to-end tests with all interfaces live. This is a historical and cultural issue, and not a technical one.

At the “unit” level, the situation is reversed. The spirit is willing but the tooling is lacking. IBM offers zUnit, which can support test automation for individual load modules. To get down to a suitable level of granularity for unit testing and TDD, there are no well-supported, commercial tools. To be clear: A unit test case exercises a single path through a single Cobol paragraph or PL/I block. The “unit” in zUnit is the load module; I would call that a component test rather than a unit test. There are a few Open Source unit testing solutions to support Cobol, but nothing for PL/I. And this is where developers spend 90% of their time. It is an area that would benefit from further tool development.

Test data management

When you see a presentation about continuous delivery at a conference, the speaker will display illustrations of their planned transition to full automation. No one (that I know of) has fully implemented CD in a mainframe environment. The presentations typically show test data management as just one more box among many in a diagram, the same size as all the other boxes. The speaker says they haven’t gotten to that point in their program just yet, but they’ll address test data management sometime in the next few months. They sound happy and confident. This tells me they’re speeding toward a brick wall, and they aren’t aware of it.

Test data management may be the single largest challenge in implementing a CD pipeline for a heterogeneous environment that includes mainframe systems. People often underestimate it. They may visualize something akin to an ActiveRecord migration for a Ruby application. How hard could that be?

Mainframe applications typically use more than one access method. Mainframe access methods are roughly equivalent to filesystems on other platforms. It’s common for a mainframe application to manipulate files using VSAM KSDS, VSAM ESDS, and QSAM access methods, and possibly others. To support automated test data management for these would be approximately as difficult as manipulating NTFS, EXT4, and HFS+ filesystems from a single shell script on a single platform. That’s certainly do-able, but it’s only the beginning of the complexity of mainframe data access.

A mature mainframe application that began life 25 years ago or more will access multiple databases, starting with the one that was new technology at the time the application was originally written, and progressing through the history of database management systems since that time. They are not all SQL-enabled, and those that are SQL-enabled generally use their own dialect of SQL.

In addition, mainframe applications often comprise a combination of home-grown code, third-party software products (including data warehouse products, business rules engines, and ETL products—products that have their own data stores), and externally-hosted third-party services. Development teams (and the test data management scripts they write) may not have direct access to all the data stores that have to be populated to support automated tests. There may be no suitable API for externally-hosted services. The company’s own security department may not allow popular testing services like Sauce Labs to access applications running on internal test environments, and may not allow test data to go outside the perimeter because sensitive information could be gleaned from the structure of the test data, even if it didn’t contain actual production values.

Creating environments on the fly

Virtualization and cloud services are making it more and more practical to spin up virtual machines on demand. People use these services for everything from small teams maintaining Open Source projects to resilient solution architectures supporting large-scale production operations. A current buzzword making the rounds is hyperconvergence, which groups a lot of these ideas and capabilities together.

But there are no cloud services for mainframes. The alternative is to handle on-demand creation of environments in-house. Contemporary models of mainframe hardware are capable of spinning up environments on demand. It’s not the way things are usually done, but that’s a question of culture and history and is not a technical barrier to CD.

IBM’s z/VM can manage multiple operating systems on a single System z machine, including z/OS. With PR/SM (Processor Resource/System Manager) installed, z/OS logical partitions (LPARs) are supported. Typically, mainframe shops define a fixed set of LPARs and allocate development, test, and production workloads across them. The main reason it’s done that way is that creating an LPAR is a multi-step, complicated process. People prefer not to have to do it frequently. (All the more reason to automate it, if you ask me.)
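Parts of that environment-creation work can be scripted today. As a hedged illustration, the sketch below submits a provisioning job (for example, one that defines and starts a new test CICS region) through the z/OSMF jobs REST interface. The endpoint and CSRF header follow IBM’s documented API, but the host, credentials, JCL, and response field names should all be treated as assumptions to verify against your z/OSMF release.

    # Hypothetical sketch: submit an environment-provisioning job through the
    # z/OSMF jobs REST interface. Host, credentials, and JCL are placeholders;
    # verify the endpoint, headers, and response fields for your release.
    import requests

    ZOSMF = "https://zosmf.example.com"

    def submit_provisioning_job(jcl_text, user, password):
        response = requests.put(
            ZOSMF + "/zosmf/restjobs/jobs",
            data=jcl_text,
            headers={
                "Content-Type": "text/plain",
                "X-CSRF-ZOSMF-HEADER": "true",  # CSRF header required by z/OSMF
            },
            auth=(user, password),
        )
        response.raise_for_status()
        job = response.json()  # expected to include the job name and job id
        return job["jobname"], job["jobid"]

    if __name__ == "__main__":
        with open("jcl/start_test_cics.jcl") as f:
            print(submit_provisioning_job(f.read(), "provuser", "secret"))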

A second reason, in some cases, is that the organization hasn't updated its operating procedures since the 1980s. They have a machine that is significantly more powerful than older mainframes, and they continue to operate it as if it were severely underpowered. I suspect this happens because, year after year, people say "the mainframe is dying, we'll replace it by this time next year," so they figure it isn't worth an investment greater than the minimum necessary to keep the lights on.

Yet, the mainframe didn’t die. It evolved.

Production system monitoring

A number of third-party tools (that is, non-IBM tools) can monitor production environments on mainframe systems. Most shops don’t use them, but they are available. A relatively easy step in the direction of CD is to install appropriate system monitoring tools.

Generally, such tools are meant for performance monitoring. They help people tune their mainframe systems. They aren’t really meant to support dynamic reconfiguration of applications on the fly.

Ideally, we want these tools to do more than just notify someone when they detect a problematic condition. The same sort of resiliency that reactive architectures provide would be most welcome for mainframe systems as well. This may be a future development.

A glimpse into the future?

I saw a very interesting demo machine a couple of years ago. An IBMer brought it to a demo of the Rational suite for a client. It was an Apple MacBook Pro with a full-blown instance of z/OS installed. It was a single-user mainframe on a laptop. It was not, and still is not, a generally-available commercial product.

That sort of thing will only become more practical and less costly as technology continues to advance. One can imagine a shop in which each developer has their own personal z/OS system. Maybe they'll be able to run z/OS instances as VMs under VirtualBox or VMware. Imagine the flexibility and smoothness of the early stages in a development workflow! Quite a far cry from two thousand developers having to take turns sharing a single, statically-defined test environment for all in-flight projects.

The pieces of the mainframe CD puzzle are falling into place by ones and twos.

The post The state of mainframe continuous delivery appeared first on LeadingAgile.

Categories: Blogs

Breaking Boxes

lizkeogh.com - Elizabeth Keogh - Mon, 11/07/2016 - 13:38

I love words. I really, really love words. I like poetry, and reading, and writing, and conversations, and songs with words in, and puns and wordplay and anagrams. I like learning words in different languages, and finding out where words came from, and watching them change over time.

I love the effect that words have on our minds and our models of our world. I love that words have connotations, and that changing the language we use can actually change our models and help us behave in different ways.

Language is a strange thing. It turns out that if you don’t learn language before the age of 5, you never really learn language; the constructs for it are set up in our brains at a very early age.

George Lakoff and Mark Johnson propose in their book, “Metaphors we Live By”, that all human language is based on metaphorical constructs. I don’t pretend to understand the book fully, and I believe there’s some contention about whether its premise truly holds, but I still found it a fascinating book, because it’s about words.

There was one bit which really caught my attention. “Events and actions are conceptualized metaphorically as objects, activities as substances, states as containers… activities are viewed as containers for the actions and other activities that make them up.” They give some examples:

I put a lot of energy into washing the windows.

Outside of washing the windows, what else did you do?

This fascinated me. I started seeing substances, and containers, everywhere!

I couldn’t do much testing before the end of the sprint.

As if “testing” was a substance, like cheese… we wanted 200g of testing, but we could only get 100g. And a sprint is a timebox – we even call it a box! I think in software, and with Agile methods, we do this even more.

The ticket was open for three weeks, but I’ve closed it now.

How many stories are in that feature?

It’s outside the scope of this release.

Partly I think this is because we like to decompose problems into smaller problems, because that helps us solve them more easily, and partly because we like to bound our work so that we know when we’re “done”, because it’s satisfying to be able to take responsibility for something concrete (spot the substance metaphor) and know you did a good job. There’s probably other reasons too.

There’s only one problem with dividing things into boxes like this: complexity.

In complex situations, problems can’t be decomposed into small pieces. We can try, for sure, and goodness knows enough projects have been planned that way… but when we actually go to do the work, we always make discoveries, and the end result is always different to what we predicted, whether in functionality or cost and time or critical reception or value and impact… we simply can’t predict everything. The outcomes emerge as the work is done.

I was thinking about this problem of decomposition and the fact that software, being inherently complex, is slightly messy… of Kanban, and our desire to find flow… of Cynthia Kurtz’s Cynefin pyramids… and of my friend and fellow coach, Katherine Kirk, who is helping me to see the world in terms of relationships.

It seemed to me that if a complex domain wasn’t made up of the sum of its parts, it might be dominated by the relationship between those parts instead.  In Cynthia Kurtz’s pyramids, the complex domain is pictured as if the people on the ground get the work done (self-organizing teams, for instance) but have a decoupled hierarchical leader.

I talked to Dave Snowden about this, and he pointed me at one of his newer blog posts on containing constraints and coupling constraints, which makes more sense as the hierarchical leader (if there is one!) isn’t the only constraint on a team’s behaviour. So really, the relationships between people are actually constraints, and possibly attractors… now we’re getting to the limit of my Cynefin knowledge, which is always a fun place to be!

Regardless, thinking about work in terms of boxes tends to make us behave as if it’s boxes, which tends to lead us to treat something complex as if it’s complicated, which is disorder, which usually leads to an uncontrolled dive into chaos if it persists, and that’s not usually a good thing.

So I thought… what if we broke the boxes? What would happen if we changed the metaphor we used to talk about work? What if we focused on people and relationships, instead of on the work itself? What would that look like?

Let’s take that “testing” phrase as an example:

I couldn’t do much testing before the end of the sprint.

In the post I made for the Lean Systems Society, “Value Streams are Made of People”, I talked about how to map a value stream from the users to the dev team, and from the dev team back to the users. I visualize the development team as living in a container. So we can do the same thing with testing. Who’s inside the “testing” box?

Let’s say it’s a tester.

Who’s outside? Who gets value or benefits from the testing? If the tester finds nothing, there was no value to it (which we might not know until afterwards)… so it’s the developer who gets value from the feedback.

So now we have:

I couldn’t give the devs feedback on their work before the end of the sprint.

And of course, that sprint is also a box. Who’s on the inside? Well, it’s the dev team. And who’s on the outside? Why can’t the dev team just ship it to the users? They want to get feedback from the stakeholders first.

So now we have:

I couldn’t give the devs feedback on their work before the stakeholders saw it.

I went through some of the problems on PM Stackexchange. Box language, everywhere. I started making translations.

Should multiple Scrum teams working on the same project have the same start/end dates for their Sprints?

Becomes:

Does it help teams to co-ordinate if they get feedback from their stakeholders, then plan what to do next, at the same time as each other?

Interesting. Rephrasing it forced me to think about the benefits of having the same start/end dates. Huh. Of course, I’m having to make some assumptions in both these translations as to what the real problem was, and with who; there are other possibilities. Wouldn’t it have been great if we could have got the original people experiencing these problems to rephrase them?

If we used this language more frequently, would we end up focusing a little less on the work in our conceptual “box”, and more on what the next people in the stream needed from us so that they could deliver value too?

I ran a workshop on this with a pretty advanced group of Kanban coaches. I suggested it probably played into their explicit process policies. “Wow,” one of them said. “We always talk about our policies in terms of people, but as soon as we write them down… we go back to box language.”

Of course we do. It’s a convenient way to refer to our work (my translations were inevitably longer). We’re often held accountable and responsible for our box. If we get stressed at all we tend to worry more about our individual work than about other people (acting as individuals being the thing we do in chaos) and there’s often a bit of chaos, so that can make us revert to box language even more.

But I do wonder how much less chaos there would be if we commonly used language metaphors of people and relationships over substance and containers.

If, for instance, we made sure the tester had what they needed from us devs, instead of focusing on just our box of work until it’s “done”… would we work together better as a team?

If we realised that the cost might be in the people, but the value’s in the relationships… would we send less work offshore, or at least make sure that we have better relationships with our offshore team members?

If we focused on our relationship with users and stakeholders… would we make sure they have good ways of giving feedback as part of our work? Would we make it easier for them to say “thank you” as a result?

And when there’s a problem, would a focus on improving relationships help us to find new things to try to improve how our work gets “done”, too?


Categories: Blogs

Agile Open Northwest, Portland, USA, February 8-10 2017

Scrum Expert - Mon, 11/07/2016 - 08:00
The Agile Open Northwest conference is a two-day event about Agile practices and techniques that takes place in Portland. Participants will be able to start, discover, and share discussions around Agile and Scrum topics. The Agile Open Northwest conference follows the open space format for conferences. Open space is a simple methodology for self-organizing conference tracks. It relies on participation by people who have a passion for the topics to be discussed. There is no preplanned list of topics, only time slots and a space in the main meeting room where interested participants propose topics and pick time slots. Web site: http://www.agileopennorthwest.org/ Location for the Agile Open Northwest conference: Leftbank Annex, 101 N Weidler, Portland, OR 97227
Categories: Communities

5 Links To Engaging Retrospectives


When a team starts implementing Scrum, they will soon discover both the value and the challenge of retrospectives.

Project Retrospectives: A Handbook for Team Reviews says that “retrospectives offer organizations a formal method for preserving the valuable lessons learned from the successes and failures of every project. These lessons and the changes identified by the community will foster stronger teams and savings on subsequent efforts.”

In other words, retrospectives create a safe place for reflection so that valuable lessons can be appreciated, understood, and applied to the opportunities for growth at hand.

The Retrospective Prime Directive says:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

With these noble principles in mind, no team member should fear the learning, the discoveries, or the occasions for progress that a retrospective brings.

These 5 retrospective techniques may be useful for other teams who are looking for fun ways to reflect and learn and grow.

  1. Success Criteria – A futurospective activity for identifying and framing intentions, target outcomes, and the criteria by which success will be judged.
  2. 360 Degrees Appreciation – A retrospective activity to foster open appreciation feedback within a team. It is especially useful for raising team morale and improving relationships between team members.
  3. Complex Pieces – A great energizer that gets people moving around while fostering a conversation about complex systems and interconnected pieces.
  4. Known Issues – A focused retrospective activity for issues that are already known. It is very useful when the team (1) already knows its issues and wants to talk about solutions, or (2) keeps running out of time to discuss recurring issues that are not the top-voted ones.
  5. Candy Love – A great team-building activity that gets participants talking about their lives beyond work.

 


The post 5 Links To Engaging Retrospectives appeared first on Agile Advice.

Categories: Blogs

When Outsourcing Makes Sense

Leading Answers - Mike Griffiths - Sun, 11/06/2016 - 16:57
Disclaimer: This article is based on my personal experience of software project development work over a 25 year period running a mixture of local projects, outsourced projects and hybrid models. The data is my own and subjective, but supported by... Mike Griffiths
Categories: Blogs
