
The state of mainframe continuous delivery

Leading Agile - Mike Cottmeyer - Mon, 11/07/2016 - 15:00
What’s in this article

Mainframe continuous delivery overview
The literature
Issues with Mainframe-hosted solutions
Observations from the field
A glimpse into the future

Mainframe Continuous Delivery Overview

Continuous delivery is an approach to software delivery that seeks to break down the rigid series of phases through which software normally passes on the journey from a developer’s workstation to a production environment, so that value can be delivered to stakeholders with as little delay as possible. Wikipedia has a nice summary of continuous delivery that includes a sequence diagram showing a simplified continuous delivery process.

Practical continuous delivery for the mainframe environment has long been considered especially challenging. When we need to support applications that cross platforms, from mobile devices to web browsers to mid-tier systems to back-end systems, the challenges become enormous.

Here’s a simplified depiction of a generic continuous delivery process:

Generic continuous delivery process

That picture will be familiar to developers who work on front-end stacks, as it has become relatively straightforward to set up a CD pipeline using (for instance) Github, Travis CI, and Heroku (or similar services).

When the “stack” is extended to the heterogeneous technologies commonly found in mainframe shops, here’s where we are, generally speaking:

Continuous delivery process in a typical mainframe shop

Many mainframe shops have mature tooling in place to support the migration of software from one environment to the next in their pipeline, as suggested by the green circles containing checkmarks.

The yellow “warning” triangles show steps in the CD pipeline where mainframe shops seem to have limited support as of this year. Notice that most of these steps are related to automated testing of one kind or another. On the whole, mainframe shops lack automated tests. Almost all testing is performed manually.

The first step in the diagram—version control—is shown with a yellow triangle. Most mainframe shops use version control for mainframe-resident code only. A separate version control system is used for all “distributed” code. The use of multiple version control systems adds a degree of complexity to the CD pipeline.

In addition, mainframe shops tend to use version control products that were originally designed to take snapshots of clean production releases, to be used for rollback after problematic installs. These products may or may not be well-suited to very short feedback cycles, such as the red-green-refactor cycle of test-driven development.

Mainframe shops are far behind in a few key areas of CD. They typically do not create, provision, and launch test environments and production environments on the fly, as part of an automated CD process. Instead, they create and configure static environments, and then migrate code through those environments. They don’t switch traffic from old to new targets because there is only one set of production targets.

The environments are configured manually, and the configurations are tweaked as needed to support new releases of applications. Test environments are rarely configured identically to production environments, and some shops have too few test environments for all development teams to share, causing still more delay in the delivery of value.

Database schemas are typically managed in the same way as execution environments. They are created and modified manually and tweaked individually. Test databases are often defined differently from production ones, particularly with respect to things like triggers and referential integrity settings.

Test data management for all levels of automated tests is another problematic area. Many shops take snapshots of production data and scrub it for testing. This approach makes it difficult, if not impossible, to guarantee that a given test case will be identical every time it runs. The work of copying and scrubbing data is often handled by a dedicated test data management group or team, leading to cross-team dependencies, bottlenecks, and delays.
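
One way to make automated checks repeatable is to generate test data deterministically rather than scrubbing production snapshots. Here is a minimal sketch in Python; the customer-record layout, field names, and file name are invented purely for illustration:

```python
import csv
import random

def build_customer_fixtures(path, count=100, seed=42):
    """Write the same synthetic customer records on every run.

    Seeding the random generator makes the fixture file identical for every
    test execution, unlike a scrubbed copy of production data.
    """
    rng = random.Random(seed)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["CUST_ID", "NAME", "CREDIT_LIMIT"])
        for i in range(1, count + 1):
            writer.writerow([f"{i:08d}", f"TEST CUSTOMER {i}",
                             rng.choice([500, 1000, 2500, 5000])])

if __name__ == "__main__":
    build_customer_fixtures("customers_fixture.csv")
```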

Finally, most mainframe shops have no automated production system monitoring in place. They deal with production issues reactively, after a human notices something is not working and reports it to a help desk, or after a system crashes or hangs. Should they need to roll back a deployment, the effort becomes an “all hands on deck” emergency that temporarily halts other value-add work in progress.

The literature

In reading published material on the subject of agile development / continuous deployment / DevOps for mainframe environments, I find two general types of information:

  1. Fluffy articles that summarize the concepts and admonish mainframe managers and operations to consider the importance of shortening lead times and tightening feedback loops in the delivery pipeline. None of these describes any working implementation currently in place anywhere.
  2. Articles crafted around specific commercial software products that support some subset of a continuous delivery pipeline for mainframe systems. None of these describes any working implementation currently in place anywhere.

As a starting point for learning about the challenges of continuous delivery in a mainframe environment, these types of articles are fine. There are a few shortcomings when it comes down to brass tacks.

Fluffy introductory articles

The limitations in the first type of article are easy to see. It’s important to understand the general concepts and the platform-specific issues at a high level, but after that you really need something more concrete.

Sometimes these very general articles remind me of the “How To Do It” sketch from Monty Python.

Alan: …here’s Jackie to tell you how to rid the world of all known diseases.

Jackie: Well, first of all become a doctor and discover a marvelous cure for something, and then, when the medical world really starts to take notice of you, you can jolly well tell them what to do and make sure they get everything right so there’ll never be diseases any more.

Alan: Thanks Jackie, that was great. […] Now, how to play the flute. (picking up a flute) Well you blow in one end and move your fingers up and down the outside.

All well and good, except you can’t really take that advice forward. There just isn’t enough information. For instance, it makes a difference which end of the flute you blow in. Furthermore, it’s necessary to move your fingers up and down the outside in a specific way. These facts aren’t clear from the presentation. The details only get more and more technical from there.

Articles promoting commercial products

The second type of article provides information about concrete solutions. Companies have used these commercial solutions to make some progress toward continuous delivery. In some cases, the difference between the status quo ante and the degree of automation they’ve been able to achieve is quite dramatic.

Here are a few representative examples.

You may know the name Microfocus due to their excellent Cobol compiler. Microfocus has picked up Serena, a software company with several useful mainframe products, to bolster their ability to support mainframe customers.

It’s possible to combine some of these products to construct a practical continuous delivery pipeline for the mainframe platform:

  • Serena ChangeMan ZMF with the optional Enterprise Release extension
  • Serena Release Control
  • Serena Deployment Automation Tool
  • Microfocus Visual COBOL

Compuware offers a solution that, like Microfocus’ solution, comprises a combination of different products to fill different gaps in mainframe continuous delivery:

  • Compuware ISPW
  • Compuware Topaz Workbench
  • XebiaLabs XL Release

IBM, the source of all things mainframe, can get you part of the way to a continuous delivery pipeline, as well. The “IBM Continuous Integration Solution for System Z” comprises several IBM products:

  • Rational Team Concert
  • Rational Quality Manager
  • Rational Test Workbench
  • Rational Integration Tester (formerly GreenHat)
  • Rational Development and Test Environment (often called RD&T)
  • IBM UrbanCode Deploy

Any of those offerings will get you more than half the pieces of a continuous delivery pipeline; different pieces in each case, but definitely more than half.

The software companies that focus on the mainframe platform are sincere about providing useful products and services to their customers. Even so, articles about products are sales pitches by definition, and a sales pitch naturally emphasizes the positives and glosses over any inconvenient details.

Issues with mainframe-hosted solutions

There are a few issues with solutions that run entirely, or almost entirely, on the mainframe.

Tight coupling of CD tooling with a single target platform

Ideally, a cross-platform CD pipeline ought to be managed independently of any of the production target platforms, build environments, or test environments. Only those components that absolutely must run directly on a target platform should be present on that platform.

For example, to deploy to a Unix or Linux platform it’s almost always possible to copy files to target directories. It’s rarely necessary to run an installer. Similarly, it’s a generally-accepted good practice to avoid running installers on any production Microsoft Windows instances. When Windows is used on production servers, it’s usually stripped of most of the software that comes bundled with it by default.
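
In those environments, the deployment step can be little more than synchronizing a directory of build artifacts to the target host. A rough sketch, assuming rsync over SSH is available; the host name and paths are placeholders:

```python
import subprocess

def deploy_by_copy(artifact_dir, host, target_dir):
    """Copy a built release to a Unix/Linux target; no installer required."""
    subprocess.run(
        ["rsync", "-az", "--delete", f"{artifact_dir}/", f"{host}:{target_dir}/"],
        check=True,
    )

deploy_by_copy("build/release-1.4.2", "deploy@appserver01", "/opt/myapp/current")
```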

You don’t want to provide a means for the wrong people to install or build code on servers. At a minimum, code is built in a controlled environment and vetted before being promoted to any target production environment. Even better, the code and the environment that hosts it are both created as part of the build process; there’s no target environment waiting for things to be installed on it.

This means the CD tooling—or at least the orchestration piece—runs on its own platform, separate from any of the development, test, staging, production, or other platforms in the environment. It orchestrates other tools that may have to run on specific platforms, but the process-governing software itself doesn’t live on any platform that is also a deployment target.

An advantage is that the build and deploy process, as well as live production resiliency support, can build, configure, and launch any type of environment as a virtual machine without any need for a target instance to be pre-configured with parts of the CD pipeline installed. For mainframe environments, this approach is not as simple but can extend to launching CICS regions and configuring LPARs and zOS-hosted Linux VMs on the fly.

A further advantage of keeping the CD tooling separate from all production systems is that it’s possible to swap out any component or platform in the environment without breaking the CD pipeline. With the commercial solutions available, the CD tooling lives on one of the target deployment platforms (namely, the mainframe). Should the day come to phase out the mainframe, it would be necessary to replace the entire CD pipeline, a core piece of technical infrastructure. The enterprise may wish to keep that flexibility in reserve.

It isn’t always possible to deploy by copying binaries and configuration files to a target system. There may be various reasons for this. In the case of the mainframe, the main reason is that no off-platform compilers and linkers can prepare executable binaries you can just “drop in” and run.

Mainframe compatibility options in products like Microfocus COBOL and Gnu COBOL don’t produce zOS-ready load modules; they provide source-level compatibility, so you can transfer the source code back and forth without any modifications. A build of the mainframe components of an application has to run on-platform, so at some point in the build-and-deploy sequence the source code has to be copied to the mainframe to be compiled.

This means build tools like compilers and linkers must be installed on production mainframes. That isn’t a problem, as mainframe systems are designed to keep build tools separate from production areas. But the fact builds must run on-platform doesn’t mean the CD pipeline orchestration tooling itself has to run on-platform (except, maybe, for an agent that interacts with the orchestrator). For historical and cultural reasons, this concept can be difficult for mainframe specialists to accept.
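
As one illustration of that separation, an off-platform orchestrator can hand the compile-and-link JCL to the mainframe through the z/OS FTP server's JES interface, so nothing beyond the standard FTP service has to live on the target system. This is only a sketch; the host, credentials, and JCL file are placeholders, and it assumes the FTP server is configured to accept job submission:

```python
from ftplib import FTP

def submit_compile_job(host, user, password, jcl_path):
    """Submit compile/link JCL to JES from the off-platform orchestrator."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.sendcmd("SITE FILETYPE=JES")  # switch from dataset transfer to job submission
        with open(jcl_path, "rb") as jcl:
            reply = ftp.storlines("STOR COMPILE.JCL", jcl)
    return reply  # the server's reply typically includes the JES job ID

print(submit_compile_job("mvs.example.com", "cduser", "secret", "compile.jcl"))
```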

Multiple version control systems

When you use a mainframe-based source code manager (Serena ChangeMan, CA-Endevor, etc.) for mainframe-hosted code, and some other version control system (Git, Subversion, etc.) for all the “distributed” source code, you have the problem of dual version control systems. Moving all the “distributed” code to the mainframe just for the purpose of version control surely makes no sense.

When your applications cut through multiple architectural layers, spanning mobile devices, web apps, Windows, Linux/Unix, and zOS, having dual version control systems significantly increases the likelihood of version conflicts and incompatible components being packaged together. Rollbacks of partially-completed deployments can be problematic, as well.

It’s preferable for all source code to be managed in the same version control system, and for that system to be independent of any of the target platforms in the environment. One of the key challenges in this approach is cultural, and not technical. Mainframe specialists are accustomed to having everything centralized on-platform. The idea of keeping source code off-platform may seem rather odd to them.

But there’s no reason why source code has to live on the same platform where executables will ultimately run, and there are plenty of advantages to keeping it separate. Advantages include:

  • Ability to use off-platform development tools that offer much quicker turnaround of builds and unit tests than any on-platform configuration
  • Ability to keep development and test relational databases absolutely synchronized with the production schema by building from the same DDL on the fly (assuming DB2 on all platforms); see the sketch after this list
  • Ability to keep application configuration files absolutely synchronized across all environments, as all environments use the same copy of configuration files checked out from the same version control system
  • other advantages along the same general lines
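
As a sketch of the second point above, a pipeline step can rebuild any environment's schema from the DDL files checked out of version control. The directory layout is an assumption, and the connection can be any Python DB-API connection (for DB2, the ibm_db_dbi driver is one option):

```python
from pathlib import Path

def apply_ddl(connection, ddl_dir="db/ddl"):
    """Apply every versioned DDL script, in name order, to the target database.

    Because test and production schemas are built from the same files in
    version control, they cannot silently drift apart.
    """
    cursor = connection.cursor()
    for script in sorted(Path(ddl_dir).glob("*.sql")):
        # Naive statement splitting; good enough for a sketch.
        for statement in script.read_text().split(";"):
            if statement.strip():
                cursor.execute(statement)
    connection.commit()
```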

If you assume source code management systems are strictly for programming language source code, the above list may strike you as surprising. Actually, any and all types of “source” (in a general sense) ought to be versioned and managed together. This includes, for all target platforms that host components of a cross-platform application:

  • source code
  • application configuration files
  • system-related configuration settings (e.g., batch job scheduler settings, preconfigured CICS CSD files, etc.)
  • database schema definitions (e.g., DDL for relational DBs)
  • automated checks/tests at all levels of abstraction
  • documentation (for all audiences)
  • scripts for configuring/provisioning servers
  • JCL for creating application files (VSAM, etc.)
  • JCL for starting mainframe subsystems (e.g., CICS)
  • scripts and/or JCL for application administration (backup/restore, etc.)
  • scripts and/or JCL for running the application
  • anything else related to a version of the application

All these items can be managed using any version control system hosted on any platform, regardless of what sort of target system they may be copied to, or compiled for.

Limited support for continuous integration

In typical “agile”-style software development work, developers depend on short feedback cycles to keep the work moving forward with a minimum of formality, as well as to help ensure high quality and good alignment with stakeholder needs.

Mainframe-based development tools tend to induce delay into the developers’ feedback cycle. It’s more difficult to identify and manage dependencies, more time-consuming to build the application, and often more labor-intensive to prepare test data than in the “distributed” world of Java, Ruby, Python, and C#. For historical reasons, this isn’t necessarily obvious to mainframe specialists, as they haven’t seen that sort of work flow before.

In traditional mainframe environments, it’s common for developers to keep code checked out for weeks at a time and to attempt a build only when they are nearly ready to hand off the work to a separate QA group for testing. They are also accustomed to “merge hell.” Many mainframe developers simply assume “merge hell” is part of the job; the nature of the beast, if you will. Given that frame of reference, tooling that enables developers to integrate changes and run a build once a day seems almost magically powerful.

Mainframe-based CI/CD tools do enable developers to build at least once per day. But that’s actually too slow to get the full benefit of short feedback cycles. It’s preferable to be able to turn around a single red-green-refactor TDD cycle in five or ten minutes, if not less, with your changes integrated into the code base every time. That level of turnaround is all but unthinkable to many mainframe specialists.

Mainframe-based version control systems weren’t designed with that sort of work flow in mind. They were spawned in an era when version control was used to take a snapshot of a clean production release, in case there was a need to roll back to a known working version of an application in future. These tools weren’t originally designed for incremental, nearly continuous integration of very small code changes. Despite recent improvements that have inched the products closer to that goal, it’s necessary to manage version control off-platform in order to achieve the feedback cycle times and continuous integration contemporary developers want.

Limited support for automated unit testing

Contemporary development methods generally emphasize test automation at multiple levels of abstraction, and frequent small-scale testing throughout development. Some methods call for executable test cases to be written before writing the production code that makes the tests pass.

These approaches to development require tooling that enables very small subsets of the code to be tested (as small as a single path through a single method in a Java class), and for selected subsets of test cases to be executed on demand, as well as automatically as part of the continuous integration flow.

Mainframe-based tooling to support fine-grained automated checks/tests is very limited. The best example is IBM’s zUnit testing framework, supporting Cobol and PL/I development as part of the Rational suite. But even this product can’t support unit test cases at a fine level of granularity. The smallest “unit” of code it supports is an entire load module.

Some tools are beginning to appear that improve on this, such as the open source cobol-unit-test project for Cobol, and t-rexx for test-driving Rexx scripts, but no such tool is very mature at this time. The cobol-unit-test project can support fine-grained unit testing and test-driving of Cobol code off-platform using a compiler like Microfocus or Gnu COBOL, on a developer’s Windows, OSX, or Linux machine or in a shared development environment. No mainframe-based tools can support this.
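
To give a feel for the off-platform cycle, a small driver script can compile a Cobol program with GnuCOBOL and check its output in seconds. This is an ad hoc sketch rather than a real unit testing framework; the program name and expected output are invented:

```python
import subprocess

def check_program(source="CALCINT.cbl", expected="INTEREST: 0012.50"):
    """Compile with GnuCOBOL and verify the program's output off-platform."""
    subprocess.run(["cobc", "-x", "-o", "calcint", source], check=True)  # -x builds an executable
    result = subprocess.run(["./calcint"], capture_output=True, text=True, check=True)
    assert expected in result.stdout, f"expected {expected!r}, got {result.stdout!r}"
    print("green")

if __name__ == "__main__":
    check_program()
```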

Dependencies outside the developer’s control

A constant headache in mainframe development is the fact it’s difficult to execute a program without access to files, databases, and subroutine libraries the developer doesn’t control. Even the simplest, smallest-scale automated test depends on the availability and proper configuration of a test environment, and these are typically managed by a different group than the development teams.

Every developer doesn’t necessarily have their own dedicated test files, databases, CICS regions, or LPARs. In many organizations, developers don’t even have the administrative privileges necessary to start up a CICS region for development or testing, or to modify CICS tables in a development region to support their own needs; a big step backward as compared with the 1980s. Developers have to take turns, sometimes waiting days or weeks to gain access to a needed resource.

Mainframe-based and server-based CD tooling addresses this issue in a hit-or-miss fashion, but none provides robust stubbing and mocking support for languages like Cobol and PL/I.

Some suites of tools include service virtualization products that can mitigate some of the dependencies. Service virtualization products other than those listed above may be used in conjunction, as well (e.g., Parasoft, HP).

The ability to run automated checks for CICS applications at finer granularity than the full application is very limited short of adding test-aware code to the CICS environment. IBM’s Rational Suite probably does the best job of emulating CICS resources off-platform, but at the cost of requiring multiple servers to be configured. These solutions provide only a partial answer to the problem.

Disconnected and remote development is difficult

One factor that slows developers down is the necessity to connect to various external systems. Even with development tools that run on Microsoft Windows, OSX, or Linux, it’s necessary for developers to connect to a live mainframe system to do much of anything.

To address these issues, IBM’s Rational suite enables developers to work on a Windows workstation. This provides a much richer development environment than the traditional mainframe-based development tools. But developers can’t work entirely isolated from the network. They need an RD&T server and, possibly, a Green Hat server to give them VSAM and CICS emulation and service virtualization for integration and functional testing.

Each of these connections is a potential failure point. One or more servers may be unavailable at a given time. Furthermore, the virtual services or emulated facilities may be configured inappropriately for a developer’s needs.

Keep in mind the very short feedback cycles that characterize contemporary development methods. Developers typically spend as much as 90% of their time at the “unit” level; writing and executing unit checks and building or modifying production code incrementally, to make those checks pass. They spend proportionally less time writing and executing checks at the integration, functional, behavioral, and system levels.

Therefore, an environment that enables developers to work without a connection to the mainframe or to mainframe emulation servers can enable them to work in very quick cycles most of the time.

In addition, the level of granularity provided by zUnit isn’t sufficient to support very short cycles such as Ruby, Python, C#, or Java developers can experience with their usual tool stacks.

In practical terms, to get to the same work flow for Cobol means doing most of the unit-level development on an isolated Windows, OSX, or Linux instance with an independent Cobol compiler such as Microfocus or Gnu COBOL, and a unit testing tool that can isolate individual Cobol paragraphs. Anything short of that offers only a partial path toward continuous delivery.

Observations from the field

Version control

Possibly the most basic element in a continuous delivery pipeline is a version control system for source code, configuration files, scripts, documentation, and whatever else goes into the definition of a working application. Many mainframe shops use a mainframe-based version control system such as CA-Endevor or Serena ChangeMan. Many others have no version control system in place.

The idea of separating source repositories from execution target platforms has not penetrated. In principle there is no barrier to keeping source code and configuration files (and similar artifacts) off-platform so that development and unit-level testing can be done without the need to connect to the mainframe or to additional servers. Yet, it seems most mainframe specialists either don’t think of doing this, or don’t see value in doing it.

Automated testing (checking)

Most mainframe shops have little to no automated testing (or checking or validation, as you prefer). Manual methods are prevalent, and often testing is the purview of a separate group from software development. Almost as if they were trying to maximize delay and miscommunication, some shops use offshore testing teams located as many timezones away as the shape of the Earth allows.

So, what’s all this about “levels” of automated testing? Here’s a depiction of the so-called test automation pyramid. You can find many variations of this diagram online, some simpler and some more complicated than this one.

test automation pyramid

This is all pretty normal for applications written in Java, C#, Python, Ruby, C/C++ and other such languages. It’s very unusual to find these different levels of test automation in a mainframe shop. Yet, it’s feasible to support several of these levels without much additional effort:

Mainframe test automation pyramid

Automation is quite feasible and relatively simple for higher-level functional checking and verifying system qualities (a.k.a. “non-functional” requirements). The IBM Rational suite includes service virtualization (and so do other vendors), making it practical to craft properly-isolated automated checks at the functional and integration levels. Even so, relatively few mainframe shops have any test automation in place at any level. Some mainframe specialists are surprised to learn there is such a thing as different “levels” of automated testing; they can conceive only of end-to-end tests with all interfaces live. This is a historical and cultural issue, and not a technical one.

At the “unit” level, the situation is reversed. The spirit is willing but the tooling is lacking. IBM offers zUnit, which can support test automation for individual load modules, but there are no well-supported commercial tools that get down to a suitable level of granularity for unit testing and TDD. To be clear: a unit test case exercises a single path through a single Cobol paragraph or PL/I block. The “unit” in zUnit is the load module; I would call that a component test rather than a unit test. There are a few Open Source unit testing solutions to support Cobol, but nothing for PL/I. And this is where developers spend 90% of their time. It is an area that would benefit from further tool development.

Test data management

When you see a presentation about continuous delivery at a conference, the speaker will display illustrations of their planned transition to full automation. No one (that I know of) has fully implemented CD in a mainframe environment. The presentations typically show test data management as just one more box among many in a diagram, the same size as all the other boxes. The speaker says they haven’t gotten to that point in their program just yet, but they’ll address test data management sometime in the next few months. They sound happy and confident. This tells me they’re speeding toward a brick wall, and they aren’t aware of it.

Test data management may be the single largest challenge in implementing a CD pipeline for a heterogeneous environment that includes mainframe systems. People often underestimate it. They may visualize something akin to an ActiveRecord migration for a Ruby application. How hard could that be?

Mainframe applications typically use more than one access method. Mainframe access methods are roughly equivalent to filesystems on other platforms. It’s common for a mainframe application to manipulate files using VSAM KSDS, VSAM ESDS, and QSAM access methods, and possibly others. To support automated test data management for these would be approximately as difficult as manipulating NTFS, EXT4, and HFS+ filesystems from a single shell script on a single platform. That’s certainly do-able, but it’s only the beginning of the complexity of mainframe data access.
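
To make the file side of this concrete, here is a sketch that builds deterministic fixed-length records for a made-up copybook layout and encodes them as EBCDIC so they could be uploaded to a QSAM or VSAM test dataset; the record layout and file name are assumptions:

```python
def make_account_record(acct_no, name, balance_cents):
    """Build one 80-byte record: PIC X(10) account, PIC X(30) name, PIC 9(9) balance, filler."""
    text = f"{acct_no:<10.10}{name:<30.30}{balance_cents:09d}".ljust(80)
    return text.encode("cp037")  # IBM EBCDIC code page 037

with open("ACCOUNTS.TESTDATA", "wb") as out:
    for i in range(1, 11):
        out.write(make_account_record(f"ACCT{i:05d}", f"TEST ACCOUNT {i}", 100000 + i))
```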

A mature mainframe application that began life 25 years ago or more will access multiple databases, starting with the one that was new technology at the time the application was originally written, and progressing through the history of database management systems since that time. They are not all SQL-enabled, and those that are SQL-enabled generally use their own dialect of SQL.

In addition, mainframe applications often comprise a combination of home-grown code, third-party software products (including data warehouse products, business rules engines, and ETL products—products that have their own data stores), and externally-hosted third-party services. Development teams (and the test data management scripts they write) may not have direct access to all the data stores that have to be populated to support automated tests. There may be no suitable API for externally-hosted services. The company’s own security department may not allow popular testing services like Sauce Labs to access applications running on internal test environments, and may not allow test data to go outside the perimeter because sensitive information could be gleaned from the structure of the test data, even if it didn’t contain actual production values.

Creating environments on the fly

Virtualization and cloud services are making it more and more practical to spin up virtual machines on demand. People use these services for everything from small teams maintaining Open Source projects to resilient solution architectures supporting large-scale production operations. A current buzzword making the rounds is hyperconvergence, which groups a lot of these ideas and capabilities together.

But there are no cloud services for mainframes. The alternative is to handle on-demand creation of environments in-house. Contemporary models of mainframe hardware are capable of spinning up environments on demand. It’s not the way things are usually done, but that’s a question of culture and history and is not a technical barrier to CD.

IBM’s z/VM can manage multiple operating systems on a single System z machine, including z/OS. With PR/SM (Processor Resource/System Manager) installed, z/OS logical partitions (LPARs) are supported. Typically, mainframe shops define a fixed set of LPARs and allocate development, test, and production workloads across them. The main reason it’s done that way is that creating an LPAR is a multi-step, complicated process. People prefer not to have to do it frequently. (All the more reason to automate it, if you ask me.)

A second reason, in some cases, is that the organization hasn’t updated its operating procedures since the 1980s. They have a machine that is significantly more powerful than older mainframes, and they continue to operate it as if it were severely underpowered. I might observe this happens because year after year people say “the mainframe is dying, we’ll replace it by this time next year,” so they figure it isn’t worth an investment greater than the minimum necessary to keep the lights on.

Yet, the mainframe didn’t die. It evolved.

Production system monitoring

A number of third-party tools (that is, non-IBM tools) can monitor production environments on mainframe systems. Most shops don’t use them, but they are available. A relatively easy step in the direction of CD is to install appropriate system monitoring tools.

Generally, such tools are meant for performance monitoring. They help people tune their mainframe systems. They aren’t really meant to support dynamic reconfiguration of applications on the fly.

Ideally, we want these tools to be able to do more than just notify someone when they detect a problematic condition. The same sort of resiliency as reactive architectures provide would be most welcome for mainframe systems, as well. This may be a future development.
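
A modest first step does not require a reactive architecture: a scheduled probe of a key transaction that raises an alert, or kicks off a prepared rollback job, before a user calls the help desk. A generic sketch; the URL and the alert hook are placeholders:

```python
import time
import urllib.request

HEALTH_URL = "http://gateway.example.com/inquiry/healthcheck"

def alert(message):
    """Placeholder: page the on-call person or trigger an automated rollback job."""
    print(f"ALERT: {message}")

def monitor(interval_seconds=60):
    """Poll a key transaction and report problems before users notice them."""
    while True:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
                if resp.status != 200:
                    alert(f"health check returned {resp.status}")
        except Exception as exc:  # connection refused, timeout, HTTP error, etc.
            alert(f"health check failed: {exc}")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    monitor()
```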

A glimpse into the future?

I saw a very interesting demo machine a couple of years ago. An IBMer brought it to a demo of the Rational suite for a client. It was an Apple MacBook Pro with a full-blown instance of zOS installed. It was a single-user mainframe on a laptop. It was not, and still is not, a generally-available commercial product.

That sort of thing will only become more practical and less costly as technology continues to advance. One can imagine a shop in which each developer has their own personal zOS system. Maybe they’ll be able to run zOS instances as VMs under VirtualBox or VMware. Imagine the flexibility and smoothness of the early stages in a development work flow! Quite a far cry from two thousand developers having to take turns sharing a single, statically-defined test environment for all in-flight projects.

The pieces of the mainframe CD puzzle are falling into place by ones and twos.

The post The state of mainframe continuous delivery appeared first on LeadingAgile.

Categories: Blogs

Breaking Boxes

lizkeogh.com - Elizabeth Keogh - Mon, 11/07/2016 - 13:38

I love words. I really, really love words. I like poetry, and reading, and writing, and conversations, and songs with words in, and puns and wordplay and anagrams. I like learning words in different languages, and finding out where words came from, and watching them change over time.

I love the effect that words have on our minds and our models of our world. I love that words have connotations, and that changing the language we use can actually change our models and help us behave in different ways.

Language is a strange thing. It turns out that if you don’t learn language before the age of 5, you never really learn language; the constructs for it are set up in our brains at a very early age.

George Lakoff and Mark Johnson propose in their book, “Metaphors we Live By”, that all human language is based on metaphorical constructs. I don’t pretend to understand the book fully, and I believe there’s some contention about whether its premise truly holds, but I still found it a fascinating book, because it’s about words.

There was one bit which really caught my attention. “Events and actions are conceptualized metaphorically as objects, activities as substances, states as containers… activities are viewed as containers for the actions and other activities that make them up.” They give some examples:

I put a lot of energy into washing the windows.

Outside of washing the windows, what else did you do?

This fascinated me. I started seeing substances, and containers, everywhere!

I couldn’t do much testing before the end of the sprint.

As if “testing” was a substance, like cheese… we wanted 200g of testing, but we could only get 100g. And a sprint is a timebox – we even call it a box! I think in software, and with Agile methods, we do this even more.

The ticket was open for three weeks, but I’ve closed it now.

How many stories are in that feature?

It’s outside the scope of this release.

Partly I think this is because we like to decompose problems into smaller problems, because that helps us solve them more easily, and partly because we like to bound our work so that we know when we’re “done”, because it’s satisfying to be able to take responsibility for something concrete (spot the substance metaphor) and know you did a good job. There’s probably other reasons too.

There’s only one problem with dividing things into boxes like this: complexity.

In complex situations, problems can’t be decomposed into small pieces. We can try, for sure, and goodness knows enough projects have been planned that way… but when we actually go to do the work, we always make discoveries, and the end result is always different to what we predicted, whether in functionality or cost and time or critical reception or value and impact… we simply can’t predict everything. The outcomes emerge as the work is done.

I was thinking about this problem of decomposition and the fact that software, being inherently complex, is slightly messy… of Kanban, and our desire to find flow… of Cynthia Kurtz’s Cynefin pyramids… and of my friend and fellow coach, Katherine Kirk, who is helping me to see the world in terms of relationships.

It seemed to me that if a complex domain wasn’t made up of the sum of its parts, it might be dominated by the relationship between those parts instead.  In Cynthia Kurtz’s pyramids, the complex domain is pictured as if the people on the ground get the work done (self-organizing teams, for instance) but have a decoupled hierarchical leader.

I talked to Dave Snowden about this, and he pointed me at one of his newer blog posts on containing constraints and coupling constraints, which makes more sense as the hierarchical leader (if there is one!) isn’t the only constraint on a team’s behaviour. So really, the relationships between people are actually constraints, and possibly attractors… now we’re getting to the limit of my Cynefin knowledge, which is always a fun place to be!

Regardless, thinking about work in terms of boxes tends to make us behave as if it’s boxes, which tends to lead us to treat something complex as if it’s complicated, which is disorder, which usually leads to an uncontrolled dive into chaos if it persists, and that’s not usually a good thing.

So I thought… what if we broke the boxes? What would happen if we changed the metaphor we used to talk about work? What if we focused on people and relationships, instead of on the work itself? What would that look like?

Let’s take that “testing” phrase as an example:

I couldn’t do much testing before the end of the sprint.

In the post I made for the Lean Systems Society, “Value Streams are Made of People”, I talked about how to map a value stream from the users to the dev team, and from the dev team back to the users. I visualize the development team as living in a container. So we can do the same thing with testing. Who’s inside the “testing” box?

Let’s say it’s a tester.

Who’s outside? Who gets value or benefits from the testing? If the tester finds nothing, there was no value to it (which we might not know until afterwards)… so it’s the developer who gets value from the feedback.

So now we have:

I couldn’t give the devs feedback on their work before the end of the sprint.

And of course, that sprint is also a box. Who’s on the inside? Well, it’s the dev team. And who’s on the outside? Why can’t the dev team just ship it to the users? They want to get feedback from the stakeholders first.

So now we have:

I couldn’t give the devs feedback on their work before the stakeholders saw it.

I went through some of the problems on PM Stackexchange. Box language, everywhere. I started making translations.

Should multiple Scrum teams working on the same project have the same start/end dates for their Sprints?

Becomes:

Does it help teams to co-ordinate if they get feedback from their stakeholders, then plan what to do next, at the same time as each other?

Interesting. Rephrasing it forced me to think about the benefits of having the same start/end dates. Huh. Of course, I’m having to make some assumptions in both these translations as to what the real problem was, and with who; there are other possibilities. Wouldn’t it have been great if we could have got the original people experiencing these problems to rephrase them?

If we used this language more frequently, would we end up focusing a little less on the work in our conceptual “box”, and more on what the next people in the stream needed from us so that they could deliver value too?

I ran a workshop on this with a pretty advanced group of Kanban coaches. I suggested it probably played into their explicit process policies. “Wow,” one of them said. “We always talk about our policies in terms of people, but as soon as we write them down… we go back to box language.”

Of course we do. It’s a convenient way to refer to our work (my translations were inevitably longer). We’re often held accountable and responsible for our box. If we get stressed at all we tend to worry more about our individual work than about other people (acting as individuals being the thing we do in chaos) and there’s often a bit of chaos, so that can make us revert to box language even more.

But I do wonder how much less chaos there would be if we commonly used language metaphors of people and relationships over substance and containers.

If, for instance, we made sure the tester had what they needed from us devs, instead of focusing on just our box of work until it’s “done”… would we work together better as a team?

If we realised that the cost might be in the people, but the value’s in the relationships… would we send less work offshore, or at least make sure that we have better relationships with our offshore team members?

If we focused on our relationship with users and stakeholders… would we make sure they have good ways of giving feedback as part of our work? Would we make it easier for them to say “thank you” as a result?

And when there’s a problem, would a focus on improving relationships help us to find new things to try to improve how our work gets “done”, too?


Categories: Blogs

5 Links To Engaging Retrospectives

Learn more about transforming people, process and culture with the Real Agility Program

When a team starts implementing Scrum they will soon discover the value and the challenge in retrospectives.

Project Retrospectives: A Handbook for Team Reviews says that “retrospectives offer organizations a formal method for preserving the valuable lessons learned from the successes and failures of every project. These lessons and the changes identified by the community will foster stronger teams and savings on subsequent efforts.”

In other words, retrospectives create a safe place for reflections so that the valuable lessons can be appreciated, understood and applied to new opportunities for growth at hand.

The Retrospective Prime Directive says:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

With these noble principles in mind, there should be no fear from any team member about the learning, discoveries and occasions for progress.

These 5 retrospective techniques may be useful for other teams who are looking for fun ways to reflect and learn and grow.

  1. Success Criteria – The Success Criteria activity is a futurospective activity that helps the team identify, frame, and clarify intentions, target outcomes, and success criteria.
  2. 360 degrees appreciation – The 360 degrees appreciation is a retrospective activity to foster open appreciation feedback within a team. It is especially useful for increasing team morale and improving relationships between people.
  3.  Complex Pieces – Complex pieces is a great energizer to get people moving around while fostering a conversation about complex systems and interconnected pieces.
  4. Known Issues – The Known Issues activity is a focused retrospective activity for issues that are already known. It is very useful for situations where the team (1) already knows its issues and wants to talk about solutions, or (2) keeps running out of time to talk about recurring issues that are not the top-voted ones.
  5. Candy Love – Candy Love is a great team-building activity that gets the participants talking about their lives beyond work activities.

 

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post 5 Links To Engaging Retrospectives appeared first on Agile Advice.

Categories: Blogs

When Outsourcing Makes Sense

Leading Answers - Mike Griffiths - Sun, 11/06/2016 - 16:57
Disclaimer: This article is based on my personal experience of software project development work over a 25 year period running a mixture of local projects, outsourced projects and hybrid models. The data is my own and subjective, but supported by... Mike Griffiths
Categories: Blogs

What is Customer eXperience (CX)?

Manage Well - Tathagat Varma - Sun, 11/06/2016 - 05:53
(This originally appeared as an interview/blog at http://www.zykrr.com/blog/2016/09/11/Tathagat-interview.html) How will you define CX to a layman? CX to me is that unequivocally superior experience for the specific purpose of a given product or a service every single time – even when sometimes I have to pay more for it. Let’s take an example of something that […]
Categories: Blogs

Retrospectives: Sometimes dreaded, sometimes loved but always essential

Learn more about transforming people, process and culture with the Real Agility Program

Among all the components of Scrum, retrospectives are one of my favourites. A properly planned, efficiently executed, and regularly run retrospective can be like the glue that holds a team together.

My first experience in running a retrospective had surprising results. We were working in a team of five but only two were present in the retrospective. Not only that, but of these two, neither could decide who should be running the retrospective. To be clear, this was not a Scrum team. But it is a team that is using some Agile methods to deliver a product once a week. Retrospectives are one of the methods. So without a clear ScrumMaster to facilitate the retrospective it was, let’s say, a little messy.

Despite all this, there were some positive results. The team had released a product every three weeks with success. The retrospective on the third week revealed challenges & progress, obstacles and opportunities.

The method used was the format of a Talking Stick Circle, where one person holds the floor and shares their reflections while others listen without interrupting and then the next person speaks and so on.

The major learning was that there were decisions to be made about who was doing which task at what time and in the end the direction was clear. Enthusiasm was high and the path forward was laid. The retrospective was a success.

The most remarkable part of the experience was hearing what was meaningful for others. When both people could share what they valued, hoped for and aspired to with the project it was easy to see what could be done next, using the skills, capacities and talents of team members.

For more resources on agile retrospectives, check out this link.

 

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post Retrospectives: Sometimes dreaded, sometimes loved but always essential appeared first on Agile Advice.

Categories: Blogs

New Agile Planning Game For Parents & Children

Learn more about transforming people, process and culture with the Real Agility Program

 

Challenge: As children age, they want to have more say in how they spend their time. Sometimes they don’t know how to express what is important to them or they can’t prioritize their time.

Solution: Parents can easily include their children in decision-making by using an Agile playing card method.  Here’s how it works.

TO PLAY THE GAME – 

You will need:

A pack of playing cards

A stack of post-it notes

Enough pens for everyone playing

Steps to play:

SET UP

  1. Distribute post its & pens to each player
  2. Set a timer for 2 minutes
  3. Each person writes things they want to do for the pre-determined time (on a weekend, throughout a week, or in an evening)
  4. There is no limit to how much you can write.

PLAY

  1. Decide who goes first.
  2. Take turns placing one post it on the table.
  3. Each person decides the value of that activity (0-8).
  4. When everyone has decided, play the cards.
  5. Notice what everyone chose.
  6. If someone played a 1 or 0, it’s nice to listen to why they rated it so low.
  7. Place the sum of the two numbers on the corner of the post it.
  8. Continue this, going back and forth, putting the activities in a sequence with the higher numbers at the top.

DISCUSS

  1. When all the post it activities have been gone through, look at the top 5 or 6 items. These will likely be the ones you will have time for.
  2. Discuss whether there are any overlaps and shift the list accordingly.
  3. Discuss what makes the most sense and what both would like to do. The chances are good that all of these items are ones that both/all people rated high; that’s what put them at the top of the list.
  4. It should be relatively easy to find a way to do the top 3-5 activities with little effort.

PLAN for the day and ENJOY!!!

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post New Agile Planning Game For Parents & Children appeared first on Agile Advice.

Categories: Blogs

The second impediment, or why should you care about engineering practices?

Scrum Breakfast - Fri, 11/04/2016 - 13:12
Sometimes I think Switzerland is a land of product owners. Thanks to our strong economy, there is much more development to do than we have capacity for. So we off-shore and near-shore quite a bit. And technical topics seem to produce a yawn among managers and product owners. "Not really my problem," they seem to be saying. I'd like you to challenge that assumption!

I don't often change the main messages of my Scrum courses. For years, I have been talking about "Inspect and Adapt", "Multitasking is Evil" and "Spillover is Evil." Recently I have added a new message:

Bugs are Evil.

Why this change? While researching for my Scrum Gathering Workshop on Code Dojos, I found a paper by Alistair Cockburn from XP Sardinia. He wrote that in 1'000 hours (i.e. in one month), a team can write 50'000 lines of code. And they will send 3'500 bugs to quality assurance.

Doing the math based on industry standard assumptions, I found that that team will create 12'000 hours of effort for themselves, quality assurance, operations, and customer support. 1 year of waste produced for no good reason!
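
The exact assumptions aren't spelled out above, so treat the following as an illustration only: at a blended cost of roughly 3.4 hours of developer, QA, operations, and support effort per defect, the arithmetic lands near the figure quoted above.

```python
lines_of_code = 50_000   # written in roughly 1'000 team-hours, per Cockburn's figures
bugs_to_qa = 3_500
hours_per_bug = 3.4      # assumed blended effort across dev, QA, ops, and support

print(bugs_to_qa * hours_per_bug)  # 11900.0 hours, i.e. roughly the 12'000 hours cited
```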

Is this really true? Well, at my last Scrum Master course, I met someone whose teams spend roughly 2/3rds of their time fixing bugs, much of it in emergency mode. Technical debt is a real danger! Imagine if you were paying 2/3rds of your income just to cover the rent and plug holes in the ceiling of your apartment! That product is on the verge of bankruptcy!

Technical topics often generate a yawn among product owners and managers. But it's your money and your team's capacity which is being wasted!

So I'd like to encourage you to pay attention to engineering practices. Bugs are evil! Remember that and make sure everyone in your organization knows it too. As a leader, you are best positioned to ask the question, "How can we have fewer bugs?"

P.S. This is the topic for Monday's Scrum Breakfast Club, "Why should you care about engineering practices?" Check out the event, and come to my Manager and Product Owner friendly introduction to Pair Programming and Test Driven Development.
Categories: Blogs

Video: Agile Social Action at a Neighbourhood Level

Learn more about transforming people, process and culture with the Real Agility Program

This post marks the beginning of a new Agile experiment supported by BERTEIG.

Quite simply, the idea is to apply Agile methods and principles outlined in the Agile Manifesto to a social action project at a neighbourhood level.

The objective is to use the empowering principles of Agile to help eliminate the extremes between wealth and poverty.

The approach is to pair up one family who has items to share with another family who is in need in order to provide a weekly care package including food and other basic care supplies.

The sharing takes place in the real world with the delivery of a weekly package, but it also corresponds to an online platform that allows the sharing to happen at a sustainable cadence.

The initiative was formally launched three weeks ago and this video is the first which addresses some basic structures of the framework. This video is a bit like a one-person retrospective.

One of the principles of BERTEIG is to strive to create unity and to be of service to humanity. This socio-economic Agile experiment is a way in which BERTEIG is reaching out to help others and contributing towards the advancement of a small neighbourhood moving along the continuum from poverty to prosperity, materially and spiritually.

 

 

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post Video: Agile Social Action at a Neighbourhood Level appeared first on Agile Advice.

Categories: Blogs

Agile Product Planning and Analysis

TV Agile - Thu, 11/03/2016 - 17:48
This talk presents a method for Agile product planning and analysis with application examples. Discover to Deliver is a method that was recently published by Ellen Gottesdiener and Mary Gorman, recognized experts in Agile requirements management and collaboration. Discover to Deliver aims to help software teams discover valuable features and deliver them faster. Video producer: http://www.agile.lt/
Categories: Blogs

Efficiency Rants and Raves: Twitter Chat Thursday

Johanna Rothman - Thu, 11/03/2016 - 00:07

I’m doing a Twitter chat November 3 at 4pm Eastern/8pm UK with David Daly. David posted the video of our conversation as prep for the Twitter chat.

Today he tweeted this: “How do you optimize for features? That’s flow efficiency.” Yes, I said that.

There were several Twitter rants about the use of the word “efficiency.” Okay. I can understand that. I don’t try to be efficient as much as I try to be effective.

However, I’ve discussed the ideas of resource efficiency and flow efficiency in several places:

And more. Take a look at the flow efficiency tag on my site.

Here’s the problem with the word, “efficiency.” It’s already in the management lexicon. We can’t stop people from using it. However, we can help them differentiate between resource efficiency (where you optimize for a person), and flow efficiency (where you optimize for features). One of the folks discussing this in the hashtag said he optimized for learning, not speed of features. That’s fine.

Flow efficiency optimizes for moving work through the team. If the work you want is learning, terrific. If the work you want is a finished feature, no problem. Both these require the flow through the team—flow efficiency—and not optimization for a given person.
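
As a made-up illustration of the difference, flow efficiency is commonly expressed as the share of total lead time during which a work item is actively being worked on:

```python
def flow_efficiency(touch_time_days, lead_time_days):
    """Share of elapsed lead time spent actively working on the item."""
    return touch_time_days / lead_time_days

# A feature that needed 3 days of actual work but took 15 calendar days to deliver:
print(f"{flow_efficiency(3, 15):.0%}")  # 20% -- the other 80% was spent waiting
```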

I’ve mentioned this book before, but I’ll suggest it again. Please take a look at this book: This is Lean: Resolving the Efficiency Paradox.

If I want to change management, I need to speak their language. Right now, “efficiency” is part of their language. I want to move that discussion to helping them realize there is a difference between resource efficiency and flow efficiency.

I hope you decide to join us on the chat (which is about hiring for DevOps). I will be typing as fast as my fingers will go

Categories: Blogs

The Simple Leader: Plan, Do, Study, Adjust

Evolving Excellence - Wed, 11/02/2016 - 10:14

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen


Excellent firms don’t believe in excellence—
only in constant improvement and constant change.
– Tom Peters

The PDSA (Plan-Do-Study-Act) cycle is the core component of continuous improvement programs. You may have heard it called PDCA (Plan-Do-Check-Act)—and they are very similar—but I have come to prefer PDSA, with the A standing for Adjust, for reasons I’ll explain shortly. Understanding the cycle and its application to continuous improvement is critical for leadership. But first, a history lesson.

In November 2010, Ronald Moen and Clifford Norman wrote a well-researched article in Quality Progress that detailed the history behind PDCA and PDSA. The cycles have their origins in 1939, when Walter Shewhart created the SPI (Specification-Production-Inspection) cycle. The SPI cycle was geared toward mass production operations, but Shewhart soon realized the potential application of the scientific method to problem solving, writing that “it may be helpful to think of the three steps in the mass production process as steps in the scientific method. In this sense, specification, production and inspection correspond respectively to hypothesizing, carrying out an experiment and testing the hypothesis. The three steps constitute a dynamic scientific process of acquiring knowledge.”

At the time, W. Edwards Deming was working with Shewhart to edit a series of Shewhart’s lectures into what would become Shewhart’s Statistical Method from the Viewpoint of Quality Control, published in 1939. Deming eventually modified the cycle and presented his DPSR (Design-Production-Sales-Research) cycle in 1950, which is now referred to as the Deming cycle or Deming wheel. According to Masaaki Imai, Toyota then modified the Deming wheel into the PDCA (Plan-Do-Check-Act) cycle and began applying it to problem solving.

In 1986, Deming again revised the Shewhart cycle, with another modification added in 1993 to make it the PDSA (Plan-Do-Study-Act) cycle, or what Deming called the Shewhart cycle for learning and improvement. (Deming never did like the PDCA cycle. In 1990, he wrote Ronald Moen, saying “be sure to call it PDSA, not the corruption PDCA.” A year later he wrote, “I don’t know the source of the cycle that you propose. How the PDCA ever came into existence I know not.”)

The PDCA cycle has not really evolved in the past 40 years and is still used today at Toyota. The PDSA cycle continues to evolve, primarily in the questions asked at each stage. Although both embody the scientific method, I personally prefer the PDSA cycle, because “study” is more intuitive than “check.” Deming himself had a problem with the term “check,” as he believed it could be misconstrued as “hold back.” I also prefer “Adjust” to “Act,” as it conveys a better sense of ongoing, incremental improvement. Just be aware that some very knowledgeable and experienced people prefer the pure PDCA!

Let’s take a look at each component of PDSA:

  • Plan: Ask objective questions about the process and create a plan to carry out the experiment: who, what, when, where, and a prediction.
  • Do: Execute the plan, make observations, and document problems and unexpected issues.
  • Study: Analyze the data, compare it to expectations, and summarize what was learned.
  • Adjust: Adopt and standardize the new method if successful; otherwise, identify changes to be made in preparation for starting the whole cycle over again.

It’s important to realize that the PDSA cycle is valuable at both process and organizational levels, something we have already discussed (in slightly different terms) in this book. For example, you start the plan stage of the PDSA cycle while evaluating your current state and creating a hoshin plan. As you execute the annual and breakthrough objectives of the hoshin plan, you move into the “do” quadrant. On a regular basis, you evaluate the hoshin plan and the results of the goals (study), then modify it as necessary for the next revision of the hoshin plan (adjust).

Throughout the rest of this section, I will discuss various problem-solving and improvement tools and methods for process-scale improvements. Note that they all follow the same PDSA cycle.

Categories: Blogs

Neo4j: Find the intermediate point between two lat/longs

Mark Needham - Wed, 11/02/2016 - 00:10

Yesterday I wrote a blog post showing how to find the midpoint between two lat/longs using Cypher, which worked well as a first attempt at filling in missing locations, but I realised I could do better.

As I mentioned in the last post, when I find a stop that’s missing lat/long coordinates I can usually find two nearby stops that allow me to triangulate this stop’s location.

I also have train routes that indicate the number of seconds it takes to go from one stop to another, which lets me work out whether the location-less stop is closer to one stop than the other.

For example, consider stops a, b, and c where b doesn’t have a location. If we have these distances between the stops:

(a)-[:NEXT {time: 60}]->(b)-[:NEXT {time: 240}]->(c)

it tells us that point ‘b’ is actually 0.2 of the distance from ‘a’ to ‘c’ (60 / (60 + 240) = 0.2) rather than being the midpoint.
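
If we wanted to derive that fraction from the graph rather than working it out by hand, a minimal Cypher sketch might look like this (it assumes the stops and :NEXT relationships shown above, plus a hypothetical name property to pick out stop ‘b’):

// fraction of the a-to-c journey time at which b sits
// assumes stops carry a (hypothetical) name property for lookup
MATCH (a)-[r1:NEXT]->(b {name: 'b'})-[r2:NEXT]->(c)
RETURN toFloat(r1.time) / (r1.time + r2.time) AS f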

There’s a formula we can use to work out that point:

a = sin((1−f)⋅δ) / sin δ
b = sin(f⋅δ) / sin δ
x = a ⋅ cos φ1 ⋅ cos λ1 + b ⋅ cos φ2 ⋅ cos λ2
y = a ⋅ cos φ1 ⋅ sin λ1 + b ⋅ cos φ2 ⋅ sin λ2
z = a ⋅ sin φ1 + b ⋅ sin φ2
φi = atan2(z, √(x² + y²))
λi = atan2(y, x)

where:
f is the fraction along the path between the two points (0.2 in our case)
δ is the angular distance d/R between the two points
φ = latitude
λ = longitude

Translated to Cypher (with mandatory Greek symbols), it reads like this to find the point 0.2 of the way from one point to another:

WITH {latitude: 51.4931963543, longitude: -0.0475185810} AS p1, 
     {latitude: 51.47908, longitude: -0.05393950 } AS p2
 
WITH p1, p2, distance(point(p1), point(p2)) / 6371000 AS δ, 0.2 AS f
WITH p1, p2, δ, 
     sin((1-f) * δ) / sin(δ) AS a,
     sin(f * δ) / sin(δ) AS b
WITH radians(p1.latitude) AS φ1, radians(p1.longitude) AS λ1,
     radians(p2.latitude) AS φ2, radians(p2.longitude) AS λ2,
     a, b
WITH a * cos(φ1) * cos(λ1) + b * cos(φ2) * cos(λ2) AS x,
     a * cos(φ1) * sin(λ1) + b * cos(φ2) * sin(λ2) AS y,
     a * sin(φ1) + b * sin(φ2) AS z
RETURN degrees(atan2(z, sqrt(x^2 + y^2))) AS φi,
       degrees(atan2(y,x)) AS λi
╒═════════════════╤════════════════════╕
│φi               │λi                  │
╞═════════════════╪════════════════════╡
│51.49037311149128│-0.04880308288561931│
└─────────────────┴────────────────────┘

A quick sanity check: plugging in 0.5 instead of 0.2 finds the midpoint, which matches the result from yesterday’s post:

╒═════════════════╤═════════════════════╕
│φi               │λi                   │
╞═════════════════╪═════════════════════╡
│51.48613822097523│-0.050729537454086385│
└─────────────────┴─────────────────────┘

That’s all for now!

Categories: Blogs

For and Against and For Software Craftsmanship

Leading Agile - Mike Cottmeyer - Tue, 11/01/2016 - 13:00

The idea of software craftsmanship, as expressed in the Manifesto for Software Craftsmanship, is (in part) to encourage software developers to strive for excellence in their work in order to create productive partnerships with customers and to add value steadily for those customers.

The highly respected software developer and customer-focused consultant, Dan North, blogged in 2011 that “Software Craftsmanship risks putting the software at the centre rather than the benefit the software is supposed to deliver.” Let’s ignore (or try to ignore) the obvious contradiction between the critique of the concept and its actual expression, and examine an analogy Dan uses to illustrate his point.

He points out that in a craft such as, for instance, cathedral-building, the work is intrinsically beautiful in its own right. In contrast, when the same sort of stone that was used in the cathedral is used to build a bridge, the goal is to make the bridge sturdy and utilitarian, such that people don’t even notice it.

As I see it, both the cathedral and the bridge are equally beautiful. Each is designed to serve a particular purpose. One purpose of the cathedral is to inspire awe and wonder in those who see it. This is one of the ways in which it performs its function in society. One purpose of the bridge is to be functional without distracting the user from his/her own business. This is one of the ways in which it performs its function in society. These are different design goals, and yet both require the same degree of engineering skill and craftsmanship.

Joel Spolsky has also questioned the usefulness of the term “craftsmanship” as applied to software. In a piece dating from 2003, he writes “If writing code is not assembly-line style production, what is it? Some have proposed the label craftsmanship. That’s not quite right, either, because I don’t care what you say: that dialog box in Windows that asks you how you want your help file indexed does not in any way, shape, or form resemble what any normal English speaker would refer to as ‘craftsmanship.'”

He’s right. The average English speaker does associate some sort of subjective notion of “beauty” or “artistry” with the word “craftsmanship.” But average English speakers don’t know any more about what makes the utilitarian bridge “beautiful” to an engineer than they know what makes the Windows dialog box “beautiful” to a software developer. And they don’t need to know that. It isn’t part of their world. They’re getting what they need from the cathedral, the bridge, and the dialog box. That is, in fact, the reason those things are recognized as beautiful by makers. If the dialog box performs its function without interfering with the user’s workflow, it’s damned beautiful. It’s as beautiful as a cathedral.

Another highly-respected software expert, Liz Keogh, has also weighed in against the idea of software craftsmanship; or at least, against the way the idea has been expressed. She writes, “I dislike the wording of the manifesto’s points because I don’t think they differentiate between programmers who genuinely care about the value they deliver, programmers who care about the beauty of their code, and programmers who hold a mistaken belief in their own abilities. Any software developer–even the naive, straight-out-of-college engineer with no knowledge of design and little knowledge of incremental delivery–could sign up to that manifesto in the mistaken belief that they were doing the things it espouses.”

She’s right: many individuals overestimate their own abilities. But I disagree that this invalidates the attempt to express an aspirational goal, and it says right near the top of the manifesto: “As aspiring software craftsmen…” (emphasis mine). So it isn’t a question of people believing they’re already software craftsmen, and the fact that some people overestimate their abilities has little bearing on the document in question.

Liz is also right that different statements in the manifesto address different topics: Both customer value and code quality are mentioned. One is a goal and the other is a means. Both should be mentioned.

And there’s a false dichotomy in Liz’s comment, I think. Why would a software craftsperson not care about both the value they deliver and the beauty of their code? Does one negate the other? Indeed, doesn’t attention to clean design help support value delivery? Badly-designed code is more likely to contain errors and more likely to be hard to maintain than well-designed code.

The manifesto, like most products of human beings, is imperfect. If we were to wait for a thing to be perfect before finding it in any way useful, then we’d still be fleeing from sabre-toothed cats in the tall grass of the savannah. Actually, come to think of it, we wouldn’t. We’d be dead. Our ancestors would have eschewed any less-than-perfect means of escape. Having eschewed, they would have been chewed.

Why is it that critics of software craftsmanship seem to miss the point? I might offer three humble observations.

1. Snap judgment

The criticisms of the manifesto almost universally suggest the critic has not read the document carefully. It’s possible that some people read the title and skim the thing, and then react on a gut level to one or more words they assume carry some implication they disapprove of.

It seems to me “add value steadily [for] customers” doesn’t mean “elevate the software above customer value.” Similarly, “aspiring software craftsmen” doesn’t mean “I overestimate my own abilities.”

2. Inability to compartmentalize thinking

When we try to understand a complicated thing, it’s often useful to switch between big-picture and focused thinking. We want to keep the whole in mind without discarding our ability to comprehend its parts.

The big picture is that the purpose of software development is to provide value to the stakeholders of the software. I doubt anyone means to elevate the craft of software development above that purpose. To think about, talk about, and strive to excel in the craft of software development takes nothing away from the larger goal of providing value to stakeholders. Indeed, such activity is motivated by the desire to provide that value. I might suggest it would be difficult, if not impossible, to deliver value to customers without paying due attention to craftsmanship.

If we turn the critique around, we might ask: How does one propose to provide value to the stakeholders of software without understanding or applying good software development practices? How does one propose to develop an understanding of good software development practices, and sound habits to apply them, without making a conscious and mindful effort to do so? If the stonemasons and others involved in construction had ignored the skills of their respective crafts, how good would the cathedral be? The bridge?

3. Limited conception of beauty

If we define “beauty” to suggest a close alignment between the finished product and its design goals, then I suggest both the cathedral and the bridge are beautiful from the perspective of the engineers, architects, and craftsmen who contributed to their construction. The fact the cathedral catches the attention of passers-by while the bridge goes unnoticed as people cross it means nothing less than both structures have achieved their design goals. Their “users” appreciate the value both objects bring, even if they don’t grasp the nuances of craftsmanship that went into their construction. And without those nuances, the cathedral would be nothing more than “a big hut for people to meet in” and the bridge would be a disaster waiting to happen.

Conclusion

At LeadingAgile, we appreciate software craftsmanship. We understand it exists solely to enable the delivery of value to customers. We’re frankly a bit confused when we hear or read interpretations that miss that point. We also understand that without attention to excellence of execution, no one can deliver value to customers. Has the manifesto been signed by people who may not be well qualified to speak of craftsmanship? Maybe, but the document is more a commitment than a diploma, so I think it’s fine for anyone to sign it who is on board with the concept. When you sign it, you place yourself publicly on a lifelong journey of learning and self-improvement. I’m at a loss to see what’s wrong with that.

The post For and Against and For Software Craftsmanship appeared first on LeadingAgile.

Categories: Blogs

Coaches, Managers, Collaboration and Agile, Part 3

Johanna Rothman - Mon, 10/31/2016 - 22:33

I started this series writing about the need for coaches in Coaches, Managers, Collaboration and Agile, Part 1. I continued in Coaches, Managers, Collaboration and Agile, Part 2, talking about the changed role of managers in agile. In this part, let me address the role of senior managers in agile and how coaches might help.

For years, we have organized our people into silos. That meant we had middle managers who (with any luck) understood the function (testing or development) and/or the problem domain (think about the major chunks of your product, such as Search, Admin, Diagnostics, the feature sets). I often saw technical organizations organized into product areas with directors at the top, and some functional directors, such as those for test/quality and/or performance.

In addition to the idea of functional and domain silos, some people think of testing or technical writing as services. I don’t think that way. To me, it’s not a product unless you can release it. You can’t release a product without having an idea of what the testers have discovered and, if you need it, user documentation for the users.

I don’t think about systems development. I think about product development. That means there are no “service” functions, such as test. We need cross-functional teams to deliver a releasable product. But, that’s not how we have historically organized the people.

When an organization wants to use agile, coaches, trainers, and consultants all say, “Please create cross-functional teams.” What are the middle managers supposed to do? Their identity is about their function or their domain. In addition, they probably have MBOs (Management By Objectives) for their function or domain. Aside from the fact that silos don’t work and further reduce flow efficiency, we have now affected their compensation. Now we have the container problem I mentioned in Part 2.

Middle and senior managers need to see that functional silos don’t work. Even silos by part of product don’t work. Their compensation has to change. And, they don’t get to tell people what to do anymore.

Coaches can help middle managers see what the possibilities are, for the work they need to do and how to muddle through a cultural transition.

Instead of having managers tell people directly what to do, we need senior management to update the strategy and manage the project portfolio so we optimize the throughput of a team, not a person. (See Resource Management is the Wrong Idea; Manage Your Project Portfolio Instead and Resource Efficiency vs. Flow Efficiency.)

The middle managers need coaching and a way to see what their jobs are in an agile organization. The middle managers and the senior managers need to understand how to organize themselves and how their compensation will change as a result of an agile transformation.

In an agile organization, the middle managers will need to collaborate more. Their collaboration includes helping the teams hire, creating communities of practice, providing feedback and meta-feedback, coaching and meta-coaching, helping the teams manage team health, and, most importantly, removing team impediments.

Teams can remove their local impediments. However, managers often control or manage the environment in which the teams work. Here’s an example. Back when I was a manager, I had to provide a written review to each person once a year. Since I met with every person each week or two, it was easy for me to do this. And, when I met with people less often, I discovered they took initiative to solve problems I didn’t know existed. (I was thrilled.)

I had to have HR “approve” these reviews before I could discuss them with the team member. One not-so-experienced HR person read one of my reviews and returned it to me. “This person did not accomplish their goals. You can’t give them that high a ranking.”

I explained that the person had finished more valuable work. And, HR didn’t have a way to update goals in the middle of a year. “Do you really want me to rank this person lower because they did more valuable work than we had planned for?”

That’s the kind of obstacle managers need to remove. Ranking people is an obstacle, as is having yearly goals. If we want to be able to change, the goals can’t be about projects.

We don’t need to remove HR, although their jobs must change. No, I mean the HR systems are an impediment. This is not a one-conversation-and-done impediment. HR has systems for a reason. How can the managers help HR to become more agile? That’s a big job and requires a management team who can collaborate to help HR understand. That’s just one example. Coaches can help the managers have the conversations.

As for senior management, they need to spend time developing and updating the strategy. Yes, I’m fond of continuous strategy update, as well as continuous planning and continuous project portfolio management.

I coach senior managers on this all the time.

Let me circle back around to the question in Part 1: Do we have evidence we need coaches? No.

On the other hand, here are some questions you might ask yourself to see if you need coaches for management:

  • Do the managers see the need for flow efficiency instead of resource efficiency?
  • Do the managers understand and know how to manage the project portfolio? Can they collaborate to create a project portfolio that delivers value?
  • Do the managers understand how to set strategic direction and how often they might need to update it?
  • Do the managers understand how to move to more agile HR?
  • Do the managers understand how to move to incremental funding?

If the answers are all yes, you probably don’t need management coaching for your agile transformation. If the answers are no, consider coaching.

When I want to change the way I work and the kind of work I do, I take classes and often use some form of coaching. I’m not talking about full-time in person coaching. Often, that’s not necessary. But, guided learning? Helping to see more options? Yes, that kind of helping works. That might be part of coaching.

Categories: Blogs

Neo4j: Find the midpoint between two lat/longs

Mark Needham - Mon, 10/31/2016 - 21:31

[Screenshot: Google Maps showing the two original points and the computed midpoint]

Over the last couple of weekends I’ve been playing around with some transport data and I wanted to run the A* algorithm to find the quickest route between two stations.

The A* algorithm takes an estimateEvaluator as one of its parameters, and the evaluator looks at the lat/longs of nodes to work out whether a path is worth following or not. I therefore needed to add lat/longs for each station, and I found it surprisingly hard to find this location data for all the points in the dataset.

Luckily I tend to have the lat/longs for two points on either side of a station, so I can work out the midpoint as an approximation for the missing one.

I found an article which defines a formula we can use to do this and there’s a StackOverflow post which has some Java code that implements the formula.

I wanted to find the midpoint between Surrey Quays station (51.4931963543,-0.0475185810) and a point further south on the train line (51.47908,-0.05393950). I wrote the following Cypher query to calculate this point:

WITH 51.4931963543 AS lat1, -0.0475185810 AS lon1, 
     51.47908 AS lat2 , -0.05393950 AS lon2
 
WITH radians(lat1) AS rlat1, radians(lon1) AS rlon1, 
     radians(lat2) AS rlat2, radians(lon2) AS rlon2, 
     radians(lon2 - lon1) AS dLon
 
WITH rlat1, rlon1, rlat2, rlon2, 
     cos(rlat2) * cos(dLon) AS Bx, 
     cos(rlat2) * sin(dLon) AS By
 
WITH atan2(sin(rlat1) + sin(rlat2), 
           sqrt( (cos(rlat1) + Bx) * (cos(rlat1) + Bx) + By * By )) AS lat3,
     rlon1 + atan2(By, cos(rlat1) + Bx) AS lon3
 
RETURN degrees(lat3) AS midLat, degrees(lon3) AS midLon
╒═════════════════╤═════════════════════╕
│midLat           │midLon               │
╞═════════════════╪═════════════════════╡
│51.48613822097523│-0.050729537454086385│
└─────────────────┴─────────────────────┘

The Google Maps screenshot on the right hand side shows the initial points at the top and bottom and the midpoint in between. It’s not perfect; ideally I’d like the midpoint to be on the track, but I think it’s good enough for the purposes of the algorithm.

Now I need to go and fill in the lat/longs for my location-less stations!
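
To actually fill them in, a Cypher sketch along the following lines could compute and store the midpoint for every station that is missing coordinates. The Station label, the :NEXT relationships, and the latitude/longitude properties are assumptions about the model here, not necessarily the real schema:

// assumed model: (:Station)-[:NEXT]->(:Station) with latitude/longitude properties
// find stations with no location that sit between two located stations
MATCH (a:Station)-[:NEXT]->(b:Station)-[:NEXT]->(c:Station)
WHERE b.latitude IS NULL AND a.latitude IS NOT NULL AND c.latitude IS NOT NULL
WITH b,
     radians(a.latitude) AS rlat1, radians(a.longitude) AS rlon1,
     radians(c.latitude) AS rlat2,
     radians(c.longitude - a.longitude) AS dLon
WITH b, rlat1, rlon1, rlat2,
     cos(rlat2) * cos(dLon) AS Bx,
     cos(rlat2) * sin(dLon) AS By
// same midpoint formula as above, written back onto the station
SET b.latitude  = degrees(atan2(sin(rlat1) + sin(rlat2),
                                sqrt((cos(rlat1) + Bx) * (cos(rlat1) + Bx) + By * By))),
    b.longitude = degrees(rlon1 + atan2(By, cos(rlat1) + Bx))
RETURN count(b) AS stationsUpdated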

Categories: Blogs

Dockerfile Configuration Cheatsheets

Derick Bailey - new ThoughtStream - Mon, 10/31/2016 - 13:45

Building your own Docker image is just about the easiest thing you can imagine doing with a command-line tool. It’s only 3 “words” to build the image, after all.

But getting the Dockerfile right, so that these three words will run correctly and produce the results that you want? Well… that’s a bit of a different story.

Dockerfile configuration has dozens of options.

And several of them seem to do something similar, or even the same thing (ADD vs COPY, and ENTRYPOINT vs CMD, for example).

Then, when you put all of these options in a single “page” of endless scrolling for the official Dockerfile reference, it’s easy to see how this can make Dockerfile configuration frustrating – especially if it’s not something you do on a regular basis.

To combat this problem, I created the Dockerfile Configuration and Advanced Dockerfile cheatsheets.

They represent the most common and useful Dockerfile configuration items, allowing you to quickly and easily be reminded of what options you should be using, when.

And like the Docker Management cheatsheet I created, they are free!

Download the Docker Cheatsheets

You can grab the cheatsheets individually:

Or you can grab them all at once, with the cheatsheet collection:

Get The Complete Docker Cheatsheet Collection

[Image: Docker cheatsheet stack]

The post Dockerfile Configuration Cheatsheets appeared first on DerickBailey.com.

Categories: Blogs

Links for 2016-10-30 [del.icio.us]

Zachariah Young - Mon, 10/31/2016 - 09:00
Categories: Blogs

Building an Agile Culture of Learning

Does your Agile education begin and end with barely a touch of training? A number of colleagues have told me that in their companies, Agile training ranged from 1 hour to 1 day. Some people received 2 days of Scrum Master training. With this limited training, they were expected to implement and master the topic. Will this suffice for a transformation to Agile? Agile isn’t simply a process or skill that can be memorized and applied; it is a culture shift.
Education is an investment in your people. A shift in culture requires an incremental learning approach that spans time, and what works in one company may not work in another. A learning culture should be an intrinsic part of your Agile transformation, one that includes education in skills, roles, process, culture, and behavior, with room to experience and experiment.
An Agile transformation requires a shift toward a continuous learning culture which will give you wings to soar!  You need a combination of training, mentoring, coaching, experimenting, reflecting, and giving back. These education elements can help you become a learning enterprise.  Let's take a closer look at each:
Training is applied when an enterprise wants to build employee skills, educate employees in their role, or roll out a process. It is often event driven and a one-way transfer of knowledge. What was learned can be undone when you move back into your existing culture.
Coaching helps a team put the knowledge into action and lays the groundwork for transforming the culture. Coaching provides a two-way communication process so that questions can be asked along the way. A coach can help you course-correct and promote right behaviors for the culture you want.
Mentoring focuses on relationships and building confidence and self-awareness. The mentee invests time by proposing topics to be discussed with the mentor in the relationship. In this two-way communication, deep learning can occur.
Experimenting focuses on trying out the new skills, roles, and mindset in a real-world setting. This provides first-hand knowledge of what you’ve learned and allows for a better understanding of Agile.
Reflecting focuses on taking the time to consider what you learned, whether it is a skill, process, role, or culture, and to determine what you can do better and what else you need on your learning journey.
Giving back occurs when the employee has gained enough knowledge, skills, and experience to start giving back to their community, making the learning circle complete. Helping others highlights a feeling of ownership of the transformation and the learning journey.
It takes a repertoire of educational elements to achieve an Agile culture and become a learning enterprise. When you have people willing to give back, the learning circle is complete and your enterprise can soar.

-------------------

For more Agile-related learning and education articles, consider reading:



Categories: Blogs