
Feed aggregator

Encouraging healthy conflict

Growing Agile - Wed, 03/22/2017 - 10:40
We were recently at a client who wanted to encourage healthy conflict within their teams. This is a common desire amongst teams who have become very easy going with their process. Usually some signs are: everyone agrees to everything or says nothing and just goes along with the group. Very little change or innovation or […]
Categories: Companies

Docker container secrets on AWS ECS

Xebia Blog - Wed, 03/22/2017 - 09:42

Almost every application needs some kind of secret or secrets to do its work. There are all kinds of ways to provide these to the containers, but it all comes down to the following five: Save the secrets inside the image Provide the secrets through ENV variables Provide the secrets through volume mounts Use a secrets […]

The post Docker container secrets on AWS ECS appeared first on Xebia Blog.

Categories: Companies

Deprecating private impediments

TargetProcess - Edge of Chaos Blog - Tue, 03/21/2017 - 17:01

Good day everyone!

In our efforts to continuously improve the Targetprocess experience for you, we're analyzing the performance of some core features, such as visualizing your data on dozens of different views or accessing that data through our API. It's a well-known fact in the software engineering industry that every feature comes with a cost. Unfortunately, sometimes the features we build become obsolete or just don't fire off at all. In a perfect world, such features would be free or extremely cheap to maintain and we could simply ignore them. However, the real world is much more cruel, and quite often there is a cost associated with the ongoing support of these features.

Our "private impediments" feature is a good example of this. According to our analysis, its usage is close to none, but it adds a significant performance overhead to our data querying operations, most notably for inbound/outbound relations lookup. Therefore, we'd like to remove private impediments from Targetprocess in our upcoming release.

So, what does this mean for you?

If you don't use impediments at all, then nothing changes for you. If you use impediments but don't use the "Private" flag on them, then once again nothing changes for you. If you have private impediments, they will be deleted from your Targetprocess account, unless you make them public before the new release.

Wait, what? Are you really going to delete my private impediments?!

Well, yeah, but we've thought this through. There are basically two options: either delete them, or make them public. We assumed that it would be terrible to make someone's private data publicly visible. Also, given that private impediment usage is quite low and that we continuously make backups for our on-demand instances, we'd be able to restore the data for individual customers if you ask us to.

Hopefully, this all makes sense for you. Don't hesitate to get in touch and contact our support if you have any questions!

Categories: Companies

TDD is not about unit tests

Xebia Blog - Tue, 03/21/2017 - 15:42

-- Dave Farley &amp; Arjan Molenaar On many occasions when we arrive at a customer, we're told the development team is doing TDD. Often, though, a team is writing unit tests, but it's not doing TDD. This is an important distinction. Unit tests are useful things. Unit testing, though, says nothing about how to create […]

The post TDD is not about unit tests appeared first on Xebia Blog.

Categories: Companies

Scrum Days Poland, Warsaw, Poland, June 5-6 2017

Scrum Expert - Tue, 03/21/2017 - 10:00
Scrum Days Poland is a two-day conference focused on Scrum and Agile project management approaches. It aims to create an environment where people can meet, build social networks, do business and have...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Final Details for Agile Fluency Coaching Workshop

James Shore - Tue, 03/21/2017 - 10:00
21 Mar 2017 James Shore/Blog

Our Agile Fluency™ Game coaching workshop is coming up fast! Signups close on March 28th. Don't wait!

We've been hard at work finalizing everything for the workshop. We hired Eric Wahlquist to do the graphic design and he did a great job.

Diana Larsen and I have also finalized the agenda for the workshop. It's so much more than just the game. The workshop is really a series of mini-workshops that you can use to coach your teams. Check 'em out:

  1. The Agile Fluency Game: Discover interrelationships between practices and explore the tradeoffs between learning and delivery
  2. Your Path through the Agile Fluency Model: Understand fluency zone tradeoffs and choose your teams' targets
  3. Zone Zoom: Understand how practices enable different kinds of fluency
  4. Trading Cards: Explore tradeoffs between practices
  5. Up for Adoption: See how practices depend on each other and which ones your teams could adopt
  6. Fluency Timeline: Understand the effort and time required for various practices
  7. Perfect Your Agile Adoption: Decide which practices are best for your teams and how to adopt them

These are all hands-on, experiential workshops that you'll learn how to conduct with your own teams. I think they're fantastic. You can sign up here.

Categories: Blogs

A Very Short Course in SNAP

Agile Estimator - Tue, 03/21/2017 - 04:18

Software Non-Functional Assessment Process

This is a very short course in the International Function Point Users Group’s (IFPUG’s) Software Non-Functional Assessment Process (SNAP). It is offered in the same spirit as A Very Short Course in Function Point Analysis. As I begin this post, I am reminded of an experience that I had in graduate school about 30 years ago. I had been assigned to do some research and give a 3 minute presentation on it. I gave the presentation, but I took about 5 minutes. The professor said, “Your presentation was interesting. But, if you really knew what you were talking about, you could have summarized it in the 3 minutes that I assigned you.” We will see if I know what I am talking about when it comes to SNAP.

Obviously, there is an introduction that should be given to SNAP. This is a story of committees and impact studies, rewrites and certifications. What you really need to understand is that SNAP is the result of about a quarter of a century of function point counters talking to unhappy project managers. Project managers would describe in painful detail how a batch job worked. The counter would ask, “Does anything cross the boundary?” If the answer was no, there was no function point credit given. Sometimes the counter would sugar coat it by saying that some other functional transaction really covered this functionality. Sometimes the counter would be told about some non-functional requirement and explain that the effort was covered by one of the General System Characteristics. The manager was seldom satisfied. SNAP addresses these concerns.

There are fourteen sub-categories of non-functionality that make up a SNAP assessment. Basically, you must understand each one. There is some consistency between them, but not as much as you might expect. In addition, several of them are applied differently when a functional or non-functional capability is being added to an application than when changes are being made. Many of the sub-categories are based on elementary processes. These are business processes that make sense to the user. People familiar with function points know that External Inputs (EIs), External Outputs (EOs) and External Inquiries (EQs) are all elementary processes. However, not all elementary processes are EIs, EOs or EQs. An example would be a business process that does not cause data to cross the application boundary.

Data Entry Validation and Logical and Mathematical Operations are two subcategories that coexist with functional transactions. An input screen would probably be an EI. In addition to the function points for an EI, the input might also have SNAP points for data entry validations. The number of SNAP points is a function of the nesting level and the number of data element types (DETs). Nesting level is the number of inter-field validations. For example, if county is validated based on state, there is a nesting level of 2. Logical and Mathematical Operations works in a similar fashion. Any report is likely to be an EO. However, if it requires complicated math, like the solution of simultaneous equations, then it may also be awarded SNAP points for that complexity.
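To make the mechanics concrete, here is an illustrative-only sketch of how nesting level and DET count might combine into a point value. The rating thresholds and multipliers below are placeholders I invented to show the shape of the calculation; the official values live in IFPUG's SNAP Assessment Practices Manual.

```python
# Illustrative-only sketch of sizing the Data Entry Validation
# sub-category. The thresholds and point values are made up to show
# the mechanics; they are NOT the official IFPUG SNAP tables.

def data_entry_validation_points(nesting_level: int, dets: int) -> int:
    """SNAP points grow with both nesting level and DET count."""
    # Hypothetical complexity rating derived from nesting level.
    if nesting_level <= 1:
        rating = 1  # low: single-field validations only
    elif nesting_level <= 3:
        rating = 2  # average: e.g. county validated based on state
    else:
        rating = 3  # high: deeply nested inter-field rules
    # Hypothetical scaling by the number of data element types.
    return rating * dets

# County validated against state: nesting level 2, with (say) 12 DETs.
print(data_entry_validation_points(2, 12))
```

The point is only that two independent inputs (nesting depth and DET count) jointly drive the size, unlike function points, where the EI rating alone would apply.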

Data Formatting and User Interfaces also apply to elementary processes, and it appears they should work like the two sub-categories above. They do not. Data Formatting is primarily concerned with encryption. It may involve displaying sensitive personal information masked with asterisks. It might also involve algorithms that encrypt data before writing or transmitting it. In any case, some people consider it a gray area with regard to functionality. If function points are awarded for the change, then SNAP points are not. When SNAP points are awarded, they are a function of the type of encryption and the number of DETs. User Interfaces is for situations where a screen is being changed without a functional change. For example, if the font size of screen elements is changing, there is no functional change to count as function points. This is when the User Interfaces sub-category comes into play. SNAP points are awarded based on the number of user interface elements and DETs. This means that User Interfaces does not come into play for new development or for changes that have a functional component.

SNAP assessments use the same application boundaries as a function point count of the same application. However, SNAP introduces the concept of a partition. The partition is technical in nature. A client-server application may have two partitions: client and server. An application built using a three-tier architecture may have three partitions. A batch partition, or a special server (like a fax server) that offloads some processing, might be considered a partition. The Internal Data Movements sub-category captures the SNAP points associated with sending data from one partition to another. The complexity is a function of the number of File Types Referenced (FTRs) and the number of DETs. Remember that these internal data movements are not functional because they do not cross the application boundary. However, they certainly add to the effort required to implement the application.

Delivering Added Values to Users by Data Configuration is a sub-category that comes into play for applications that use configuration tables to add additional capabilities. For example, a company might want its application to be easily modified every time it entered a new market. The functionality for each new market may be the same, but there will be changes to things like addresses. There might also be promotions that are applicable to one market, but not the others. These might all be set up as tables. There can be no function points awarded for this, because only new rows are being added to tables, not new columns. However, the new configurations have to be set up, entered and tested. SNAP points are awarded for this. The points are a function of the number of elementary processes that depend upon these configuration files, the number of records that are added to the files, and the number of attributes in each record.

Help Methods is a SNAP sub-category. It is evaluated at the application level, but its complexity is calculated based on the type of help and the number of help items actually implemented. Types of help can range from user manuals to online text to context-sensitive help. Some estimators are not satisfied with this. Help is basically writing content, while the rest of the application involves writing software. Some organizations have a separate technical writing group with its own approach to estimating this kind of content. For these people, help should be estimated separately.

Multiple Input Methods and Multiple Output Methods are two SNAP sub-categories that are strongly related to some function point concepts. Multiple Input / Output Interfaces is a sub-category that sounds like it should be related, but is different. First, Multiple Input Methods covers the situation where an input may be entered using multiple methods. For example, a product id might be entered into a keypad by someone doing inventory, or scanned by a bar-code reader. The same is true for Multiple Output Methods. The same report might go to a printer or be sent to a recipient by email. Here is the complication: from a function point perspective, are these one elementary process or multiple ones? IFPUG allows either interpretation, as long as it is documented. Now, from a SNAP perspective, no SNAP points are awarded if function points have been. Otherwise, the SNAP points are a function of the number of DETs in the input or output as well as the number of additional inputs or outputs. The Multiple Input / Output Interfaces sub-category is for situations where additional inputs or outputs are added because of increased usage. For example, a company might have a joint marketing agreement with another company. It would accept a file of that company’s customers and perform some type of marketing with them. The company might enter into this same relationship with an additional company. The format and logic of the additional input would be the same, so there is no additional functionality. However, there is effort to incorporate this new partner into the system. This is where this sub-category comes into play. Like the multiple methods sub-categories, the SNAP points are a function of the number of DETs, as well as the number of additional inputs and outputs.

The Multiple Platforms sub-category awards SNAP credit when a duplication of effort is necessary to deliver functionality on more than one technical platform. Internet browsers are an obvious case of this. If the same website must work in Internet Explorer and Chrome, then SNAP points will be awarded for each elementary process that is delivered as such. The points are a function of the number of platforms and whether the platforms are in the same family. For example, delivering functionality in both an expert system shell and a procedural language would be a case where different families of software platform were used.

The Database Technology sub-category basically serves two purposes. The first purpose is to award SNAP points for database-related adds or changes that do not get functional credit. Examples include any database that is added or changed strictly for performance reasons, or changing the order of fields in a file. Remember that changes that get function point credit do not get SNAP credit. Also, this sub-category is captured at the elementary process level, not at the file level. Therefore, if 5 physical files are changed in order to increase the performance of the Create Order process, then the SNAP credit is a single value associated with Create Order, not 5 values associated with the 5 files. The amount of SNAP credit is a function of the maximum number of RETs in the files changed and the number of changes made. The other purpose of this sub-category is for code tables. Code tables are tables that capture abbreviations in an application. An example would be the codes associated with airports. EWR is the code for Newark Liberty International Airport. Using the abbreviation is simpler and cuts down on data entry errors. Conceptually, it does not change the application. For purposes of SNAP, all of the code data in an application is considered to be in a single logical file. The complexity of the code table is based on the number of functions that it performs. It might be used for substitution, to validate data entry, or for static data. Again, the number of changes is considered.

Batch Processing is used to award credit for batch processing that is not awarded functional credit. Batch processing that accepts data from outside the application boundary, or produces data that crosses the application boundary, is counted as function point transactions. These do not get SNAP points. However, there are often batch processes that run at the end of the day and update logical files without crossing the application boundary. These are awarded SNAP points. The number of SNAP points is a function of the number of files that are read or updated by the batch process and the number of DETs that are processed.

Component Based Software is a sub-category that assigns SNAP points for components that are used in software development. This is common in websites. A page may require that a form be filled out to allow email addresses to be captured and maintained. This functionality may be supplied by a third-party plugin. There would be an elementary process for the underlying page, as well as non-functional credit for using a component in the development. The complexity of this sub-category is based on whether the component was developed in house or is a third-party component. The latter are considered to be more complex.

Categories: Blogs

10 Benefits of Lean

Enjoy this excerpt from the Lean Business Report. Download the full report here.
The data makes it...

The post 10 Benefits of Lean appeared first on Blog | LeanKit.

Categories: Companies

Becoming an Agile Leader, Part 5: Learning to Learn

Johanna Rothman - Mon, 03/20/2017 - 20:38

To summarize: your agile transformation is stuck. You’ve thought about your why, as in Becoming an Agile Leader, Part 1: Define Your Why. You’ve started to measure possibilities. You have an idea of who you might talk with as in Becoming an Agile Leader, Part 2: Who to Approach. You’ve considered who you need as allies and how to enlist them in Becoming an Agile Leader, Part 3: How to Create Allies. In Becoming an Agile Leader, Part 4: Determining Next Steps, you thought about creating win-wins with influence. Now, it’s time to think about how you and the people involved (or not involved!) learn.

As an agile leader, you learn in at least two ways: observing and measuring what happens in the organization. (I have any number of posts about qualitative and quantitative measurement.) Just as importantly, you learn by thinking, discussing with others, and working with others. The people in the organization learn in these ways, too.

The Satir Change Model is a great way of showing what happens when people learn. (Learning is a form of change.) Here’s the quick intro to the Change Model: We start off in Old Status Quo, what we did before. Along comes a Foreign Element, where someone introduced some kind of change into the environment. We have uneven performance until we discover our Transforming Idea. Once we have an idea that works, we can continue with Practice and Integration until we have more even performance in New Status Quo.

In the Influential Agile Leader, you have a chance to think alone (with your pre-work), to discuss together (such as when you draw your map in Part 1), and to work together (as in coaching and influence and all the other parts of the day). One of the most important things we do is to debrief all the activities just after you finish them. That way, people have a chance to articulate what they learned and any confusions they still have.

Every person learns in their own way, at their own pace. With interactions, simulations, and some thinking time, people learn in the way they need to learn.

We don’t tell people what to do or how to think. We suggest options we’ve seen work before (in coaching). We might help supply some options for people who don’t know of alternatives. And, the participants work together. Each person’s situation is a little different. That means each person has experiences that enrich the entire room.

Learn to be an agile leader and help your agile transformation progress. Please join us at the next Influential Agile Leader, May 9-10, 2017 in Toronto.

Categories: Blogs

Retromat – Random Scrum Retrospectives Plan Generator

Scrum Expert - Mon, 03/20/2017 - 19:44
Retromat is a free online website that allows you to generate random plans for Agile and Scrum retrospectives. Out of a pool of more than 100 activities, it selects one for each of the five phases (stage...

Categories: Communities

Learning Tools beginning Exponential Growth

Agile Complexification Inverter - Mon, 03/20/2017 - 18:33
Learning tools are beginning to benefit from the exponential growth of knowledge creation and transfer.  Here is Seeing Theory, an example of this.  Did you do well in Probability and Stats in school?  Funnily enough, I can predict with astonishing accuracy that a majority of you said "NO!"  I also struggled with those courses, and may have repeated one a time or two.  But now, with some age, I find it much more fascinating to study this subject.

Seeing Theory Learning Site

"By 2030 students will be learning from robot teachers 10 times faster than today" by World Economic Forum.

This is what happens when humans debate ethics in front of a super intelligent learning AI.

See Also:

TED Radio Hour : NPR : Open Source World  Tim Berners-Lee tells the story of how Gopher killed its own user-base growth, and how CERN declared the WWW open source on April 30th, 1993, ensuring it would continue to prosper.  And was its growth exponential?

Categories: Blogs

Tired Of Waiting For ‘npm install’ To Finish Every Time You Touch Docker?

Derick Bailey - new ThoughtStream - Mon, 03/20/2017 - 13:30

In February, I launched the first of my WatchMeCode: Live! sessions on Docker. This is a series where I do a live webinar-style session of talking about code, providing commentary and getting live Q&A from the audience at the end.

For March 2017, I’m preparing another session on the ever-so-frustrating npm delay in Docker.

What is the npm delay?

This tweet from Sergio Rodrigo sums it up best:

Now throw Docker into the mix with “RUN npm install” in your Dockerfile and a host-mounted volume to edit code, and things get really ugly, fast.

What was once a 5 minute install is now more than 10 minutes. And worse, it seems every time you touch anything in Docker or your project, you incur yet another round of “npm install”.

Fortunately, there is a solution.

I’ve recently been using some tools – built in to Docker – to cut this constant time sink from my Docker projects.

Instead of dealing with the npm delay by playing video games, watching Netflix, or generally slacking off, I’m only running “npm install” when I actually need a new dependency. I no longer have Docker running it for every single build of my Docker image, or whenever I touch anything in my project.
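The post doesn't reveal its exact technique, but the standard remedy for this problem relies on Docker's build cache: copy only the dependency manifests before running "npm install", so that layer is reused until the manifests actually change. A minimal sketch (my assumption, not necessarily the session's approach):

```dockerfile
# Sketch of cache-friendly instruction ordering (assumed, not from the post).
FROM node:6

WORKDIR /usr/src/app

# Copy only the dependency manifest first. This layer - and the
# npm install layer right after it - stays cached until package.json changes.
COPY package.json ./
RUN npm install

# Copying the application source last means editing code no longer
# invalidates the cached npm install layer.
COPY . .

CMD ["node", "index.js"]
```

With this ordering, `docker build` only re-runs `npm install` when the manifest changes; ordinary source edits reuse the cached layer.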

The best news, though, is that these are simple tools and techniques and they have a huge impact. And I want to show you how to use them in the WatchMeCode: Live! session on March 27th.

Join me for this event and I’ll show you how to eliminate the npm delay in your Docker project.

 I look forward to seeing you at this live session!

   – Derick

The post Tired Of Waiting For ‘npm install’ To Finish Every Time You Touch Docker? appeared first on

Categories: Blogs

The Gift of Feedback (in a Booklet) - Sun, 03/19/2017 - 20:00

Receiving timely relevant feedback is an important element of how people grow. Sports coaches do not wait until the new year starts to start giving feedback to sportspeople, so why should people working in organisations wait until their annual review to receive feedback? Leaders are responsible for creating the right atmosphere for feedback, and to ensure that individuals receive useful feedback that helps them amplify their effectiveness.

I have given many talks and written a number of articles on this topic to help you.

However, today I want to share some brilliant work from colleagues of mine, Karen Willis and Sara Michelazzo (@saramichelazzo), who have put together a printable guide to help people collect feedback and to help structure writing effective feedback for others.

Feedback Booklet

The booklet is intended to be printed in an A4 format, and I personally love the hand-drawn style. You can download the current version of the booklet here. Use this booklet to collect effective feedback more often, and share this booklet to help others benefit too.

Categories: Blogs

Python 3: TypeError: Object of type ‘dict_values’ is not JSON serializable

Mark Needham - Sun, 03/19/2017 - 18:40

I’ve recently upgraded to Python 3 (I know, took me a while!) and realised that one of my scripts that writes JSON to a file no longer works!

This is a simplified version of what I’m doing:

>>> import json
>>> x = {"mark": {"name": "Mark"}, "michael": {"name": "Michael"}}
>>> json.dumps(x.values())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 180, in default
TypeError: Object of type 'dict_values' is not JSON serializable

Python 2.7 would be perfectly happy:

>>> json.dumps(x.values())
'[{"name": "Michael"}, {"name": "Mark"}]'

The difference is in the results returned by the values method:

# Python 2.7.10
>>> x.values()
[{'name': 'Michael'}, {'name': 'Mark'}]

# Python 3.6.0
>>> x.values()
dict_values([{'name': 'Mark'}, {'name': 'Michael'}])

Python 3 no longer returns a list; instead we get a dict_values wrapper around the data.

Luckily this is easy to resolve – we just need to wrap the call to values with a call to list:

>>> json.dumps(list(x.values()))
'[{"name": "Mark"}, {"name": "Michael"}]'

This version works with Python 2.7 as well, so if I accidentally run the script with an old version the world isn’t going to explode.

The post Python 3: TypeError: Object of type ‘dict_values’ is not JSON serializable appeared first on Mark Needham.

Categories: Blogs

Velocity Calculus - The mathematical study of the changing software development effort by a team

Agile Complexification Inverter - Sat, 03/18/2017 - 20:43

In the practice of Scrum, many people appear to have their favorite method of calculating the team's velocity. For many, this exercise appears very academic. Yet when you ask three people, you will invariably get more answers than you have belly-buttons.

Velocity—the rate of change in the position of an object; a vector quantity, with both magnitude and direction.

“Calculus is the mathematical study of change.” — Donald Latorre
This pamphlet describes the method I use to teach beginning teams this one very important Scrum concept via a photo journal simulation.

Some of the basic reasons many teams are "doing it wrong"... (from my comment on Doc Norton's FB question: "Hey social media friends, I am curious to hear about dysfunctions on agile teams related to use of velocity. What have you seen?")

  • mgmt not understanding purpose of Velocity empirical measure;
  • teams using some bogus statistical manipulation called an average, without understanding the constraints within which an average is valid;
  • SM allowing teams to carry over stories and get credit for multiple sprints within one measurement (lack of understanding of empirical);
  • pressure to give "credit" for effort but zero results - culture dynamic viscous feedback loop;
  • lack of understanding of the virtuous cycle that can be built with empirical measurement and understanding of trends;
  • no action to embrace the virtuous benefits of a measure-respond-adapt model (specifically story slicing to appropriate size)
... there's 6 - but saving the best for last:
  • breaking the basic tenets of the scrum estimation model - allow me to expand for those who have already condemned me for violating written (or suggesting unwritten) dogma...
    • a PBL item has a "size" before being Ready (a gate action) for planning;
    • the team adjusts the PBL item size any/ever time they touch the item and learn more about it (like at planning/grooming);
    • each item is sized based on effort/etc. from NOW (or start of sprint - a point in time) to DONE (never on past sunk cost effort);
    • empirical evidence and updated estimates are a good way to plan;
  • therefore carryover stories are resized before being brought into the next sprint - also reprioritized - and crying over spilt milk or lost effort credit is not allowed in baseball (or sprint planning)

Day 1 - Sprint Planning
A simulated sprint plan with four stories is developed. The team forecast they will do 26 points in this sprint.

Day 2
The team really gets to work.

Day 3
Little progress is visible, concern starts to show.

Day 4
Do you feel the sprint progress starting to slide out of control?

Day 5
About one half of the schedule is spent, but only one story is done.

Day 6
The team has started work on all four stories; will this amount of ‘WIP’ come back to hurt them?

Day 7
Although two stories are now done, the time box is quickly expiring.

Day 8
The team is mired in the largest story.

Day 9
The output of the sprint is quite fuzzy. What will be done for the demo? What do we do with the partially completed work?

Day 10
The Sprint Demo day. Three stories done (A, B, & D) get demoed to the PO and accepted.

Close the Sprint
Calculate the Velocity - a simple arithmetic sum.

Story C is resized given its known state and the effort to get it from here to done. 

What is done with the unfinished story? It goes back into the backlog and is ordered and resized.

Backlog grooming (refinement) is done to prepare for the next sprint planning session.

Trophies of accomplishments help motivation and release planning. Yesterday’s weather (pattern) predicts the next sprints velocity.
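The "simple arithmetic sum" can be sketched in a few lines; the individual story sizes below are hypothetical (the post only states the 26-point forecast and the resize of story C), but the mechanics are the same: only stories accepted as done count.

```python
# Velocity is the sum of points for stories the Product Owner accepted.
# Story sizes are illustrative; only the 26-point forecast is from the post.
stories = {"A": 8, "B": 5, "C": 10, "D": 3}  # sprint forecast: 26 points
accepted = {"A", "B", "D"}                    # story C did not get to done

velocity = sum(size for name, size in stories.items() if name in accepted)
print(velocity)  # unfinished story C contributes zero; it is resized instead
```

Note that story C contributes nothing here, which is exactly the "no partial credit" rule: it goes back to the backlog, resized from its current state to done.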

Sprint 2 Begins with Sprint Planning
Day 1
Three stories are selected by the team, including the resized (now 8 points) story C.

Day 2
Work begins on yet another sprint.

Day 3
Work progresses on story tasks.

The cycles of days repeats and the next sprint completes.

Close Sprint 2
Calculate the Velocity - a simple arithmetic sum.

In an alternative world we may do more complex calculus. But will it lead us to better predictability?

In this alternative world one wishes to receive partial credit for work attempted.  Yet the story was resized based upon the known state and getting it to done.

Simplicity is the ultimate sophistication. — Leonardo da Vinci
Now let’s move from the empirical world of measurement and into the realm of lies.

Simply graphing the empirical results and using the human eye &amp; mind to predict is more accurate than many people's math.

Velocity is an optimistic measure. An early objective is to have a predictable team.

Velocity may be a good predictor of release duration. Yet it is always an optimistic predictor.

Variance Graphed: Pessimistic projection (red line) & optimistic projection (green line) of release duration.

While in the realm of fabrication of information — let’s better describe the summary average with its variance.
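Those two projections can be sketched directly; the velocity history and remaining backlog size below are invented for illustration. The optimistic (green) line divides the remaining backlog by the best observed velocity, the pessimistic (red) line by the worst.

```python
import math

# Hypothetical inputs: velocities of finished sprints, points left in backlog.
velocities = [16, 20, 18, 22]
remaining_points = 120

# Optimistic projection (green line): assume the best sprint repeats forever.
optimistic_sprints = math.ceil(remaining_points / max(velocities))

# Pessimistic projection (red line): assume the worst sprint repeats forever.
pessimistic_sprints = math.ceil(remaining_points / min(velocities))

print(optimistic_sprints, pessimistic_sprints)
```

The gap between the two numbers is the variance made visible: the wider the spread in sprint velocities, the less the single average tells you about release duration.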

Categories: Blogs

Beyond the case studies: short stories from enterprises using SAFe

Agile Product Owner - Fri, 03/17/2017 - 22:29

About a year ago we made the decision to work a little harder to help SAFe enterprises input their case studies. We wanted to meet the growing demand for more studies, more industries, and more fully developed narratives.

The effort has paid off. Visit the case studies page and you’ll find a greater diversity of industries and implementations (34 so far). The newer studies tell a more complete story with personal observations, results, shared best practices for implementation, as well as candid insights into missteps, struggles, and successes. It’s a great resource for people looking to compare and contrast their own business environments, as well as get help with decision support when they are selling the idea of SAFe within their organization.

In addition to these in-depth studies, we’ve also been keeping track of SAFe short stories on the web. They come to us from customers, partners, and good old-fashioned Google alerts, but until now, we haven’t had a home for those third party links.

Go to the case studies page and you’ll find a new section, “Short Stories.” That’s where we feature links to publications that reference SAFe implementations. The links are organized by industry and enterprise, and come in all forms and sizes: blog posts, trade show presentations, press releases, videos, and more.

We’ll continue to grow and refine both the case studies and short stories sections. If you have a SAFe story that would be a good candidate for an in-depth study, we’d love to hear from you; we have an online form just for that purpose.

Stay SAFe!

Categories: Blogs

Becoming an Agile Leader, Part 4: Determining Next Steps

Johanna Rothman - Fri, 03/17/2017 - 14:04

To summarize: your agile transformation is stuck. You’ve thought about your why, as in Becoming an Agile Leader, Part 1: Define Your Why. You’ve started to measure possibilities. You have an idea of who you might talk with as in Becoming an Agile Leader, Part 2: Who to Approach. You’ve considered who you need as allies and how to enlist them in Becoming an Agile Leader, Part 3: How to Create Allies.

Now, it’s time to think about what you will do next.

You might be thinking, “I know what to do next. I have a roadmap, I know where we want to be. What are you talking about?”

Influence. I’m talking about discovering the short-term and longer-term actions that will help your agile transformation succeed with the people who hold the keys to it.

Here’s an example. Patrick (not his real name) wanted to help his organization’s agile transformation. When he came to the Influential Agile Leader, that was his goal: help the transformation. That’s one big goal. By the time we got to the influence section, he realized his goal was too big.

What did he want, right now? He was working with one team who wrote technical stories, had trouble getting to done, didn’t demo or retrospect, and wanted to increase the length of their iteration to four weeks from two weeks. He knew that was probably going in the wrong direction. (There are times when it’s okay to increase the length of the iteration. This team had so much change and push for more delivery, increasing the time was not a good option.)

He thought he had problems in the management. He did, but those weren’t the problems in the team. When he reviewed his why and his map, as in Part 1, he realized that the organization needed an agile approach for frequent delivery of customer value. If this team (and several others) could release value on a regular basis, the pressure from the customers and management would lessen. He could work with the managers on the project portfolio and other management problems. But, he was sure that the way to make this happen was to help this team deliver frequently.

He realized he had two influential people to work with: the architect and the QA lead. Both of those people looked as if they were “resisting.” In reality, the architect wanted the developers to refactor to patterns to keep the code base clean. The QA lead thought they needed plans before creating tests and was looking for the “perfect” test automation tool.

He decided that his specific goal was to “Help this team deliver value at least as often as every two weeks. Sustain that delivery over six months.” That goal—a subset of “go agile”—allowed him to work first with the architect and then with the QA lead and then both (yes, he practiced all three conversations in the workshop) to achieve his small goal.

Patrick practiced exploring the short-term and long-term deliverables in conversations in the workshop. While the conversations didn’t go precisely the same way back at work, he had enough practice to move between influence and coaching to see what he could do with the people in his context.

It took the team three more iterations to start delivering small stories, but they did. He spent time enlisting the architect in working in the team with the team members to deliver small stories that kept the code base clean. He asked the architect for help in how to work with the QA lead. The architect showed the lead how to start automation and refactor so the testers could test even before the developers had completed the code.

It took that team three more months to be able to regularly deliver value every week, without waiting for the end of the iteration.

Patrick’s original roadmap was great. And, once he started working with teams and management, he needed to adjust the deliverables he and the other coaches had originally planned. The influence conversations allowed him to see the other people’s concerns, and consider what small deliverables all along the way would help this team succeed.

Some of what he learned with this team helped the other teams. And, the other teams had different problems. He used different coaching and influence conversations with different people.

If you want to experience learning how to influence and who, in the context of helping your agile transformation continue, join us at the next Influential Agile Leader, May 9-10, 2017 in Toronto.

My next post is about what our participants learn.

Categories: Blogs

Works on my Machine

Leading Agile - Mike Cottmeyer - Fri, 03/17/2017 - 13:00

One of the most insidious obstacles to continuous delivery (and to continuous flow in software delivery generally) is the works-on-my-machine phenomenon. Anyone who has worked on a software development team or an infrastructure support team has experienced it. Anyone who works with such teams has heard the phrase spoken during (attempted) demos. The issue is so common there’s even a badge for it:

Perhaps you have earned this badge yourself. I have several. You should see my trophy room.

There’s a longstanding tradition on Agile teams that may have originated at ThoughtWorks around the turn of the century. It goes like this: When someone violates the ancient engineering principle, “Don’t do anything stupid on purpose,” they have to pay a penalty. The penalty might be to drop a dollar into the team snack jar, or something much worse (for an introverted technical type), like standing in front of the team and singing a song. To explain a failed demo with a glib “<shrug>Works on my machine!</shrug>” qualifies.

It may not be possible to avoid the problem in all situations. As Forrest Gump said…well, you know what he said. But we can minimize the problem by paying attention to a few obvious things. (Yes, I understand “obvious” is a word to be used advisedly.)

Pitfall 1: Leftover configuration

Problem: Leftover configuration from previous work enables the code to work on the development environment (and maybe the test environment, too) while it fails on other environments.

Pitfall 2: Development/test configuration differs from production

The solutions to this pitfall are so similar to those for Pitfall 1 that I’m going to group the two.

Solution (tl;dr): Don’t reuse environments.

Common situation: Many developers set up an environment they like on their laptop/desktop or on the team’s shared development environment. The environment grows from project to project, as more libraries are added and more configuration options are set. Sometimes the configurations conflict with one another, and teams/individuals often make manual configuration adjustments depending on which project is active at the moment. It doesn’t take long for the development configuration to become very different from the configuration of the target production environment. Libraries that are present on the development system may not exist on the production system. You may run your local tests assuming you’ve configured things the same as production only to discover later that you’ve been using a different version of a key library than the one in production. Subtle and unpredictable differences in behavior occur across development, test, and production environments. The situation creates challenges not only during development, but also during production support work when we’re trying to reproduce reported behavior.

Solution (long): Create an isolated, dedicated development environment for each project

There’s more than one practical approach. You can probably think of several. Here are a few possibilities:

  • Provision a new VM (locally, on your machine) for each project. (I had to add “locally, on your machine” because I’ve learned that in many larger organizations, developers must jump through bureaucratic hoops to get access to a VM, and VMs are managed solely by a separate functional silo. Go figure.)
  • Do your development in an isolated environment (including testing in the lower levels of the test automation pyramid), like Docker or similar.
  • Do your development on a cloud-based development environment that is provisioned by the cloud provider when you define a new project.
  • Set up your continuous integration (CI) pipeline to provision a fresh VM for each build/test run, to ensure nothing will be left over from the last build that might pollute the results of the current build.
  • Set up your continuous delivery (CD) pipeline to provision a fresh execution environment for higher-level testing and for production, rather than promoting code and configuration files into an existing environment (for the same reason). Note that this approach also gives you the advantage of linting, style-checking, and validating the provisioning scripts in the normal course of a build/deploy cycle. Convenient.

All those options won’t be feasible for every conceivable platform or stack. Pick and choose, and roll your own as appropriate. In general, all these things are pretty easy to do if you’re working on Linux. All of them can be done for other *nix systems with some effort. Most of them are reasonably easy to do with Windows; the only issue there is licensing, and if your company has an enterprise license, you’re all set. For other platforms, such as IBM zOS or HP NonStop, expect to do some hand-rolling of tools.

Anything that’s feasible in your situation and that helps you isolate your development and test environments will be helpful. If you can’t do all these things in your situation, don’t worry about it. Just do what you can do.

Provision a new VM locally

If you’re working on a desktop, laptop, or shared development server running Linux, FreeBSD, Solaris, Windows, or OSX, then you’re in good shape. You can use virtualization software such as VirtualBox or VMware to stand up and tear down local VMs at will. For the less-mainstream platforms, you may have to build the virtualization tool from source.

One thing I usually recommend is that developers cultivate an attitude of laziness in themselves. Well, the right kind of laziness, that is. You shouldn’t feel perfectly happy provisioning a server manually more than once. Take the time during that first provisioning exercise to script the things you discover along the way. Then you won’t have to remember them and repeat the same mis-steps again. (Well, unless you enjoy that sort of thing, of course.)

For example, here are a few provisioning scripts that I’ve come up with when I needed to set up development environments. These are all based on Ubuntu Linux and written in Bash. I don’t know if they’ll help you, but they work on my machine.
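As a sketch of what such a script can look like, here is a minimal, hedged example for an Ubuntu machine. The package names are illustrative assumptions; the point is that every step discovered during the first manual setup gets captured, so re-runs are repeatable and harmless.

```shell
#!/usr/bin/env bash
# provision-dev.sh: hypothetical sketch of an Ubuntu dev-environment script.
set -eu

# Install a package only if its command is missing, so re-runs are harmless.
ensure_installed() {
  cmd="$1"
  pkg="${2:-$1}"
  command -v "$cmd" >/dev/null 2>&1 || sudo apt-get install -y "$pkg"
}

provision() {
  sudo apt-get update
  ensure_installed git
  ensure_installed gcc build-essential
  ensure_installed mvn maven
}

# Guarded so the functions can be sourced (and tested) without provisioning.
if [ "${1:-}" = "--run" ]; then provision; fi
```

Once a script like this exists, standing up the next environment is one command rather than an afternoon of half-remembered steps.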

If your company is running RedHat Linux in production, you’ll probably want to adjust these scripts to run on CentOS or Fedora, so that your development environments will be reasonably close to the target environments. No big deal.

If you want to be even lazier, you can use a tool like Vagrant to simplify the configuration definitions for your VMs.

One more thing: Whatever scripts you write and whatever definition files you write for provisioning tools, keep them under version control along with each project. Make sure whatever is in version control for a given project is everything necessary to work on that project…code, tests, documentation, scripts…everything. This is rather important, I think.

Do your development in a container

One way of isolating your development environment is to run it in a container. Most of the tools you’ll read about when you search for information about containers are really orchestration tools intended to help us manage multiple containers, typically in a production environment. For local development purposes you don’t need that much functionality; a plain container tool such as Docker is enough.

These are Linux-based. Whether it’s practical for you to containerize your development environment depends on what technologies you need. To containerize a development environment for another OS, such as Windows, may not be worth the effort over just running a full-blown VM. For other platforms, it’s probably impossible to containerize a development environment.
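As an illustration, assuming Docker is installed, a disposable development shell can be started with the project directory mounted into the container. The image tag and mount point below are assumptions for the sketch, not a prescription:

```shell
#!/usr/bin/env bash
# dev-container.sh: sketch of developing inside a disposable container.
set -eu

# Compose the docker command. --rm discards the container on exit, so no
# leftover configuration survives to pollute the next project.
container_cmd() {
  src_dir="$1"
  image="$2"
  printf 'docker run --rm -it -v %s:/workspace -w /workspace %s bash' \
    "$src_dir" "$image"
}

if [ "${1:-}" = "--run" ]; then
  eval "$(container_cmd "$PWD" "ubuntu:16.04")"
fi
```

Because of `--rm`, the container (and any configuration drift inside it) vanishes on exit; the next session starts from the pristine image.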

Develop in the cloud

This is a relatively new option, and it’s feasible for a limited set of technologies. The advantage over building a local development environment is that you can stand up a fresh environment for each project, guaranteeing you won’t have any components or configuration settings left over from previous work. A few cloud-based development environments are already on the market.

Expect to see these environments improve, and expect to see more players in this market. Check which technologies and languages are supported to see whether one of these will be a fit for your needs. Because of the rapid pace of change, there’s no sense in listing what’s available as of the date of this article.

Generate test environments on the fly as part of your CI build

Once you have a script that spins up a VM or configures a container, it’s easy to add it to your CI build. The advantage is that your tests will run on a pristine environment, with no chance of false positives due to leftover configuration from previous versions of the application or from other applications that had previously shared the same static test environment, or because of test data modified in a previous test run.

Many people have scripts that they’ve hacked up to simplify their lives, but they may not be suitable for unattended execution. Your scripts (or the tools you use to interpret declarative configuration specifications) have to be able to run without issuing any prompts (such as prompting for an administrator password). They also need to be idempotent (that is, it won’t do any harm to run them multiple times, in case of restarts). Any runtime values that must be provided to the script have to be obtainable by the script as it runs, and not require any manual “tweaking” prior to each run.
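Those three properties (no prompts, idempotent, runtime values supplied externally) can be shown in a small sketch. The `DB_HOST` and `CONF_DIR` variable names are assumptions for illustration:

```shell
#!/usr/bin/env bash
# ci-provision.sh: sketch of an unattended, idempotent CI provisioning step.
set -eu

provision() {
  # Fail fast with a clear message instead of prompting for missing input.
  db_host="${DB_HOST:?DB_HOST must be set by the CI environment}"

  # mkdir -p and a full rewrite of the file are both safe to repeat.
  conf_dir="${CONF_DIR:-/tmp/app/conf}"
  mkdir -p "$conf_dir"
  printf 'db_host=%s\n' "$db_host" > "$conf_dir/app.conf"

  # Package installs would use non-interactive flags, for example:
  #   sudo DEBIAN_FRONTEND=noninteractive apt-get install -y postgresql-client
}

if [ "${1:-}" = "--run" ]; then provision; fi
```

Running the step twice yields the same configuration file, which is exactly what a restarted build needs.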

The idea of “generating an environment” may sound infeasible for some stacks. Take the suggestion broadly. For a Linux environment, it’s pretty common to create a VM whenever you need one. For other environments, you may not be able to do exactly that, but there may be some steps you can take based on the general notion of creating an environment on the fly.

For example, a team working on a CICS application on an IBM mainframe can define and start a CICS environment any time by running it as a standard job. In the early 1980s, we used to do that routinely. As the 1980s dragged on (and continued through the 1990s and 2000s, in some organizations), the world of corporate IT became increasingly bureaucratized until this capability was taken out of developers’ hands.

Strangely, as of 2017 very few development teams have the option to run their own CICS environments for experimentation, development, and initial testing. I say “strangely” because so many other aspects of our working lives have improved dramatically, while that aspect seems to have moved in retrograde. We don’t have such problems working on the front end of our applications, but when we move to the back end we fall through a sort of time warp.

From a purely technical point of view, there’s nothing to stop a development team from doing this. It qualifies as “generating an environment,” in my view. You can’t run a CICS system “in the cloud” or “on a VM” (at least, not as of 2017), but you can apply “cloud thinking” to the challenge of managing your resources.

Similarly, you can apply “cloud thinking” to other resources in your environment, as well. Use your imagination and creativity. Isn’t that why you chose this field of work, after all?

Generate production environments on the fly as part of your CD pipeline

This suggestion is pretty much the same as the previous one, except that it occurs later in the CI/CD pipeline. Once you have some form of automated deployment in place, you can extend that process to include automatically spinning up VMs or automatically reloading and provisioning hardware servers as part of the deployment process. At that point, “deployment” really means creating and provisioning the target environment, as opposed to moving code into an existing environment.

This approach solves a number of problems beyond simple configuration differences. For instance, if a hacker has introduced anything to the production environment, rebuilding that environment out of source that you control eliminates that malware. People are discovering there’s value in rebuilding production machines and VMs frequently even if there are no changes to “deploy,” for that reason as well as to avoid “configuration drift” that occurs when we apply changes over time to a long-running instance.

Many organizations run Windows servers in production, mainly to support third-party packages that require that OS. An issue with deploying to an existing Windows server is that many applications require an installer to be present on the target instance. Generally, information security people frown on having installers available on any production instance. (FWIW, I agree with them.)

If you create a Windows VM or provision a Windows server on the fly from controlled sources, then you don’t need the installer once the provisioning is complete. You won’t re-install an application; if a change is necessary, you’ll rebuild the entire instance. You can prepare the environment before it’s accessible in production, and then delete any installers that were used to provision it. So, this approach addresses more than just the works-on-my-machine problem.

When it comes to back end systems like zOS, you won’t be spinning up your own CICS regions and LPARs for production deployment. The “cloud thinking” in that case is to have two identical production environments. Deployment then becomes a matter of switching traffic between the two environments, rather than migrating code. This makes it easier to implement production releases without impacting customers. It also helps alleviate the works-on-my-machine problem, as testing late in the delivery cycle occurs on a real production environment (even if customers aren’t pointed to it yet).

The usual objection to this is the cost (that is, fees paid to IBM) to support twin environments. This objection is usually raised by people who have not fully analyzed the costs of all the delay and rework inherent in doing things the “old way.”

Pitfall 3: Unpleasant surprises when code is merged

Problem: Different teams and individuals handle code check-out and check-in in various ways. Some check out code once and modify it throughout the course of a project, possibly over a period of weeks or months. Others commit small changes frequently, updating their local copy and committing changes many times per day. Most teams fall somewhere between those extremes.

Generally, the longer you keep code checked out and the more changes you make to it, the greater the chances of a collision when you merge. It’s also likely that you will have forgotten exactly why you made every little change, and so will the other people who have modified the same chunks of code. Merges can be a hassle.

During these merge events, all other value-add work stops. Everyone is trying to figure out how to merge the changes. Tempers flare. Everyone can claim, accurately, that the system works on their machine.

Solution: A simple way to avoid this sort of thing is to commit small changes frequently, run the test suite with everyone’s changes in place, and deal with minor collisions quickly before memory fades. It’s substantially less stressful.

The best part is you don’t need any special tooling to do this. It’s just a question of self-discipline. On the other hand, it only takes one individual who keeps code checked out for a long time to mess everyone else up. Be aware of that, and kindly help your colleagues establish good habits.
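If it helps, the habit can be captured in a tiny script so the disciplined path is also the easy one. `run_tests.sh` below is a placeholder for your project’s real test entry point:

```shell
#!/usr/bin/env bash
# integrate.sh: sketch of a pull/test/commit/push loop, run several times a day.
set -eu

integrate() {
  msg="$1"
  git pull --rebase     # pick up everyone else's recent small changes
  ./run_tests.sh        # verify the combined code before sharing it
  git add -A
  git commit -m "$msg"
  git push
}

if [ "${1:-}" = "--run" ]; then integrate "${2:?commit message required}"; fi
```

The merge conflicts that do occur are then small, fresh in memory, and resolved in minutes rather than days.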

Pitfall 4: Integration errors discovered late

Problem: This problem is similar to Pitfall 3, but one level of abstraction higher. Even if a team commits small changes frequently and runs a comprehensive suite of automated tests with every commit, they may experience significant issues integrating their code with other components of the solution, or interacting with other applications in context.

The code may work on my machine, as well as on my team’s integration test environment, but as soon as we take the next step forward, all hell breaks loose.

Solution: There are a couple of solutions to this problem. The first is static code analysis. It’s becoming the norm for a continuous integration pipeline to include static code analysis as part of every build. This occurs before the code is compiled. Static code analysis tools examine the source code as text, looking for patterns that are known to result in integration errors (among other things).

Static code analysis can detect structural problems in the code such as cyclic dependencies and high cyclomatic complexity, as well as other basic problems like dead code and violations of coding standards that tend to increase cruft in a codebase. It’s just the sort of cruft that causes merge hassles, too.

A related suggestion is to take any warning-level errors from static code analysis tools and from compilers as real errors. Accumulating warning-level errors is a great way to end up with mysterious, unexpected behaviors at runtime.

The second solution is to integrate components and run automated integration test suites frequently. Set up the CI pipeline so that when all unit-level checks pass, then integration-level checks are executed automatically. Let failures at that level break the build, just as you do with the unit-level checks.
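A minimal sketch of that gating follows; the two suite script names are placeholders. Because each stage propagates its failure, the integration suite only runs when the unit suite passes, and either failure breaks the build:

```shell
#!/usr/bin/env bash
# pipeline.sh: sketch of a CI stage that fails fast and in order.
set -eu

run_stage() {
  # Run one stage; report and propagate any failure so the build breaks.
  "$@" || { echo "BUILD BROKEN at stage: $*" >&2; return 1; }
}

build() {
  run_stage ./run_unit_tests.sh          # fast unit-level checks first
  run_stage ./run_integration_tests.sh   # reached only if unit checks passed
  echo "build passed"
}

if [ "${1:-}" = "--run" ]; then build; fi
```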

With these two methods, you can detect integration errors as early as possible in the delivery pipeline. The earlier you detect a problem, the easier it is to fix.

Pitfall 5: Deployments are nightmarish all-night marathons

Problem: Circa 2017 it’s still common to find organizations where people have “release parties” whenever they deploy code to production. Release parties are just like all-night frat parties, only without the fun.

The problem is that the first time applications are executed in a production-like environment is when they are executed in the real production environment. Many issues only become visible when the team tries to deploy to production.

Of course, there’s no time or budget allocated for that. People working in a rush may get the system up and running somehow, but often at the cost of regressions that pop up later in the form of production support issues.

And it’s all because at each stage of the delivery pipeline, the system “worked on my machine,” whether a developer’s laptop, a shared test environment configured differently from production, or some other unreliable environment.

Solution: The solution is to configure every environment throughout the delivery pipeline as close to production as possible. The following are general guidelines that you may need to modify depending on local circumstances.

If you have a staging environment, rather than twin production environments, it should be configured with all internal interfaces live and external interfaces stubbed, mocked, or virtualized. Even if this is as far as you take the idea, it will probably eliminate the need for release parties. But if you can, it’s good to continue upstream in the pipeline, to reduce unexpected delays in promoting code along.

Test environments between development and staging should be running the same version of the OS and libraries as production. They should be isolated at the appropriate boundary based on the scope of testing to be performed.

At the beginning of the pipeline, if possible, develop on the same OS and same general configuration as production. It’s likely you will not have as much memory or as many processors as in the production environment. The development environment also will not have any live interfaces; all dependencies external to the application will be faked.

At a minimum, match the OS and release level to production as closely as you can. For instance, if you’ll be deploying to Windows Server 2016, then use a Windows Server 2016 VM to run your quick CI build and unit test suite. Windows Server 2016 is based on NT 10, so do your development work on Windows 10 because it’s also based on NT 10. Similarly, if the production environment is Windows Server 2008 R2 (based on NT 6.1) then develop on Windows 7 (also based on NT 6.1). You won’t be able to eliminate every single configuration difference, but you will be able to avoid the majority of incompatibilities.

Follow the same rule of thumb for Linux targets and development systems. For instance, if you will deploy to RHEL 7.3 (kernel version 3.10.x), then run unit tests on the same OS if possible. Otherwise, look for (or build) a version of CentOS based on the same kernel version as your production RHEL (don’t assume). At a minimum, run unit tests on a Linux distro based on the same kernel version as the target production instance. Do your development on CentOS or a Fedora-based distro to minimize inconsistencies with RHEL.
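A small guard script can make the kernel comparison automatic instead of something a developer must remember. The 3.10 target below follows the RHEL 7.3 example above:

```shell
#!/usr/bin/env bash
# kernel-check.sh: warn when the local kernel series differs from the target.
set -eu

# Reduce a full kernel release string, e.g. "3.10.0-514.el7.x86_64", to "3.10".
kernel_series() {
  echo "$1" | cut -d. -f1,2
}

target_series="${TARGET_KERNEL:-3.10}"
local_series="$(kernel_series "$(uname -r)")"

if [ "$local_series" != "$target_series" ]; then
  echo "WARNING: local kernel series $local_series differs from production target $target_series" >&2
fi
```

Wired into the CI build, a check like this turns a silent configuration difference into an explicit, early warning.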

If you’re using a dynamic infrastructure management approach that includes building OS instances from source, then this problem becomes much easier to control. You can build your development, test, and production environments from the same sources, assuring version consistency throughout the delivery pipeline. But the reality is that very few organizations are managing infrastructure in this way as of 2017. It’s more likely that you’ll configure and provision OS instances based on a published ISO, and then install packages from a private or public repo. You’ll have to pay close attention to versions.

If you’re doing development work on your own laptop or desktop, and you’re using a cross-platform language (Ruby, Python, Java, etc.), you might think it doesn’t matter which OS you use. You might have a nice development stack on Windows or OSX (or whatever) that you’re comfortable with. Even so, it’s a good idea to spin up a local VM running an OS that’s closer to the production environment, just to avoid unexpected surprises.

For embedded development where the development processor is different from the target processor, include a compile step in your low-level TDD cycle with the compiler options set for the target platform. This can expose errors that don’t occur when you compile for the development platform. Sometimes the same version of the same library will exhibit different behaviors when executed on different processors.
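Here’s one way that extra compile step might look, assuming a GCC-style cross toolchain; the `arm-none-eabi-gcc` name and Cortex-M4 target are illustrative assumptions, not your actual toolchain:

```shell
#!/usr/bin/env bash
# target-build.sh: add a target-platform compile to the host-side TDD cycle.
set -eu

# Compose the cross-compile command so identical flags are used every cycle.
target_compile_cmd() {
  src="$1"
  printf '%s -Wall -Werror -mcpu=%s -c %s -o %s' \
    "${CROSS_CC:-arm-none-eabi-gcc}" "${TARGET_CPU:-cortex-m4}" \
    "$src" "${src%.c}.target.o"
}

if [ "${1:-}" = "--run" ]; then
  gcc -Wall -Werror -o host_tests tests.c module.c && ./host_tests
  eval "$(target_compile_cmd module.c)"   # surfaces target-only compile errors
fi
```

Note that `-Werror` on both compiles follows the earlier advice to treat warning-level messages as real errors.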

Another suggestion for embedded development is to constrain your development environment to have the same memory limits and other resource constraints as the target platform. You can catch certain types of errors early by doing this.

For some of the older back end platforms, it’s possible to do development and unit testing off-platform for convenience. Fairly early in the delivery pipeline, you’ll want to upload your source to an environment on the target platform and build/test there.

For instance, for a C++ application on, say, HP NonStop, it’s convenient to do TDD on whatever local environment you like (assuming that’s feasible for the type of application), using any compiler and a unit testing framework like CppUnit.

Similarly, it’s convenient to do COBOL development and unit testing on a Linux instance using GnuCOBOL; much faster and easier than using OEDIT on-platform for fine-grained TDD.

But in these cases the target execution environment is very different from the development environment. You’ll want to exercise the code on-platform early in the delivery pipeline to eliminate works-on-my-machine surprises.


The author’s observation is that the works-on-my-machine problem is one of the leading causes of developer stress and lost time. The author further observes that the main cause of the works-on-my-machine problem is differences in configuration across development, test, and production environments.

The basic advice is to avoid configuration differences to the extent possible. Take pains to ensure all environments are as similar to production as is practical. Pay attention to OS kernel versions, library versions, API versions, compiler versions, and the versions of any home-grown utilities and libraries. When differences can’t be avoided, then make note of them and treat them as risks. Wrap them in test cases to provide early warning of any issues.

The second suggestion is to automate as much testing as possible at different levels of abstraction, merge code frequently, build the application frequently, run the automated test suites frequently, deploy frequently, and (where feasible) build the execution environment frequently. This will help you detect problems early, while the most recent changes are still fresh in your mind, and while the issues are still minor.

Let’s fix the world so that the next generation of software developers doesn’t understand the phrase, “Works on my machine.”

The post Works on my Machine appeared first on LeadingAgile.

Categories: Blogs

The Container Monitoring Problem

Xebia Blog - Thu, 03/16/2017 - 22:16

This post is part 1 in a 4-part series about Docker, Kubernetes and Mesos monitoring. This article dives into some of the new challenges containers and microservices create and the metrics you should focus on. Containers are a solution to the problem of how to get software to run reliably when moved from one environment […]

The post The Container Monitoring Problem appeared first on Xebia Blog.

Categories: Companies
