
Feed aggregator

How Kanban Saved Agile

Learn more about transforming people, process and culture with the Real Agility Program

For my working definition of Kanban, please refer to my previous article 14 Things Every Agilist Should Know About Kanban (this contains links to the Kanban body of knowledge, including Essential Kanban Condensed by David J. Anderson and Andy Carmichael).

For my working definition of Agile, please refer to The Manifesto for Agile Software Development.

In reality, Kanban isn’t actually saving Agile, nor is it intended to, nor is any thoughtful and responsible Kanban practitioner motivated by this agenda. What I’m really trying to convey is how human thinking about the business of professional services (including software development) has evolved since “Agile” as many of us know it was conceived some 20 years ago. The manifesto is the collective statement of a group of software development thought leaders that captured some of their ideas at the time about how the software industry needed to improve. Essentially, it was about the iterative and incremental delivery of high-quality software products. For 2001, this was pretty heady stuff. You could even say that it spawned a movement.

Since the publication of the manifesto in 2001, a lot of other people have had a lot of other good ideas about how the business of delivering professional services can improve. This has been well documented in well-known sources too numerous to mention within the scope of this article.

Substantial contributions to the discourse have been generated by and through the LeanKanban community. The aim of Kanban is to foster environments in which knowledge workers can thrive and create innovative, valuable and viable solutions for improving the world. Kanban has three agendas: survivability (primarily but not exclusively for the business executives), service-orientation (primarily but not exclusively for managers) and sustainability (primarily but not exclusively for knowledge workers). Kanban provides pragmatic, actionable, evidence-based guidance for improving along these three agendas.

Evolutionary Theory is one of the key conceptual underpinnings of the Kanban Method, most notably the dynamic of punctuated equilibrium. Evolution is natural, perpetual and fundamental to life. Long periods of equilibrium are punctuated by relatively short periods of “transformation”—apparent total and irreversible change. An extinction event is a kind of punctuation; so too is the rapid explosion of new forms. Evolutionary theory is not only a scientifically proven body of knowledge for understanding the nature of life; it can also be applied to the way we think about ideas, methods and movements.

For example, science has more or less established that the extinction of the dinosaurs, triggered by a meteor impact and subsequent dramatic atmospheric and climate change, was in fact a key punctuation point in the evolution of birds. In other words, dinosaurs didn’t become extinct; rather, they evolved into birds. That is, the small, feathered dinosaurs still hanging around after Armageddon learned, over generations, to fly in order to escape predators, find food and raise their young. Dinosaurs evolved into birds. Birds saved the dinosaurs.

There has been a lot of social media chatter and buzz lately about how Agile is dead. It is a movement that has run its course, or so the narrative goes. After all, 20 years is more or less the established pattern for the rise and fall of management fads. But too much emphasis on the rise and fall of fads can blind us to larger, broader (deeper) over-arching trends.

The agile movement historically has been about high-performing teams. More recently, market demand has led to the profusion of “scaling” approaches and frameworks. Scaling emerged out of the reality of systemic interdependence in which most Agile teams find themselves. Most agile teams are responsible for aspects of workflows—stages of value creation—as contributors to the delivery of one or more services. Agile teams capable of independently taking requests directly from and delivering directly to customers are extremely rare. For the rest, classical Agile or Scrum is not enough. The feathers just aren’t big enough. Agile teams attempting to function independently (pure Scrum) in an interdependent environment are vulnerable to the antibodies of the system, especially when such interdependencies are merely denounced as impediments to agility.

Some organizations find themselves in a state of evolutionary punctuation (the proverbial sky is falling) that can trigger rapid adaptations and the emergence of local conditions in which independent service delivery teams can thrive. Most large, established organizations seem to be more or less in a state of equilibrium. Whether real or imagined, this is what change agents have to work with. However, more often than not, the typical Agile change agent seems adamant that the sky is always falling, and that getting everyone to accept that the sky is falling is the first step to real and meaningful change. This is not an attitude held by Agile change agents alone. It is a standard feature of traditional 20th Century change management methods, and the key selling point for change management consulting.

Naturally, most self-identifying “Agilists” see themselves as change agents. Many of them find themselves in the position of change management consultants. But the motivation for change can quickly become misaligned: change needs to happen in order for Agile to work. If you are passionate about Agile, you will seek to bring about the environmental changes that will allow Agile to thrive. We don’t need to follow this path very far before Agile becomes an end in itself. It is understandable, then, that for some, Agile appears to be a dead end, or just dead.

But if there is a larger, over-arching historical process playing out, what might that be? Perhaps it has something to do with the evolution of human organization. Perhaps we are living in a period of punctuation.

 

 

Learn more about our Scrum and Agile training sessions on WorldMindware.com

The post How Kanban Saved Agile appeared first on Agile Advice.

Categories: Blogs

Go: Multi-threaded writing to a CSV file

Mark Needham - Tue, 01/31/2017 - 07:57

As part of a Go script I’ve been working on, I wanted to write to a CSV file from multiple goroutines, but realised that the built-in CSV writer isn’t thread-safe.

My first attempt at writing to the CSV file looked like this:

package main


import (
	"encoding/csv"
	"os"
	"log"
	"strconv"
)

func main() {

	csvFile, err := os.Create("/tmp/foo.csv")
	if err != nil {
		log.Panic(err)
	}

	w := csv.NewWriter(csvFile)
	w.Write([]string{"id1","id2","id3"})

	count := 100
	done := make(chan bool, count)

	// each goroutine writes a row concurrently - csv.Writer is not safe for
	// concurrent use, so these writes can interleave and corrupt the output
	for i := 0; i < count; i++ {
		go func(i int) {
			w.Write([]string {strconv.Itoa(i), strconv.Itoa(i), strconv.Itoa(i)})
			done <- true
		}(i)
	}

	// wait for every goroutine to signal completion before flushing
	for i:=0; i < count; i++ {
		<- done
	}
	w.Flush()
}

This script should write the numbers from 0-99 to the file, with each number repeated three times on its own line. Some rows in the file are written correctly but, as we can see below, some aren't:

40,40,40
37,37,37
38,38,38
18,18,39
^@,39,39
...
67,67,70,^@70,70
65,65,65
73,73,73
66,66,66
72,72,72
75,74,75,74,75
74
7779^@,79,77
...

One way that we can make our script safe is to use a mutex whenever we're calling any methods on the CSV writer. I wrote the following code to do this:

// CsvWriter wraps csv.Writer with a mutex so it can safely be used from multiple goroutines
type CsvWriter struct {
	mutex *sync.Mutex
	csvWriter *csv.Writer
}

func NewCsvWriter(fileName string) (*CsvWriter, error) {
	csvFile, err := os.Create(fileName)
	if err != nil {
		return nil, err
	}
	w := csv.NewWriter(csvFile)
	return &CsvWriter{csvWriter:w, mutex: &sync.Mutex{}}, nil
}

// Write locks the mutex so that only one goroutine at a time can write a row
func (w *CsvWriter) Write(row []string) {
	w.mutex.Lock()
	w.csvWriter.Write(row)
	w.mutex.Unlock()
}

// Flush locks the mutex before flushing any buffered rows to the underlying file
func (w *CsvWriter) Flush() {
	w.mutex.Lock()
	w.csvWriter.Flush()
	w.mutex.Unlock()
}

We create a mutex when NewCsvWriter instantiates CsvWriter and then use it in the Write and Flush functions so that only one goroutine at a time can access the underlying csv.Writer. We then tweak the initial script to use this type instead of calling csv.Writer directly:

func main() {
	w, err := NewCsvWriter("/tmp/foo-safe.csv")
	if err != nil {
		log.Panic(err)
	}

	w.Write([]string{"id1","id2","id3"})

	count := 100
	done := make(chan bool, count)

	for i := 0; i < count; i++ {
		go func(i int) {
			w.Write([]string {strconv.Itoa(i), strconv.Itoa(i), strconv.Itoa(i)})
			done <- true
		}(i)
	}

	for i:=0; i < count; i++ {
		<- done
	}
	w.Flush()
}

And now if we inspect the CSV file all lines have been written successfully:

...
25,25,25
13,13,13
29,29,29
32,32,32
26,26,26
30,30,30
27,27,27
31,31,31
28,28,28
34,34,34
35,35,35
33,33,33
37,37,37
36,36,36
...

That's all for now. If you have any suggestions for a better way to do this do let me know in the comments or on twitter - I'm @markhneedham

The post Go: Multi-threaded writing to a CSV file appeared first on Mark Needham.

Categories: Blogs

Verbal Turn Indicators For Intercultural Product Owners

Xebia Blog - Mon, 01/30/2017 - 20:30
Jujutsu exams are coming up. One of the things that examiners want to see in jujutsu is the use of go-no-sen, sen-no-sen and tai-no-sen. Go-no-sen means that you respond to an action of your opponent, tai-no-sen means you act simultaneously and sen-no-sen means you take the initiative and act before the opponent has a chance.
Categories: Companies

A better way (and script) to add a Service Principal in Azure for VSTS

Xebia Blog - Mon, 01/30/2017 - 17:53
From Visual Studio Team Services (VSTS) it’s possible to deploy to an Azure Subscription using an Active Directory Service Principal. The Microsoft documentation refers to a blog post which describes a 3-clicks and a manual way to set up this principal. Although the information in the blog post for the 3-clicks setup is still current, the script link
Categories: Companies

Reasons Why Scrum Can Fail

Scrum Expert - Mon, 01/30/2017 - 17:38
If Scrum and Agile approaches are supposed to increase the chances of success for software development projects, not all the projects that want to use Scrum are successful. In this article, John Yorke shares his opinion on why Agile projects might fail because of the confusion between the roles (ScrumMaster, Product Owner, Developer) of a Scrum Team and the required Agile mindsets. Author : John Yorke, Agile Coach, WWT Asynchrony Labs, http://www.asynchrony.com/ I should start out by saying that I am a big fan of Scrum. I think those that devised the framework possessed an agile mindset but also were mindful of human nature. They created a framework that had built-in checks and balances and solutions to many of the most common problems. They also had an understanding of system level thinking – I’ll come back to that later. The core of the system though are the key roles: Scrum Master, Product Owner and Development Team. This triad is what makes Scrum so successful (when it works) and in my opinion it is the absence of this triad that is the root cause of the majority of the unsuccessful adoptions. It’s All About the Mindset However, I don’t think it is the role that defines this triad but the perceived mindset behind the role.  For example, having a team that possesses a strong member with an Agile mindset, along with the knowledge and skills to support it and the opportunity to focus on it all help achieve a proper mindset. Furthermore, [...]
Categories: Communities

Running Powershell Pester unit test in a VSTS build pipeline

Xebia Blog - Mon, 01/30/2017 - 17:34
When you are developing Powershell scripts, creating some unit tests will help you in monitoring the quality of the scripts. Writing some tests will give you some assurance that your code still works after you make some changes. Writing Powershell unit tests can be done with Pester. Pester will enable you to test your Powershell scripts from
Categories: Companies

10 Myths About Docker That Stop Developers Cold

Derick Bailey - new ThoughtStream - Mon, 01/30/2017 - 14:30

A curious thing happened in a recent conversation.

I was discussing the growth of Docker and I kept hearing bits of information that didn’t quite seem right in my mind.

“Docker is just inherently more enterprise”

“it’s only tentatively working on OSx, barely on Windows”

“I’m not confident I can get it running locally without a bunch of hassle”

… and more

There are tiny bits of truth in these statements (see #3 and #5, below, for example), but tiny bits of truth often make it easy to overlook what isn’t true, or is no longer true.

And with articles that do nothing more than toss around jargon, require inordinate numbers of frameworks, and discuss how to manage 10 thousand-billion requests per second with only 30 thousand containers, automating 5 thousand microservices hosted in 6 hundred cloud based server instances…

Well, it’s easy to see why Docker has a grand mythology surrounding it.

It’s unfortunate that the myths and misinformation persist, though. They rarely do more than stop developers from trying Docker. 

So, let’s look at the most common myths – some that I’ve seen, and some I’ve previously believed – and try to find the truth in them, as well as solutions if there are any to be found.

 

Myth #10:  I can’t develop with Docker… because I can’t edit the Dockerfile

As a developer, I have specific needs for tools and environment configuration, when working. I’ve also been told (rightfully so) that I can’t edit the production Dockerfile to add the things I need.

The production Docker image should be configured for production purposes, only.

So, how do I handle my development needs, with Docker? If I can’t edit the Dockerfile to add my tools and configuration, how am I supposed to develop apps in Docker, at all?

I could copy & paste the production Dockerfile into my own, and then modify that file for my needs. But, we all know that duplication is the root of all evil. And, we all know that duplication is the root of all evil. Because duplication is the root of all evil.

The Solution

Rather than duplicating the Dockerfile and potentially causing more problems, a better solution is to use the Docker model of building images from images.

I’m already building my production application image from a base like “node:6”. So, why not create a “dev.dockerfile” and have it build from my application’s production image as its base?

Now I can modify the dev.dockerfile to suit my development needs, knowing that it will use the exact configuration from the production image.
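
As a rough sketch of that approach (the image names here are hypothetical, not from the original post), the development Dockerfile might look something like this:

# dev.dockerfile - builds on top of the production image rather than duplicating it
# "my-app:latest" is a placeholder for whatever tag the production image uses
FROM my-app:latest

# development-only tools and configuration go here
RUN npm install -g nodemon

It could then be built with something like "docker build -t my-app:dev -f dev.dockerfile .", leaving the production Dockerfile untouched.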

Want to See a Dev Image in Action?

Check out the WatchMeCode episode on Creating a Development Container – part of the Guide to Building Node.js Apps in Docker

 

Myth #9: I can’t see anything in this container because I can’t see into my container, at all!

Docker is application virtualization (containerization), not a full virtual machine to be used for general computing purposes. 

But a developer often needs to treat a container as if it were a virtual machine.

I need to get logs (beyond the simple console output of my app), examine debug output, and ensure all of my needs are being met by the file and system changes I’ve put into the container.

If a container isn’t a virtual machine, though, how do I know what’s going on? How do I see the files, the environment variables, and the other bits that I need, inside the container?

The Solution

While a Docker container may not technically be a full virtual machine, it does run a Linux distribution under the hood.

Yes, this distribution may be a slimmed down, minimal distribution such as Alpine Linux, but it will still have basic shell access among other things. And having a Linux distribution as the base of a container gives me options for diving into the container.

There are two basic methods of doing this, depending on the circumstances.

Method 1: Shell Into A Running Container

If I have a container up and running already, I can use the “docker exec” command to enter that container, with full shell access.

Once I’ve done this, I’ll be inside the container as if I were shelled into any Linux distribution.
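
For example, assuming a running container named "my-app-container" (a made-up name for illustration), the command looks something like this:

docker exec -it my-app-container /bin/sh

The -i and -t flags keep an interactive terminal attached; on fuller base images /bin/bash may be available instead of /bin/sh.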

Method 2: Run A Shell as the Container’s Command

If I don’t have a container up and running – and can’t get one running – I can run a new container from an image, with the Linux shell as the command to start.

Now I have a new container that runs with a shell, allowing me to look around, easily.
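
Again with a hypothetical image name, that looks roughly like this:

docker run -it --rm my-app-image /bin/sh

The --rm flag cleans up the throwaway container as soon as the shell exits.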

Want to See a Shell in Action?

Check out these two episodes from WatchMeCode’s Guide to Learning Docker and Guide to Building Node.js Apps in Docker.

 

Myth #8: I have to code inside the Docker container? and I can’t use my favorite editor?!

When I first looked at a Docker container, running my Node.js code, I was excited about the possibilities.

But that excitement quickly diminished as I wondered how I was supposed to move edited code into the container, after building an image.

Was I supposed to re-build the image every time? That would be painfully slow… and not really an option.

Ok, should I shell into the container to edit the code with vim?

That works. 

But, if I wanted to use a better IDE / editor, I wouldn’t be able to. I’d have to use something like vim all the time (and not my preferred version of vim).

If I only have command-line / shell access to my container, how can I use my favorite editor?

The Solution

Docker allows me to mount a folder from my host system into a target container, using the “volume mount” options. 

With this, the container’s “/var/app” folder will point to the local “/dev/my-app” folder. Editing code in “/dev/my-app” – with my favorite editor, of course – will change the code that the container sees and uses.
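
A volume mount along those lines might look like this (the image name is a placeholder):

docker run -it -v /dev/my-app:/var/app my-app-image

Note that the host side of the mount has to be an absolute path for Docker to treat it as a bind mount.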

Want to See Editing in a Mounted Volume, in Action?

Check out the WatchMeCode episode on editing code in a container – part of the Guide to Building Node.js Apps in Docker.

 

Myth #7: I have to use a command-line debugger… and I vastly prefer my IDE’s debugger

With the ability to edit code and have it reflected in a container, plus the ability to shell into a container, debugging code is only a step away.

I only need to run the debugger in the container, after editing the code in question, right?

While this is certainly true – I can use the command-line debugger of my programming language from inside a Docker container – it is not the only option.

How is it possible, then, to use the debugger from my favorite IDE / editor, with code in a container?

The Solution

The short answer is “remote debugging”.

The long answer, however, is very dependent on which language and runtime is used for development.

With Node.js, for example, I can do remote debugging over a TCP/IP port (5858). To debug through a Docker container, then, I only need to expose that port from my Docker image (the “dev.dockerfile” image, of course).

With this port exposed, I can shell into the container and use any of the typical methods of starting the Node.js debugging service before attaching my favorite debugger.
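
As a sketch (the image tag and port mapping here are assumptions rather than details from the post), the dev image can declare the port:

# added to dev.dockerfile
EXPOSE 5858

and the container can publish it to the host when it is started:

docker run -it -p 5858:5858 my-app-image:dev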

Want to See Visual Studio Code Debug a Node.js Container?

Check out the WatchMeCode episode on debugging in a container with Visual Studio Code – part of the Guide to Building Node.js apps in Docker.

 

Myth #6: I have to “docker run” every time and I can’t remember all those “docker run” options…

There is no question that Docker has an enormous number of command-line options. Looking through the Docker help pages can be like reading an ancient tome of mythology from an extinct civilization.

When it comes time to “run” a container, then, it’s no surprise that I’m often confused or downright frustrated, never getting the options right the first time. 

What’s more, every call to “docker run” creates a new container instance from an image.

If I need a new container, this is great. 

If, however, I want to run a container that I had previously created, I’m not going to like the result of “docker run”… which is yet another new container instance.

The Solution

I don’t need to “docker run” a new container every time I need one.

Instead, I can “stop” and “start” the container in question.

Doing this allows my container to be stopped and started, as expected.

This also persists the state of the container between runs, meaning I will be able to restart a container where it left off. If I’ve modified any files in the container, those changes will be intact when the container is started again.
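
With a hypothetical container named "my-app-container", that is simply:

docker stop my-app-container
docker start my-app-container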

Want to See Start and Stop in Action?

There are many episodes of WatchMeCode’s Guide to Learning Docker and Guide to Building Node.js Apps in Docker that use this technique. 

If you’re new to the idea, however, I recommend watching the episode on basic image and container management, which covers stopping and re-starting a single container instance.

 

Myth #5: Docker hardly works on macOS and Windows and I use a Mac / Windows

Until a few months ago, this was largely true.

In the past, Docker on Mac and Windows required the use of a full virtual machine with a “docker-machine” utility and a layer of additional software proxying the work into / out of the vm.

It worked… but it introduced a tremendous amount of overhead while limiting (or excluding) certain features. 

The Solution 

Fortunately, Docker understands the need to support more than just Linux for a host operating system. 

In the second half of 2016, Docker released the official Docker for Mac and Docker for Windows software packages.

This made it incredibly simple to install and use Docker on both of these operating systems. With regular updates, the features and functionality are nearly at parity with the Linux variant, as well. There’s hardly a difference anymore, and I can’t remember the last time I needed an option or feature that was not available in these versions.

Want to Install Docker for Mac or Windows?

WatchMeCode has free installation episodes for both (as well as Ubuntu Linux!)

 

Myth #4: Docker is command-line only and I am significantly more efficient with visual tools

With its birthplace in Linux, it’s no surprise that Docker prefers command-line tooling.

The abundance of commands and options, however, can be overwhelming. And for a developer that does not spend a regular amount of time in a console / terminal window, this can be a source of frustration and lost productivity.

The Solution

As the community around Docker grows, there are more and more tools that fit the preferences of more and more developers – including visual tools.

Docker for Mac and Windows include basic integration with Kitematic, for example – a GUI for managing Docker images and containers, on my machine.

With Kitematic, it’s easy to search for images in Docker repositories, create containers and manage the various options of my installed and running containers. 

Want to See Kitematic in Action?

Check out the Kitematic episode in WatchMeCode’s Guide to Learning Docker

 

Myth #3: I can’t run my database in a container. It won’t scale properly… and I’ll lose my data!

Containers are meant to be ephemeral – they should be destroyed and re-created as needed, without a moment’s hesitation. But if I’m storing data from a database in my container, deleting the container will delete my data.

Furthermore, database systems have very specific methods in which they can scale – both up (larger server) and out (more servers).

Docker, it seems, is specialized in scaling out – creating more instances of things when more processing power is required – while most database systems, on the other hand, require specific and specialized configuration and maintenance to scale out.

So… yes… it’s true. It’s not a good idea to run a production database in a Docker container.

However, my first real success with Docker was with a database.

Oracle, to be specific.

I had tried and failed to install Oracle into a virtual machine, for my development needs. I spent nearly 2 weeks (off and on) working on it, and never even came close.

Within 30 minutes of learning that there is an Oracle XE image for Docker, however, I had Oracle up and running and working.

In my development environment.

The Solution

Docker may not be great for running a database in a production environment, but it works wonders for development.

I’ve been running MongoDB, MySQL, Oracle, Redis and other data / persistence systems for quite some time now, and I couldn’t be happier about it.

And, when it comes to the “ephemeral” nature of a Docker container? Volume mounts.

Like the code editing myth, a volume mount provides a convenient way of storing data on my local system and using it in a container. 

Now I can destroy a container and re-create it, as needed, knowing I’ll pick up right where I left off.
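
As a sketch, this is what that looks like for MongoDB, whose official image keeps its data under /data/db (the local path is just a placeholder):

docker run -d -p 27017:27017 -v /my/local/data:/data/db mongo

Destroying this container and running the same command again picks the data back up from the mounted folder.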

 

Myth #2: I can’t use Docker on my project because Docker is all-or-nothing

When I first looked at Docker, I thought this was true – you either develop, debug, deploy and “devops” everything with Docker (and two hundred extra tools and frameworks to make it all work automagically), or you don’t Docker at all.

My experience with installing and running a database, as my first success with Docker, showed me otherwise.

Any tool or technology that demands all-or-nothing should be re-evaluated with an extreme microscope. It’s rare (beyond rare) that this is true. And when it is, it may not be something into which time and money should be invested.

The Solution

Docker, like most development tools, can be added piece by piece. 

Start small. 

Run a development database in a container.

Then build a single library inside a docker container and learn how it works.

Build the next microservice – the one that only needs a few lines of code – in a container, after that.

Move on to a larger project with multiple team members actively developing within it, from there.

There is no need to go all-or-nothing.

  

Myth #1: I won’t benefit from Docker… At all… because Docker is “enterprise”, and “devops”

This was the single largest mental hurdle I had to remove, when I first looked at Docker.

Docker, in my mind, was this grand thing that only the most advanced of teams, with scalability concerns I would never see, had to deal with.

It’s no surprise that I thought this way, either.

When I look around at all the buzz and hype in the blog world and conference talks, I see nothing but “How Big-Name-Company Automated 10,000,000 Microservices with Docker, Kubernetes, and Shiny-New-Netflix-Scale-Toolset”.

Docker may excel at “enterprise” and “devops”, but the average, everyday developer – like you and me – can take advantage of what Docker has to offer.

The Solution

Give Docker a try.

Again, start small.

I run a single virtual machine with 12GB of RAM, to host 3 web projects for a single client. It’s a meager server, to say the least. But I’m looking at Docker – just plain old Docker, by itself – as a way to more effectively use that server.

I have a second client – with a total of 5 part-time developers (covering less than 1 full-time person’s worth of hours every week) – that is already using Docker to automate their build and deployment process.

I build most of my open source libraries for Node.js apps, with Docker, at this point.

I am finding new and better ways to manage the software and services that I need to install on my laptop, using Docker, every day.

And remember …

 

Don’t Buy The Hype or Believe The Myths

The mythology around Docker exists for good reason. 

It has, historically, been difficult to play with outside of Linux. And it is, to this day and moving forward, a tremendous benefit to enterprise and devops work. 

But the mythology, unfortunately, does little to help the developer that could benefit the most: You.

If you find yourself looking at this list of myths, truths and solutions, still saying, “Yeah, but …”, I ask you to take some time and re-evaluate what you think about Docker, and why.

If you still have questions or concerns about how a development environment can take advantage of Docker, get in touch. I’d love to hear your questions and see if there’s anything I can do to help.

And if you want to learn the basics of Docker or how to develop apps within it, but don’t know where to start, check out WatchMeCode’s Guide to Learning Docker (from the ground up) and the Guide to Building Node.js Apps in Docker.

 

The post 10 Myths About Docker That Stop Developers Cold appeared first on DerickBailey.com.

Categories: Blogs

3 key ingredients that make you a better developer

Xebia Blog - Mon, 01/30/2017 - 11:21
IT is a booming business, but that doesn’t mean everyone who’s drawn to it will become a great developer. Many students sign up for an IT education for the wrong reasons. I've had classmates who enrolled in IT-related degree programs because they liked gaming or working with computers. Maybe they created a website for a
Categories: Companies

Article 4 in SAFe Implementation Roadmap series: Lean-Agile Center for Excellence (LACE)

Agile Product Owner - Mon, 01/30/2017 - 02:07

Changing the fundamental behavior and culture of a large development organization is no small task. In a SAFe rollout, one of the signature attributes of a successful implementation is the organization’s commitment to developing a dedicated change management team. They go by various names, so in order to describe one, we simply picked a descriptive general purpose term, the “Lean-Agile Center of Excellence” (LACE).

The LACE is a small team of people dedicated to driving the change, and it is often a key differentiator between companies practicing Agile in name only, and those fully committed to adopting Lean-Agile practices and getting the best business outcomes.

Creating a LACE is the fourth ‘critical move’ in the SAFe Implementation Roadmap, and the subject of our latest guidance article in this series. The article outlines the mission and responsibilities of the LACE, and provides guidance for size, structure, and operation of the LACE team. It is based on our own experience, as well as that of others, working directly in the field.

Read the full article here.

As always, we welcome your thoughts so if you’d like to provide some feedback on this new series of articles, you’re invited to leave your comments here.

Stay SAFe!
—Dean and the Framework team

Categories: Blogs

Check out the latest edition of the Scaled Agile Insider

Agile Product Owner - Mon, 01/30/2017 - 00:04

It’s been over two years since we launched the Scaled Agile Insider with just a few thousand readers. Today, that number has grown to nearly 90,000 subscribers, and it continues to grow at a dramatic rate, keeping pace with the uptake of SAFe.

If you haven’t had a chance, give it a look. This almost-monthly email is the best resource for getting all the latest news from the SAFe universe in one place. And there are things you can get in the Insider (such as industry news, enterprise videos, new class offerings, etc.) that don’t necessarily make it into this blog.

This month’s edition includes:

  • SAFe Implementation Roadmap identifies 12 ‘critical moves’ for success
  • Invitation-based approach to implementing SAFe
  • Are you reading Mark Richards’ blog, The Art of SAFe?:
    • Bringing your SAFe PI Plan to life during execution
    • Want a supercharged ART? Don’t settle for a Proxy Product Manager!
    • Team Backlog Evolution in SAFe – from 3 words on a sticky to a Ready to Play story
    • Revamping SAFe’s Program Level PI Metrics, a multi-part series
  • Learnings from the SPC4 course & how it helped me
  • How does SAFe benefit the enterprise architect?
  • Recorded webinar: How to Use SAFe to Deliver Value at Enterprise Scale
  • Case study: Royal Philips: medical technology giant cuts release cycle time by two-thirds with SAFe
  • Case study: LEGO finds the sweet spot
  • Case study: Agile Transformation in a Highly Regulated Environment: UK National Health Service (NHS)

The Insider is intended to provide you with the wider range of information—including things like commercial aspects, case studies, and third party opinions—that will help you do your job and get the most out of SAFe. To that end, you’ll find a broad range of topics and resources, including links to new downloads, ‘must-read’ articles, books, videos, the latest case studies, webinar announcements, classes and event news, and more.

If you’ve attended a SAFe class, you are likely already subscribed. If not, go here to subscribe and read the latest editions.

As we work to improve its value to the community, we’d love your thoughts. Are we covering what’s important to you? Should we provide less or more? Turn it into an online publication? All ideas are welcome.

Stay SAFe!
—Dean

 

 

 

 

Categories: Blogs

5 tips for using Retrospectives as a tool for dissent

thekua.com@work - Sun, 01/29/2017 - 18:10

I recently shared this article on twitter from HBR, True Leaders Believe Dissent is an Obligation – the spirit of which I wholeheartedly agree with. Effective leaders should not surround themselves with yes-people, because you need a diverse set of opinions, perspectives, skills and experiences to effectively problem solve. You can read more about How Diversity Makes Us Smarter, Research on how a Diverse group is the best solution for problem-solving tasks, and Kellogg’s perspectives on Better Decisions Through Diversity.

Photo from Vipez’s Flickr photostream

A challenge for many leaders is creating the right environment to allow dissent. I draw upon retrospectives as a useful tool, and here are some tips if you are a leader looking to use them effectively.

  1. Be clear about your motives – I can see some types of leaders wanting to use retrospectives as a way to assign blame (which is definitely not the point). It helps to be explicit upfront about what you expect from people and to let people know if there will be consequences. If people feel like retrospectives are being used to “find dirt” or for blame, they will refuse to actively participate in future sessions or simply lie.
  2. Find an independent facilitator – I address a number of the trade-offs of an independent facilitator in The Retrospective Handbook and when you’re a leader running a session, there will be times you will want to participate. Playing dual roles (participant + facilitator) can be really confusing for those simply participating, so I recommend at least starting retrospectives with an independent facilitator.
  3. Allow others to talk first – Leaders often come with a level of explicit or implicit authority. Different cultures treat authority differently, and it pays for a leader to be aware of the weight that is automatically added to their words. Hold back and allow others to speak; focus on listening first and foremost, and ask clarifying questions rather than being the first to put your opinion on the table.
  4. Pick a topic that affects all participants – When choosing participants, make sure that the topic is relevant and that everyone can contribute a different perspective on it. Although outside opinions about a particular topic are often welcomed, retrospectives are best when people can share their experiences. If, as a leader, you are gathering a group of people who don’t regularly work together around a common topic, reconsider whether a focused retrospective is a good solution.
  5. Keep an open mind – There is no point in gathering a group of people if the leader is going to follow through on an action they had already decided on before the retrospective. Consider scheduling a retrospective early on, with a first part very focused on gathering information and generating insights, and then a second part with a smaller, focused group on the next steps. By having time to digest the new information, you may find you end up with very different solutions than what you first had in mind.

When used well, retrospectives can create a safe space to invite people to dissent and create an ongoing culture of challenging the status quo.

Categories: Blogs

Understanding serverless cloud and clear

Xebia Blog - Sun, 01/29/2017 - 14:57
Serverless is considered the containers’ successor. But although it’s promoted heavily, it still isn’t the best fit for every use case. By knowing what its pitfalls and disadvantages are, it becomes quite easy to find the use cases which do fit the pattern. This post gives some technology perspectives on the maturity of serverless today.
Categories: Companies

Thinking About PMO Productivity

Johanna Rothman - Fri, 01/27/2017 - 16:15

In Manage Your Project Portfolio, I’m agnostic about who manages the project portfolio. I prefer that the managers responsible for the strategy make the project portfolio decisions. And, I recognize that the PMO often makes those decisions.

I am doing a series of webinars with TransparentChoice. The first one is live. See How many “points” does your PMO score? We spoke about how you might know if you need a project portfolio and the major measure of successful decisions:

It doesn’t matter how many projects you start. It matters how many you finish.

Hope you enjoy it!

Categories: Blogs

The Messy Coherence of X-Matrix Correlations

AvailAgility - Karl Scotland - Fri, 01/27/2017 - 14:25

I promised to say more about correlations in my last post on how to TASTE Success with the X-Matrix.

One of the things I like about the X-Matrix is that it allows clarity of alignment, without relying on an overly analytical structure. Rather than consisting of simple hierarchical parent-child relationships, it allows more elaborate many-to-many relationships of varying types. This creates a messy coherence – everything fits together, but without too much neatness or precision.

This works through the shaded matrices in the corners of the X-Matrix – the ones that together form an X and give this A3 its name! Each cell in the matrices represents a correlation between two of the numbered elements. It’s important to emphasise that we are representing correlation, and not causation. There may be a contribution of one to the other, but it is unlikely to be exclusive or immediate. Thus implementing Tactics collectively contributes towards applying Strategies and exhibiting Evidence. Similarly, applying Strategies and exhibiting Evidence both collectively contribute towards meeting Aspirations. What we are looking for is a messy coherence across all the pieces.

There are a few approaches I have used to describe different types of correlation.

  • Directness – Can a direct correlation be explained, or is the correlation indirect via another factor (i.e. it is oblique)? This tends to be easier to be objective about.
  • Strength – Is there a strong correlation between the elements, or is the correlation weak? This tends to be harder to describe because strong and weak are more subjective.
  • Likelihood – Is the correlation probable, possible or plausible? This adds a third option, and therefore another level of complexity, but the language can be useful.

Whatever the language, there is always the option of none. An X-Matrix where everything correlates with everything is usually too convenient and can be a sign of post-hoc justification.

Having decided on an approach, a symbol is used in each cell to visualise the nature of each correlation. I have tried letters and colours, and have recently settled on filled and empty circles, as in the example below. Filled circles represent direct or strong correlations, while empty circles represent indirect or weak correlations. (If using likelihood, a third variant would be needed, such as a circle with a dot in the middle).

Here we can see that there is a direct or strong correlation between “Increase Revenue +10%” (Aspiration 1) and “Global Domination” (Strategy 1). In other words this suggests that Strategy 1 contributes directly or strongly to Aspiration 1. As do all the Strategies, which indicates high coherence. Similarly, Strategy 1 has a direct/strong correlation with Aspiration 2, but Strategy 2 has no correlation, and Strategy 3 only has indirect/weak correlation.

Remember, this is just a hypothesis, and by looking at the patterns of correlations around the X-Matrix we can see and discuss the overall coherence. For example we might question why Strategy 3 only has Tactic 2 with an indirect/weak correlation. Or whether Tactic 2 is the best investment given its relatively poor correlations with both Strategies and Evidence. Or whether Evidence 4 is relevant given its relatively poor correlations with both Tactics and Aspiration.

It’s visualising and discussing these correlations that is often where the magic happens, as it exposes differences in understanding and perspective on what all the pieces mean and how they relate to each other. This leads to refinement of the X-Matrix, more coherence and stronger alignment.

Categories: Blogs

The future-fit organisation - practical experiences, part 1: The power and value of internal Agile Coaches

Xebia Blog - Fri, 01/27/2017 - 12:59
A successful transformation into an agile, future-fit organisation begins with laying the foundation that anchors the change: an organisation that starts with clear and understandable cultural values forming the bedrock on which the organisation rests. Not IT and/or Business each on their own, but together, with a shared “purpose” focused on (customer) value. The Agile
Categories: Companies

Let Operational Analytics improve your business

Xebia Blog - Fri, 01/27/2017 - 12:52
Products and services are getting smarter. The Google Car can drive itself. Your phone knows how to take the best selfie and it even tells you when to leave to be on time for that important meeting. The systems that run these services are able to use and understand data in a very smart way.
Categories: Companies

Use VSTS to deploy Functions as Infrastructure as Code

Xebia Blog - Fri, 01/27/2017 - 10:10
Azure Functions enable you to easily run small pieces of code in the cloud. To do this right, you need to setup continuous delivery of the infrastructure and the code involved. Otherwise you will end with an uncontrolled environment where nobody knows what code is actually running. In this blog post I’ll describe how to
Categories: Companies

Religion: An Apt Analogy for Agile

Powers of Two - Rob Myers - Thu, 01/26/2017 - 23:21
There are very few topics of exploration that have unceasingly retained my curiosity throughout life. One is software development, another is religious thought.* But in my teens and 20s, I didn't really expect them to overlap...

You've likely heard the word "religion" used to describe the Agile movement (or, now more aptly, the Agile industry), and usually it's meant unkindly.

But it's an apt analogy, and I've often used it myself.  In a way, Agile is indeed an umbrella term for a collection of "religions of software development." I don't think that's a bad thing.

A religion, as a topic of study, circumscribes a set of beliefs, ethics, and practices.  Every religion in history and today has its share of dedicated practitioners, crackpots, expert teachers, and charlatans. Every one started out as a new, challenging idea, appropriate to historical context, and usually rejected by the status quo. Every one provided value to its adherents. Every religion has been abused for personal gain. Every single one.

Over time, some ideas become the status quo, some practices that once had value are performed mechanically, and some ethical systems are paid mere lip-service. Like in a "cargo cult", people will continue to perform the practices though they get none of the value.  Often because they've confused cause and effect.

Agile is no different.  In the late 1990s, we were considered rebellious. Now you will encounter Agile frameworks that recommend ritualized adherence to well-regarded practices, and are packaged in a way that's palatable to folks with Theory X management styles.

Just last week I met with a team resisting Scrum as "too heavyweight" and "requiring attendance at too many meetings." That's sad, because Scrum is intended to be an extremely lightweight approach, with minimal overhead, oversight, or waste. "Where do such misinterpretations come from?!" you may ask. Human nature, my friend.

With Scrum and Extreme Programming (XP) in the 90s, we were looking for a "middle way" (if you'll pardon the personally-biased reference) between basement hacking and the Rational Unified Process (RUP).

We are still, today, looking to improve on the way we build software. The search for greater fluency doesn't ever end.

In my experience, the chosen system of thought, to continue to provide value and comfort, needs to (a) be self-reflective, and adapt to changing context (i.e., it has to "change with the times"), (b) adapt to regional realities and native cultures, while still providing the original, foundational values, and (c) be easy to describe and to learn the basics (though there are always higher and higher levels of fluency to be explored over a lifetime).

"Is he still talking about scaling Agile in a complex corporate setting?!" Of course.

* There are others, and you may note significant overlap, but they are not as relevant to this post: Science, and fitness. Ask me about those; I'll pedantically talk your ear off. You've been warned. ;-)
Categories: Blogs

Why Continuous Integration is like Broccoli

NetObjectives - Thu, 01/26/2017 - 16:17
I think Continuous Integration is the 'Broccoli' of the Agile world… everyone knows it is good for them but few want to really consume it.   Continuous Integration is one of the most valuable practices in the agile development universe! Despite that, it is quickly clear in most shops "doing agile" that no one there really wants to implement Continuous Integration (CI). Why not? What forces are...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Scrum Police Game now in French!

Growing Agile - Thu, 01/26/2017 - 13:10
Are you ready for the French Scrum Police? We have great news! Thanks to Frederic Vandaele from Trasys the game is now available in French. See the original blog post about the game for more details. You can get the files by clicking on the button below.
Categories: Companies
