
Feed aggregator

How to integrate Jmeter-maven printing HTML reports project with DevOps CI/CD pipeline in 4 easy steps

Xebia Blog - Tue, 12/27/2016 - 09:36
(1) Fork the below GitHub repo, which has a pom.xml file containing all the dependencies for JMeter and for printing the JMeter reports in HTML format: https://github.com/nishantguptaxe/JmeterMavenHtmlReports.git (2) Add your JMeter file under the src/test/jmeter folder and check your code into GitHub (3) Install the Go server, start the Go server, hit the Go url in
Categories: Companies

The Simple Leader: Align and Eliminate

Evolving Excellence - Sun, 12/25/2016 - 11:49

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen


My success, part of it certainly,
is that I have focused in on a few things.
– Bill Gates

Once you have a hoshin plan detailing what your organization’s priorities are, it’s time to face the reality of all the other projects you and your group are currently working on. This is often a “come-to-Jesus” time when organizational politics can reach a fever pitch, as project owners pitch why their projects, perhaps their raison d’être, deserve survival. It is also a great time to demonstrate the power of the hoshin plan as well as your leadership commitment to a new, defined path forward.

Compile a list of all current projects and significant activities. (This in itself will probably be an eye-opening experience.) Then, as a team, map that list against your principles, mission, why?, and hoshin plan. The hoshin plan will not list all the company’s appropriate or valuable projects, but it should contain the highest-priority objectives. All other projects must align to the principles, why?, and mission, and support and not conflict with the plan.

Project managers and teams on projects that no longer align with the organization’s future path should not be fearful. If done correctly, the projects on the hoshin plan will stretch the organization and need experienced project managers and teams to work on them. Think about how much easier your leadership role will be when all projects are identified and aligned with a hoshin plan that the organization owns and supports, not to mention the resources that are being saved or better invested.

Once again, consider doing the same for you personally. What are you working on that isn’t giving you value or contributing to your own plan? Eliminating the nonessentials in your life will give you more time and focus to create something you want even more.

Categories: Blogs

Lessons Learned from our “Fail Faire”

Agile Ottawa - Sat, 12/24/2016 - 17:22
On a cold December evening, several of us gathered to share our latest stories of Agile woe. One of the outcomes of a “Fail Faire” event is to gather a set of lessons learned. The session we hosted three years ago … Continue reading →
Categories: Communities

Go: First attempt at channels

Mark Needham - Sat, 12/24/2016 - 12:45

In a previous blog post I mentioned that I wanted to extract blips from The ThoughtWorks Radar into a CSV file and I thought this would be a good mini project for me to practice using Go.

In particular I wanted to try using channels and this seemed like a good chance to do that.

I watched a talk by Rob Pike on designing concurrent applications where he uses the following definition of concurrency:

Concurrency is a way to structure a program by breaking it into pieces that can be executed independently.

He then demonstrates this with the following diagram:

[diagram from Rob Pike's talk]

I broke the scraping application down into four parts:

  1. Find the links of blips to download ->
  2. Download the blips ->
  3. Scrape the data from each page ->
  4. Write the data into a CSV file

I don’t think we gain much by parallelising steps 1) or 4) but steps 2) and 3) seem easily parallelisable. Therefore we’ll use a single goroutine for steps 1) and 4) and multiple goroutines for steps 2) and 3).

We’ll create two channels:

  • filesToScrape
  • filesScraped

And they will interact with our components like this:

  • 2) will write the path of the downloaded files into filesToScrape
  • 3) will read from filesToScrape and write the scraped content into filesScraped
  • 4) will read from filesScraped and put that information into a CSV file.
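
Before looking at the real code, here's a minimal sketch of that fan-out/fan-in shape, with a placeholder task list standing in for the blips:

package main

import "fmt"

func main() {
	tasks := []string{"a", "b", "c"}
	results := make(chan string)

	// fan out: one goroutine per task, all writing to the same channel
	for _, task := range tasks {
		go func(task string) { results <- task + " processed" }(task)
	}

	// fan in: a single consumer reads back exactly len(tasks) results
	for i := 0; i < len(tasks); i++ {
		fmt.Println(<-results)
	}
}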


I decided to write a completely serial version of the scraping application first so that I could compare it to the parallel version. I had the following common code:

scrape/scrape.go

package scrape

import (
	"github.com/PuerkitoBio/goquery"
	"os"
	"bufio"
	"fmt"
	"log"
	"strings"
	"net/http"
	"io"
)

func checkError(err error) {
	if err != nil {
		fmt.Println(err)
		log.Fatal(err)
	}
}

type Blip struct {
	Link  string
	Title string
}

func (blip Blip) Download() File {
	parts := strings.Split(blip.Link, "/")
	fileName := "rawData/items/" + parts[len(parts) - 1]

	if _, err := os.Stat(fileName); os.IsNotExist(err) {
		resp, err := http.Get("http://www.thoughtworks.com" + blip.Link)
		checkError(err)
		body := resp.Body

		file, err := os.Create(fileName)
		checkError(err)

		writer := bufio.NewWriter(file)
		io.Copy(writer, body)
		writer.Flush() // flush buffered bytes, otherwise the end of the file can be lost
		file.Close()
		body.Close()
	}

	return File{Title: blip.Title, Path: fileName }
}

type File struct {
	Title string
	Path  string
}

func (fileToScrape File ) Scrape() ScrapedFile {
	file, err := os.Open(fileToScrape.Path)
	checkError(err)

	doc, err := goquery.NewDocumentFromReader(bufio.NewReader(file))
	checkError(err)
	file.Close()

	var entries []map[string]string
	doc.Find("div.blip-timeline-item").Each(func(i int, s *goquery.Selection) {
		entry := make(map[string]string, 0)
		entry["time"] = s.Find("div.blip-timeline-item__time").First().Text()
		entry["outcome"] = strings.Trim(s.Find("div.blip-timeline-item__ring span").First().Text(), " ")
		entry["description"] = s.Find("div.blip-timeline-item__lead").First().Text()
		entries = append(entries, entry)
	})

	return ScrapedFile{File:fileToScrape, Entries:entries}
}

type ScrapedFile struct {
	File    File
	Entries []map[string]string
}

func FindBlips(pathToRadar string) []Blip {
	blips := make([]Blip, 0)

	file, err := os.Open(pathToRadar)
	checkError(err)

	doc, err := goquery.NewDocumentFromReader(bufio.NewReader(file))
	checkError(err)

	doc.Find(".blip").Each(func(i int, s *goquery.Selection) {
		item := s.Find("a")
		title := item.Text()
		link, _ := item.Attr("href")
		blips = append(blips, Blip{Title: title, Link: link })
	})

	return blips
}

Note that we’re using the goquery library to scrape the HTML files that we download.

A Blip is used to represent an item that appears on the radar e.g. .NET Core. A File is a representation of that blip on my local file system and a ScrapedFile contains the local representation of a blip and has an array containing every appearance the blip has made in radars over time.

Let’s have a look at the single threaded version of the scraper:

cmd/single/main.go

package main

import (
	"fmt"
	"encoding/csv"
	"os"
	"github.com/mneedham/neo4j-thoughtworks-radar/scrape"
)


func main() {
	blips := scrape.FindBlips("rawData/twRadar.html")

	var filesToScrape []scrape.File
	for _, blip := range blips {
		filesToScrape = append(filesToScrape, blip.Download())
	}

	var filesScraped []scrape.ScrapedFile
	for _, file := range filesToScrape {
		filesScraped = append(filesScraped, file.Scrape())
	}

	blipsCsvFile, _ := os.Create("import/blipsSingle.csv")
	writer := csv.NewWriter(blipsCsvFile)
	defer blipsCsvFile.Close()

	writer.Write([]string{"technology", "date", "suggestion" })
	for _, scrapedFile := range filesScraped {
		fmt.Println(scrapedFile.File.Title)
		for _, blip := range scrapedFile.Entries {
			writer.Write([]string{scrapedFile.File.Title, blip["time"], blip["outcome"] })
		}
	}
	writer.Flush()
}

rawData/twRadar.html is a local copy of the A-Z page which contains all the blips. This version is reasonably simple: we create an array containing all the blips, scrape them into another array, and then write that array into a CSV file. And if we run it:

$ time go run cmd/single/main.go 

real	3m10.354s
user	0m1.140s
sys	0m0.586s

$ head -n10 import/blipsSingle.csv 
technology,date,suggestion
.NET Core,Nov 2016,Assess
.NET Core,Nov 2015,Assess
.NET Core,May 2015,Assess
A single CI instance for all teams,Nov 2016,Hold
A single CI instance for all teams,Apr 2016,Hold
Acceptance test of journeys,Mar 2012,Trial
Acceptance test of journeys,Jul 2011,Trial
Acceptance test of journeys,Jan 2011,Trial
Accumulate-only data,Nov 2015,Assess

It takes a few minutes and most of the time will be taken in the blip.Download() function – work which is easily parallelisable. Let’s have a look at the parallel version where goroutines use channels to communicate with each other:

cmd/parallel/main.go

package main

import (
	"os"
	"encoding/csv"
	"github.com/mneedham/neo4j-thoughtworks-radar/scrape"
)

func main() {
	var filesToScrape chan scrape.File = make(chan scrape.File)
	var filesScraped chan scrape.ScrapedFile = make(chan scrape.ScrapedFile)
	defer close(filesToScrape)
	defer close(filesScraped)

	blips := scrape.FindBlips("rawData/twRadar.html")

	for _, blip := range blips {
		go func(blip scrape.Blip) { filesToScrape <- blip.Download() }(blip)
	}

	for i := 0; i < len(blips); i++ {
		file := <-filesToScrape
		go func(file scrape.File) { filesScraped <- file.Scrape() }(file)
	}

	blipsCsvFile, _ := os.Create("import/blips.csv")
	writer := csv.NewWriter(blipsCsvFile)
	defer blipsCsvFile.Close()

	writer.Write([]string{"technology", "date", "suggestion" })
	for i := 0; i < len(blips); i++ {
		scrapedFile := <-filesScraped
		for _, blip := range scrapedFile.Entries {
			writer.Write([]string{scrapedFile.File.Title, blip["time"], blip["outcome"]})
		}
	}
	writer.Flush()
}

Let's remove the files we just downloaded and give this version a try.

$ rm rawData/items/*

$ time go run cmd/parallel/main.go 

real	0m6.689s
user	0m2.544s
sys	0m0.904s

$ head -n10 import/blips.csv 
technology,date,suggestion
Zucchini,Oct 2012,Assess
Reactive Extensions for .Net,May 2013,Assess
Manual infrastructure management,Mar 2012,Hold
Manual infrastructure management,Jul 2011,Hold
JavaScript micro frameworks,Oct 2012,Trial
JavaScript micro frameworks,Mar 2012,Trial
NPM for all the things,Apr 2016,Trial
NPM for all the things,Nov 2015,Trial
PowerShell,Mar 2012,Trial

So we're down from 190 seconds to 7 seconds, pretty cool! One interesting thing is that the order of the values in the CSV file will be different since the goroutines won't necessarily come back in the same order that they were launched. We do end up with the same number of values:

$ wc -l import/blips.csv 
    1361 import/blips.csv

$ wc -l import/blipsSingle.csv 
    1361 import/blipsSingle.csv

And we can check that the contents are identical:

$ cat import/blipsSingle.csv  | sort > /tmp/blipsSingle.csv

$ cat import/blips.csv  | sort > /tmp/blips.csv

$ diff /tmp/blips.csv /tmp/blipsSingle.csv 
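
One caveat with the parallel version: it launches a goroutine per blip, so every download is fired off at once. If that ever became a problem, a buffered channel can act as a semaphore to cap the number of in-flight downloads. Here's a sketch of a drop-in variant of the download loop above (the cap of 10 is arbitrary):

	sem := make(chan struct{}, 10) // at most 10 downloads in flight at once

	for _, blip := range blips {
		go func(blip scrape.Blip) {
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release the slot when the download finishes
			filesToScrape <- blip.Download()
		}(blip)
	}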


The code in this post is all on github. I'm sure I've made some mistakes/there are ways that this could be done better so do let me know in the comments or I'm @markhneedham on twitter.

Categories: Blogs

Go: cannot execute binary file: Exec format error

Mark Needham - Fri, 12/23/2016 - 20:24

In an earlier blog post I mentioned that I’d been building an internal application to learn a bit of Go and I wanted to deploy it to AWS.

Since the application was only going to live for a couple of days I didn’t want to spend a long time building anything fancy, so my plan was just to build the executable, SCP it to my AWS instance, and then run it.

My initial (somewhat naive) approach was to just build the project on my Mac and upload and run it:

$ go build

$ scp myapp ubuntu@aws...

$ ssh ubuntu@aws...

$ ./myapp
-bash: ./myapp: cannot execute binary file: Exec format error

That didn’t go so well! By reading Ask Ubuntu and Dave Cheney’s blog post on cross compilation I realised that I just needed to set the appropriate environment variables before running go build.

The following did the trick:

env GOOS=linux GOARCH=amd64 GOARM=7 go build
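
As an aside: GOOS and GOARCH pick the target operating system and architecture, and GOARM is only consulted when GOARCH=arm, so it is harmless but redundant for an amd64 build. The same pattern covers other targets, e.g.:

env GOOS=darwin GOARCH=amd64 go build
env GOOS=windows GOARCH=amd64 go build
env GOOS=linux GOARCH=arm GOARM=7 go build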

And that’s it! I’m sure there are more sophisticated ways of doing this that I’ll come to learn about, but for now this worked for me.

Categories: Blogs

Neo4j: Graphing the ThoughtWorks Technology Radar

Mark Needham - Fri, 12/23/2016 - 19:40

For a bit of Christmas holiday fun I thought it’d be cool to create a graph of the different blips on the ThoughtWorks Technology Radar and how the recommendations have changed over time.

I wrote a script to extract each blip (e.g. .NET Core) and the recommendation made in each radar that it appeared in. I ended up with a CSV file:

|----------------------------------------------+----------+-------------|
|  technology                                  | date     | suggestion  |
|----------------------------------------------+----------+-------------|
|  AppHarbor                                   | Mar 2012 | Trial       |
|  Accumulate-only data                        | Nov 2015 | Assess      |
|  Accumulate-only data                        | May 2015 | Assess      |
|  Accumulate-only data                        | Jan 2015 | Assess      |
|  Buying solutions you can only afford one of | Mar 2012 | Hold        |
|----------------------------------------------+----------+-------------|

I then wrote a Cypher script to create the following graph model:

[diagram of the graph model]

WITH ["Hold", "Assess", "Trial", "Adopt"] AS positions
UNWIND RANGE (0, size(positions) - 2) AS index
WITH positions[index] AS pos1, positions[index + 1] AS pos2
MERGE (position1:Position {value: pos1})
MERGE (position2:Position {value: pos2})
MERGE (position1)-[:NEXT]->(position2);

load csv with headers from "file:///blips.csv" AS row
MATCH (position:Position {value:  row.suggestion })
MERGE (tech:Technology {name:  row.technology })
MERGE (date:Date {value: row.date})
MERGE (recommendation:Recommendation {
  id: tech.name + "_" + date.value + "_" + position.value})
MERGE (recommendation)-[:ON_DATE]->(date)
MERGE (recommendation)-[:POSITION]->(position)
MERGE (recommendation)-[:TECHNOLOGY]->(tech);

match (date:Date)
SET date.timestamp = apoc.date.parse(date.value, "ms", "MMM yyyy");

MATCH (date:Date)
WITH date
ORDER BY date.timestamp
WITH COLLECT(date) AS dates
UNWIND range(0, size(dates)-2) AS index
WITH dates[index] as month1, dates[index+1] AS month2
MERGE (month1)-[:NEXT]->(month2);

MATCH (tech)<-[:TECHNOLOGY]-(reco:Recommendation)-[:ON_DATE]->(date)
WITH tech, reco, date
ORDER BY tech.name, date.timestamp
WITH tech, COLLECT(reco) AS recos
UNWIND range(0, size(recos)-2) AS index
WITH recos[index] AS reco1, recos[index+1] AS reco2
MERGE (reco1)-[:NEXT]->(reco2);

Note that I installed the APOC procedures library so that I could convert the string representation of a date into a timestamp using the apoc.date.parse function. The blips.csv file needs to go in the import directory of Neo4j.

Now we’re ready to write some queries.

The Technology Radar has 4 positions that can be taken for a given technology: Hold, Assess, Trial, and Adopt:

  • Hold: Proceed with caution.
  • Assess: Worth exploring with the goal of understanding how it will affect your enterprise.
  • Trial: Worth pursuing. It is important to understand how to build up this capability. Enterprises should try this technology on a project that can handle the risk.
  • Adopt: We feel strongly that the industry should be adopting these items. We use them when appropriate on our projects.

I was curious whether there had ever been a technology where the advice was initially to ‘Hold’ but had later changed to ‘Assess’. I wrote the following query to find out:

MATCH (pos1:Position {value:"Hold"})<-[:POSITION]-(reco)-[:TECHNOLOGY]->(tech),
      (pos2:Position {value:"Assess"})<-[:POSITION]-(otherReco)-[:TECHNOLOGY]->(tech),
      (reco)-[:ON_DATE]->(recoDate),
      (otherReco)-[:ON_DATE]->(otherRecoDate)
WHERE (reco)-[:NEXT]->(otherReco)
RETURN tech.name AS technology, otherRecoDate.value AS dateOfChange;

╒════════════╤══════════════╕
│"technology"│"dateOfChange"│
╞════════════╪══════════════╡
│"Azure"     │"Aug 2010"    │
└────────────┴──────────────┘

Only Azure! The page doesn’t have any explanation for the initial ‘Hold’ advice in April 2010 which was presumably just before ‘the cloud’ became prominent. What about the other way around? Are there any technologies where the suggestion was initially to ‘Assess’ but later to ‘Hold’?

MATCH (pos1:Position {value:"Assess"})<-[:POSITION]-(reco)-[:TECHNOLOGY]->(tech),
      (pos2:Position {value:"Hold"})<-[:POSITION]-(otherReco)-[:TECHNOLOGY]->(tech),
      (reco)-[:ON_DATE]->(recoDate),
      (otherReco)-[:ON_DATE]->(otherRecoDate)
WHERE (reco)-[:NEXT]->(otherReco)
RETURN tech.name AS technology, otherRecoDate.value AS dateOfChange;

╒═══════════════════════════════════╤══════════════╕
│"technology"                       │"dateOfChange"│
╞═══════════════════════════════════╪══════════════╡
│"RIA"                              │"Apr 2010"    │
├───────────────────────────────────┼──────────────┤
│"Backbone.js"                      │"Oct 2012"    │
├───────────────────────────────────┼──────────────┤
│"Pace-layered Application Strategy"│"Nov 2015"    │
├───────────────────────────────────┼──────────────┤
│"SPDY"                             │"May 2015"    │
├───────────────────────────────────┼──────────────┤
│"AngularJS"                        │"Nov 2016"    │
└───────────────────────────────────┴──────────────┘

A couple of these are JavaScript libraries/frameworks, so presumably the advice is now to use React instead. Let’s check:

MATCH (t:Technology)<-[:TECHNOLOGY]-(reco)-[:ON_DATE]->(date), (reco)-[:POSITION]->(pos)
WHERE t.name contains "React.js"
RETURN pos.value, date.value 
ORDER BY date.timestamp

╒═══════════╤════════════╕
│"pos.value"│"date.value"│
╞═══════════╪════════════╡
│"Assess"   │"Jan 2015"  │
├───────────┼────────────┤
│"Trial"    │"May 2015"  │
├───────────┼────────────┤
│"Trial"    │"Nov 2015"  │
├───────────┼────────────┤
│"Adopt"    │"Apr 2016"  │
├───────────┼────────────┤
│"Adopt"    │"Nov 2016"  │
└───────────┴────────────┘

Ember is also popular:

MATCH (t:Technology)<-[:TECHNOLOGY]-(reco)-[:ON_DATE]->(date), (reco)-[:POSITION]->(pos)
WHERE t.name contains "Ember"
RETURN pos.value, date.value 
ORDER BY date.timestamp

╒═══════════╤════════════╕
│"pos.value"│"date.value"│
╞═══════════╪════════════╡
│"Assess"   │"May 2015"  │
├───────────┼────────────┤
│"Assess"   │"Nov 2015"  │
├───────────┼────────────┤
│"Trial"    │"Apr 2016"  │
├───────────┼────────────┤
│"Adopt"    │"Nov 2016"  │
└───────────┴────────────┘

Let’s go off on a different tangent: how many technologies were introduced in the most recent radar?

MATCH (date:Date {value: "Nov 2016"})<-[:ON_DATE]-(reco)
WHERE NOT (reco)<-[:NEXT]-()
RETURN COUNT(*) 

╒══════════╕
│"COUNT(*)"│
╞══════════╡
│"45"      │
└──────────┘

Wow, 45 new things! How were they spread across the different positions?

MATCH (date:Date {value: "Nov 2016"})<-[:ON_DATE]-(reco)-[:TECHNOLOGY]->(tech), 
      (reco)-[:POSITION]->(position)
WHERE NOT (reco)<-[:NEXT]-()
WITH position, COUNT(*) AS count, COLLECT(tech.name) AS technologies
ORDER BY LENGTH((position)-[:NEXT*]->()) DESC
RETURN position.value, count, technologies

╒════════════════╤═══════╤══════════════════════════════════════════════╕
│"position.value"│"count"│"technologies"                                │
╞════════════════╪═══════╪══════════════════════════════════════════════╡
│"Hold"          │"1"    │["Anemic REST"]                               │
├────────────────┼───────┼──────────────────────────────────────────────┤
│"Assess"        │"28"   │["Nuance Mix","Micro frontends","Three.js","Sc│
│                │       │ikit-learn","WebRTC","ReSwift","Vue.js","Elect│
│                │       │ron","Container security scanning","wit.ai","D│
│                │       │ifferential privacy","Rapidoid","OpenVR","AWS │
│                │       │Application Load Balancer","Tarantool","IndiaS│
│                │       │tack","Ethereum","axios","Bottled Water","Cass│
│                │       │andra carefully","ECMAScript 2017","FBSnapshot│
│                │       │Testcase","Client-directed query","JuMP","Cloj│
│                │       │ure.spec","HoloLens","Android-x86","Physical W│
│                │       │eb"]                                          │
├────────────────┼───────┼──────────────────────────────────────────────┤
│"Trial"         │"13"   │["tmate","Lightweight Architecture Decision Re│
│                │       │cords","APIs as a product","JSONassert","Unity│
│                │       │ beyond gaming","Galen","Enzyme","Quick and Ni│
│                │       │mble","Talisman","fastlane","Auth0","Pa11y","P│
│                │       │hoenix"]                                      │
├────────────────┼───────┼──────────────────────────────────────────────┤
│"Adopt"         │"3"    │["Grafana","Babel","Pipelines as code"]       │
└────────────────┴───────┴──────────────────────────────────────────────┘

Lots of new things to explore over the holidays! The CSV files, import script, and queries used in this post are all available on github if you want to play around with them.

Categories: Blogs

Go: Templating with the Gin Web Framework

Mark Needham - Fri, 12/23/2016 - 16:30

I spent a bit of time over the last week building a little internal web application using Go and the Gin Web Framework and it took me a while to get the hang of the templating language so I thought I’d write up some examples.

Before we get started, I’ve got my GOPATH set to the following path:

$ echo $GOPATH
/Users/markneedham/projects/gocode

And the project containing the examples sits inside the src directory:

$ pwd
/Users/markneedham/projects/gocode/src/github.com/mneedham/golang-gin-templating-demo

Let’s first install Gin:

$ go get gopkg.in/gin-gonic/gin.v1

It gets installed here:

$ ls -lh $GOPATH/src/gopkg.in
total 0
drwxr-xr-x   3 markneedham  staff   102B 23 Dec 10:55 gin-gonic

Now let’s create a main function to launch our web application:

demo.go

package main

import (
	"github.com/gin-gonic/gin"
	"net/http"
)

func main() {
	router := gin.Default()
	router.LoadHTMLGlob("templates/*")

	// our handlers will go here

	router.Run("0.0.0.0:9090")
}

We’re launching our application on port 9090 and the templates live in the templates directory which is located relative to the file containing the main function:

$ ls -lh
total 8
-rw-r--r--  1 markneedham  staff   570B 23 Dec 13:34 demo.go
drwxr-xr-x  4 markneedham  staff   136B 23 Dec 13:34 templates

Arrays

Let’s create a route which will display the values of an array in an unordered list:

	router.GET("/array", func(c *gin.Context) {
		var values []int
		for i := 0; i < 5; i++ {
			values = append(values, i)
		}

		c.HTML(http.StatusOK, "array.tmpl", gin.H{"values": values})
	})

And the corresponding template:

templates/array.tmpl

<ul>
  {{ range .values }}
    <li>{{ . }}</li>
  {{ end }}
</ul>

And now we'll cURL our application to see what we get back:

$ curl http://localhost:9090/array
<ul>
    <li>0</li>
    <li>1</li>
    <li>2</li>
    <li>3</li>
    <li>4</li>
</ul>

What about if we have an array of structs instead of just strings?

import "strconv"

type Foo struct {
	value1 int
	value2 string
}

	router.GET("/arrayStruct", func(c *gin.Context) {
		var values []Foo
		for i := 0; i < 5; i++ {
			values = append(values, Foo{Value1: i, Value2: "value " + strconv.Itoa(i)})
		}

		c.HTML(http.StatusOK, "arrayStruct.tmpl", gin.H{"values": values})
	})

templates/arrayStruct.tmpl

<ul>
  {{ range .values }}
    <li>{{ .Value1 }} -> {{ .Value2 }}</li>
  {{ end }}
</ul>

cURL time:

$ curl http://localhost:9090/arrayStruct
<ul>
    <li>0 -> value 0</li>
    <li>1 -> value 1</li>
    <li>2 -> value 2</li>
    <li>3 -> value 3</li>
    <li>4 -> value 4</li>
</ul>

Maps

Now let's do the same for maps.

	router.GET("/map", func(c *gin.Context) {
		values := make(map[string]string)
		values["language"] = "Go"
		values["version"] = "1.7.4"

		c.HTML(http.StatusOK, "map.tmpl", gin.H{"myMap": values})
	})

templates/map.tmpl

<ul>
  {{ range .myMap }}
    <li>{{ . }}</li>
  {{ end }}
</ul>

And cURL it:

$ curl http://localhost:9090/map
<ul>
    <li>Go</li>
    <li>1.7.4</li>
</ul>

What if we want to see the keys as well?

	router.GET("/mapKeys", func(c *gin.Context) {
		values := make(map[string]string)
		values["language"] = "Go"
		values["version"] = "1.7.4"

		c.HTML(http.StatusOK, "mapKeys.tmpl", gin.H{"myMap": values})
	})

templates/mapKeys.tmpl

<ul>
  {{ range $key, $value := .myMap }}
    <li>{{ $key }} -> {{ $value }}</li>
  {{ end }}
</ul>

$ curl http://localhost:9090/mapKeys
<ul>
    <li>language -> Go</li>
    <li>version -> 1.7.4</li>
</ul>

And finally, what if we want to select specific values from the map?

	router.GET("/mapSelectKeys", func(c *gin.Context) {
		values := make(map[string]string)
		values["language"] = "Go"
		values["version"] = "1.7.4"

		c.HTML(http.StatusOK, "mapSelectKeys.tmpl", gin.H{"myMap": values})
	})

templates/mapSelectKeys.tmpl

<ul>
  <li>Language: {{ .myMap.language }}</li>
  <li>Version: {{ .myMap.version }}</li>
</ul>

$ curl http://localhost:9090/mapSelectKeys
<ul>
    <li>Language: Go</li>
    <li>Version: 1.7.4</li>
</ul>

I've found the Hugo Go Template Primer helpful for figuring this out so that's a good reference if you get stuck. You can find a go file containing all the examples on github if you want to use that as a starting point.

Categories: Blogs

Happy Holidays from your Targetprocess team

TargetProcess - Edge of Chaos Blog - Fri, 12/23/2016 - 11:20

Hello friends,

As 2016 draws to a close, we want to thank our users and all the members of the Targetprocess community for your continued support. We're on a mission to create the best visual management software possible, and we couldn't do it without you.

We're leaving 2016 with a clear purpose in mind, and looking forward to the challenges of next year with renewed vigor. Until then, let's all make sure we set aside a few moments to relax with friends and family.

And remember: every success, both great and small, is made up of hundreds of small steps. As long as you keep moving forward, you can create something beautiful. Best wishes for the holidays, and Happy New Year!

If you need a little more holiday cheer, check out our videos from previous years:
Seasons Greetings 2016
Seasons Greetings 2015
Seasons Greetings 2013

Categories: Companies

Will Agile be trashed?

Xebia Blog - Fri, 12/23/2016 - 11:19
Agile is hot. Almost every Fortune 500 company is “Doing the Agile Thing”. But along with the success, criticism is also growing rapidly. The post “Agile is Dead” from Matthew Kern was extremely popular. Many of his arguments are dead right. For example, Agile has become a brand name and a hype, and the original Agile Manifesto has
Categories: Companies

AGD Practice - The Silent Count

Agile Game Development - Thu, 12/22/2016 - 19:17
Daily Stand-ups have a frequent problem: the talkative coach (or Scrum Master).  There's nothing wrong with their speaking during the meeting, but if we want the team to take ownership of the work, the developers need to do most of the talking.

Coaches usually mean well, but they often come from a management background in an organization where work is assigned to developers and answers mainly come from management.  This creates a pattern of developers expecting problems to be solved by management.  We want to create a new pattern, where they solve most of the problems on their own.

Coaches need to coach developers through these pattern changes.  This requires emboldening them and sometimes creating a void that they must fill themselves.

A good practice for coaches is to ask questions - even questions they might know the answer to - and to wait for the answer. The practice is to silently count to ten after you ask the question. Don't be surprised if it takes 6-7 seconds before someone speaks up... long silences can be uncomfortable for a developer who knows the answer, but is a bit shy about speaking up. If you get to ten and no one has spoken, ask a bridge question: a question that is easier to answer and gets you halfway there.

Example

Coach: "Are we on track to hit our sprint goal this week?"

Silent count to 10.

Coach: "OK, are there any things that you might be worried about?"

After a few seconds a developer speaks up: "I'm not sure I'm creating the right animations for the melee".

Another developer speaks up: "I can sit with you after the meeting and go over what we need".

Benefits

Creating a pattern of solving problems among developers, without direct management supervision, will give you one of the greatest benefits of self-organization.  Having eight people solving 90% of the problems is a lot more efficient and effective than you being the bottleneck.


Categories: Blogs

Dear Former Prime Minister

Notes from a Tool User - Mark Levison - Thu, 12/22/2016 - 17:23

Dear Rt. Hon. Kim Campbell,

In early October we met in the Toronto airport while lining up to board for Edmonton. I’m the Ottawa-based management consultant, who helps organizations become more effective.

You asked what it is that I do, so I’ve undertaken to explain it here briefly, in a way that’s clear regardless of what business someone is in. At the core of what I do, the type of industry or organization is irrelevant to the application and the benefits it brings about.

What I Do

Software – more so than the robots we were warned about in the ‘60s – has taken over the world: from taxis being replaced by Uber, to newspapers ignored in favour of Google and Facebook, to video rental companies giving way to Netflix and YouTube. We’re moving from an age when software supported a business, to one where software is the business.

I help organizations thrive in the Software Age. I help leadership evolve from traditional, typically management-heavy methods and structures that rely on carrot-and-stick employee motivation, to self-organizing, self-motivating teams that adapt quickly and effectively in the face of industry, culture, and economic changes.

When you and I chatted, I also mentioned my wife’s work, which is possibly more important than my own. I explained my encounter with you to her, and she wrote a blog post: “Dear Former Prime Minister, Here’s what I do for women and why I do it.”

My wife explains that she is “on a mission to help women grow their financial literacy and their confidence in order to create better options for themselves and their families, especially when life happens, as it inevitably does.” In that respect, we have very related goals, except mine is focused on people – and, more recently, leaders and entire organizations – growing group strength and confidence, so they have better business options and responses when industry and economic challenges happen.

I look forward to meeting you again and speaking with the students at your college. I’m already intrigued by your interdisciplinary team concept, as it echoes what we find in truly effective organizations.

Categories: Blogs

Four Easy Steps to Get Your Life in Order, Cut Stress, and Double Productivity

Kanbanery - Wed, 12/21/2016 - 20:53

The article Four Easy Steps to Get Your Life in Order, Cut Stress, and Double Productivity originally appeared on Kanbanery.

A new year is just around the corner. For some of us, that just means a few weeks of writing the date wrong. For others, it means an opportunity to become the person we dream of being, living the life that person deserves. Many find the new year a perfect time to make a more compelling commitment than the ones their teachers, parents, or bosses ask them to make every day. If that’s you, here’s an idea for a new year’s resolution that would make a difference in your life in both the short and long term. Get your life in order in 2017. Or if that seems too big, get your life in order on January 2nd, 2017, and then keep it that way.

Here’s how.

I’ve been a huge fan of organizational systems for decades, from Franklin planners and Eisenhower matrices to Getting Things Done (GTD) and the personal kanban. They all address the same basic problem of how to know what to do when. We can deconstruct them and find some common elements that almost every productivity guru agrees your system has to have in order to work.

Get it out of your head

There’s nothing like the stress of feeling there’s something important that you should be doing if only you could remember what it is, except perhaps the stress of that moment when your boss, spouse, child, or friend shows up and tells you how you’ve just dropped the ball and let everyone down. Isn’t it crazy how, at that moment, the memory of you making the commitment jumps so clearly into your mind that you can feel yourself right there making that promise with every intention of following through, and then all the time between then and now is suddenly compressed into a blur of poor judgements? The best solution I’ve found to ensure that ideas and commitments don’t get lost is to have a small set of places where everything goes. I’ve tried to simplify this down to one place and failed. It’s just not convenient enough. But too many turns into clutter. So I have five “dump and forget” places for tasks:

Google Calendar

I put every event here. Meetings, birthdays, deadlines, vacations, travel, and conferences. If there’s something I need to do to be prepared, I add it as an email reminder. For example, my brother’s birthday is in November, and when I created the recurring event in Google Calendar, I added a reminder that emails me two weeks in advance to tell me to buy and send a card. If I have a status meeting in my calendar, I might have a reminder to update the status report the day before. That way, I can commit to a task weeks, months or years in advance and then forget about it, confident that I’ll know what I need to know when I need to know it and not before.

My whiteboard

I have a little magnetic whiteboard designed to look like a sheet of notebook paper stuck to the inside of the front door of my home. My family has learned that if they ask me to do something and I reply “sure, no problem” that they shouldn’t expect much. If I’m up to my elbows in cookie dough, they can scribble a reminder on the little whiteboard, and know it will get done. I also use this for ideas that pop up anytime I’m in the house and just want to capture them. Once it’s on the board, I don’t have to waste one iota of brainpower on remembering it.

A physical inbox

I have a regular plastic inbox on my desk at work and another at home. It starts every week empty and as things come up, like mail or bills, that don’t have to be dealt with immediately, they go into the inbox for later processing. I also scribble notes on Post-Its or scraps of paper throughout the day and dump anything I don’t have to take action on immediately into the inbox so I can, you guessed it, not waste one iota of brainpower remembering it.

Kanbanery

Everything I have to do, which comes to me by email, mail, inspiration, or conversation gets captured in my physical inbox or whiteboard and moved once a week into Kanbanery. I can also create new tasks on my personal kanban board either from the board itself, if I’m logged in, using hotkeys to enter several things quickly. If I’m not logged in, I can add task cards to Kanbanery by email or use the Kanbanery Chrome extension. Most things, though, get added on Monday morning when I empty my various inboxes into Kanbanery so that I have one list of things I could be doing all in one place, and all my other inboxes start the week empty.


Make the abstract concrete

Unless it’s a bill to pay or a toy to fix, most things we do don’t have much of a physical reality. They build up without taking up space. No one can see how busy we are, including ourselves, unless our job is sorting mail. So now that you have all those things written down someplace, find a way to see what you’re working on, what you’ve done, and what’s yet to do. Most people find “to do” lists with check marks uninspiring and even depressing. Sure, striking things off feels good, but the list itself just keeps growing and feels like an insatiable beast. That’s why I prefer a Kanbanery board.

I have a column for ideas, one for things I want to get done this week, and another for things I want to do today. Then, as I work through my day, I pull from the “To Do Today” column into “Doing” and finally “Done.” If I empty the today column and still feel motivated to work, I pull more stuff in from the “This Week” column. But once I’ve decided what to do this week and today, I collapse any columns I’m not using so I don’t have to look at what I’ve decided not to think about today. They’ll be there when I need to think about them tomorrow.

Prioritize

There are several tools for prioritization. The Eisenhower matrix divides things into Important/Unimportant and Urgent/Not Urgent. I find that a useful idea, especially the awareness that there are things that are urgent and unimportant, and that there are other things which are important but not urgent. That’s a reminder to plan your life, because if you don’t, your time will be filled with urgent trivialities. Most of what most people do most of the time could be left undone and nothing bad would ever happen as a result.


I like the DSDM prioritization practice called MoSCoW. This method divides things into Must Do, Should Do, Could Do, and Won’t Do. But it’s still more complex than I find I need.

I use a simplified version of the Kanban Method’s “cost of delay” metric to plan my days. I first pull into my “To Do Today” column anything where bad things would happen if it weren’t done. How bad, I leave to my discretion. An easy task that would be very costly if left undone will always make the cut, but if I see something hard to do which won’t hurt much if I put it off, then it might not make it in today, or ever. For me, the “very bad things will happen if these things aren’t done today” list is extremely short. It rarely has more than one or two items in it and is often empty. That’s my Must Do list. Once my Must Do list is complete, the rest of my day is discretionary, so I look for three sets of things to do next:

Stuff I want to do because I feel like it. If it’s valuable, but not urgent, and I’m in the mood to do it, then why not? I might not be in the mood when it does become urgent, and I’ll do a better job of it and have a happier day if I do it when I’m feeling motivated.

Stuff that’s likely to become tomorrow’s “Must Do or Very Bad Things Will Happen.” That’s how you keep the list short, which gives you more options every day.

Stuff that is super important, but will never be urgent. For example, writing a letter to my old mentor at my last job will never be urgent. If I never do it, no one will even know. Our relationship will grow more distant. We’ll eventually forget about each other. Nothing bad will ever happen. I’ll just grow old with one less friend in the world. And I’ll never know if I miss a great opportunity because he wasn’t thinking of me when he was looking for an investor or business partner.

Execute

Most productivity systems have little to say about the most important aspect of being productive, which is producing stuff. They seem to assume that once you know what needs to be done, you’ll just do it. My kit includes three tools from the productivity literature that all work to help with this critical component.

The two-minute rule

It’s amazing how many important tasks take a trivial amount of time. Invite a person to a meeting. Make a decision. Make a dental appointment. Sort the mail. Floss. Truly life-altering stuff. Keep your “to do” list uncluttered by not putting this stuff on it. If you are making your plan for the day, or just have a sudden inspiration to do something, and it’s something you can do right now in two minutes or less, just do it. That’s half your life’s problems, solved now and in real time. You’re welcome.

The Pomodoro Technique

For everything else, there’s the Pomodoro Technique. Why it works is worth a book, not a blog post, and I encourage you to read it. How it works is simple. Decide what to do next, set a timer for twenty-five minutes, then work on only that thing until it’s done or until the timer stops. Take a break; grab a cup of tea, jog around the block or watch a Louis C.K. video on YouTube. Have a laugh, or a sweat, or a sweet. Whatever takes your mind off work for five minutes. Then set the timer again and get back to work. Do one thing at a time. Finish it before moving on to the next thing. Kindness gratifies; Love prevails, and focus gets things done. The most powerful ideas are often the most simple.

Just get started

I hate washing dishes, so when I walk into the kitchen to wash dishes, I never plan to wash all of them. I only commit to washing ten of them. I count them as I go. But for some reason, once the first ten dishes are washed and my hands are wet and the sponge has soap on it and the sink’s already full of warm water I think, what the hell, it’s almost done. Might as well finish.

I rarely sit down to write a blog post. That takes a lot of time. I could always find a reason to do something else rather than spend a couple of hours writing. Usually, I sit down to write for five minutes. Maybe I’ll knock out an introduction and chisel away at the task so it’s not so big tomorrow. But usually, when I sit down to write for five minutes, half an hour or more passes before I look up and realize that I’ve just accidentally finished a draft of a blog post. A sentence turns into a paragraph. The paragraph turns into an idea. One idea leads to another. I might not have felt like writing a whole blog post when I opened my laptop half an hour ago, but I’m up to 2163 words already and still enjoying myself.

The scariest tasks always look far less scary when you’re five minutes into them. They can even start to look kind of fun, or at least satisfying. So if I’m putting something off because it’s too big, I just give it five or ten minutes. No big deal. No commitment. No repercussions if I don’t finish it in one sitting. And I’m usually pleasantly surprised by how much I get done that way.

Capture-Visualize-Commit-Execute

So there you are. Get your life in order. It seems like a lot, but there are only a few components that all play together to get your life in order. Capture everything, remember nothing. Visualize the work on a Kanbanery board. Make small commitments to a week, a day, and the next twenty-five minutes. Then focus and dig in using the two-minute rule and the Pomodoro Technique. Life, organized. Unicorns and rainbows. Happy New Year!

 

 



Categories: Companies

Agile DNA Webinar

Leading Answers - Mike Griffiths - Wed, 12/21/2016 - 20:11
I am excited to announce a free webinar with RMC Learning Solutions entitled “Agile DNA: The People and Process Elements of Successful Agile Projects” that will be taking place on January 11th 2017 at 12:00pm Central Time. This is an... Mike Griffiths
Categories: Blogs

Traditional Agile Estimating

Agile Estimator - Wed, 12/21/2016 - 19:08


First, let me tell you that I love this book. It covers the techniques that agile development teams use to estimate and plan their projects. It has an interesting and well worked out case study. Second, let me tell you that this post is NOT a book review. If you are familiar with this book, it is probably unnecessary for you to read this post at all. Anyone involved in agile estimating should be aware of these techniques. However, most of the information found in other posts will not involve them directly. Thanks to material like this book, those techniques are well understood. I will be covering other material on software estimating. Some of the material in this blog will be useful to estimators outside of the agile development team. This might include product managers or independent estimators. Independent estimators are usually internal or external consultants asked to take a look at a development project and report on how it is progressing. They usually must render an opinion regarding whether and when it will be completed. By the way, I should also mention how much I respect the author of this book. Mike Cohn has brought much to the agile community through his books, consulting, and lecturing. He was gracious enough to acknowledge my contributions to this book, despite the fact that I only made a few comments about the case study. There is one other disclaimer. Some of the information below may be slightly different from the above book. The information below is based on my experience. However, any deviations are not significant with respect to the reader’s understanding of these concepts.

User stories and story points are the primary planning artifacts of an agile project. For example, if a stock trading system were being developed, then there might be a user story like, “as a trader, I will be able to see 15 minute candlestick charts with formations where I expect price changes to be identified.” Another story might be, “as a trader, I will be able to see my account information.” Each story will be assigned story points. The first one might be 8 story points, and the second one might be 4. There are no units on story points. The first might be implemented in 8 days or 8 hours. The only thing we are estimating is that implementing the first story will take twice as long as the second one. Another thing to realize about story points is that there is only a discrete set of values that can be used. Usually, these are 0 (for extremely small), 1, 2, 3, 5, 8, 13, 20, 40 and 100. What about 4 from my example above? Some practitioners allow the use of 4 but do not allow 3 or 5. Otherwise, 3 or 5 would have had to be chosen for the second example story. In any case, the point is to limit the number of values that story points can take. These are rough estimates. It makes no sense to debate between 16 and 20 when the estimates are so imprecise at this point. There are three things to keep in mind about story points. First, they are estimates of effort, not size. They are not alternatives to function points or feature points. Second, they are relative, but non-standard. Across town, someone could be developing a trading system with the same two user stories. They might assign the first one 40 story points and the second one 20. They are saying the exact same thing, but it looks much different. Third, story points should not be mapped into real time. Even when you think you can implement 20 story points in a week, do not say that a story point is 2 hours. Practitioners agree on this and it will be presented without proof for now.

We talked about story points being assigned, but not how. One common way is through the use of a collaborative technique called planning poker. Planning poker sessions should be informal, like a friendly poker game. If this is the first time story points are being applied to this project, then the team must reach consensus on a story that is roughly in the middle with respect to implementation time. That story should be assigned 5 or 8 story points. The objective is to get a good distribution of story points. Otherwise, we are likely to have a collection of user stories with story point values of only 1, 2 or 3. This would be no better for estimating than a collection of user stories with story point values of 40 or 100. The wider distribution gives more, but not too much, precision to the size estimates. After this, someone reads each story in turn. This is often the product owner. There can be questions or discussions. After this, the team members assign a number of story points to it relative to the middle story. Some companies have decks of cards with each of the possible story point values printed on them. The team members can place one card in front of them face down. This is so other people are not influenced by the values already chosen. When everyone is ready, the cards are turned face up. If everyone agrees, then that value is assigned as the story points. Otherwise, the people with the extreme values, lowest and highest, explain why they chose them. There can be questions and discussions. Then another round is played in hopes of reaching consensus. Sometimes, someone will need to expedite the assignment of story points. If the discussions are running too long, then a 2 minute limit should be imposed. This could be done by anyone on the team. If the reader sees that the team is bogging down on whether the value should be 2, 3 or 5, then a value of 3 might be imposed. Alternately, the reader might decide to go with the majority view. In any case, the objective is to get through the stories in a reasonable amount of time. All of the estimates will be continually changing as the project progresses. The best way to learn how much effort is required to implement any user story is to implement user stories. In agile, developing adequate software is more important than developing a perfect plan!

Agile development is performed through a series of iterations. In the Scrum methodology, these are called sprints. Organizations set the size of these iterations. They can be anything from one week to one month, but organizations are moving toward the one or two week time frames. For each iteration, the product owner identifies the user stories that should be worked on. Remember that the length of the iteration is fixed. If the team implements the assigned user stories early, then the product owner looks at the user stories that are not yet implemented and selects some more for the team to work on. If there are unfinished user stories at the end of the iteration, then they are moved to the story backlog and tackled in a later iteration. There is an additional planning step associated with iterations. The team must take the user stories and decompose them into technical tasks. These tasks are often only recognizable to developers. There may be a task to create a database. There may be tasks to create API stubs to allow for implementation of screens before some interfaces are ready. They are often assigned to a single developer, but not always. Traditional project managers often think of tasks as work items that will take between one day and one week. Agile teams usually identify tasks that will take less than a day to complete. Tasks are usually estimated using ideal time, which will be discussed next. Some people utilize a measure called task points. Obviously, task points are much like story points in that they do not have specific time associated with them. My experience is that by the time you are at the task level, it is much more natural to start to talk about the hours required to deliver something.

Ideal time was a term coined by Kent Beck. It was the amount of time development would take “without distractions or disasters.” At one time, it was used instead of story points to estimate development time for an entire application. There were problems with this. When management was told that a development project would take 10,000 hours of ideal time, they assumed that 5 developers would be done in a year. They assumed the team would bridge the gap between ideal time and calendar time with some unpaid overtime. This was never realistic. The gap was simply too large. Many agile developers only accomplish 16 to 24 ideal hours of development work in a one week sprint. This would mean that an 80 hour work week would be necessary to deliver 40 ideal hours. This is not sustainable! People begin to accomplish less with each additional hour that they work. The project is doomed. Some teams try to compensate for this by inflating their ideal hour estimates. At that point, they are no longer estimating, they are negotiating. Management responds by negotiating for faster delivery. No one can remember where the estimating stopped and the negotiating for increased time and resources began. This is why story points, which are not tied to delivery time, became the preferred method for estimating projects, releases and iterations. Ideal time still makes sense as a way to estimate the tasks that make up a sprint. The sprint will take a week or whatever time frame the organization has committed to. Management cannot attempt to negotiate an early end to the sprint.

Velocity is the key to traditional agile estimating. If I am planning to drive to a city that is 100 miles away, and my average velocity is 50 MPH, then I should be there in 2 hours. Likewise, if a project has 100 story points left to implement, and the average development velocity is 5 story points per sprint, then it should take 20 more sprints to complete the project. If the sprints each take 1 week, then that will correspond to 20 weeks or about 5 months. Velocity is also calculated for the technical tasks that are done during an iteration. Here, each developer has his or her own velocity. It is calculated as the number of ideal hours that are accomplished during the iteration. Sometimes, it is shown as a percent. For example, if a developer accomplishes 24 ideal hours of work in a one week sprint, then the velocity might be expressed as 60%. Task related velocity is not used like iteration related velocity. There is no estimating of a sprint. If your organization has decided that sprints will take 5 days, then they will take exactly 5 days regardless of the velocity of any of the tasks. The task velocity can serve as a process improvement measure. The closer that ideal time gets to calendar time, the better. However, it is critical to understand the nature of the work people are involved in when evaluating task related velocity. An individual may only be accomplishing 2 days of ideal time work during a sprint because that individual is constantly being used to solve problems that other people are having. If this is the case, then this might be the best use of that person’s time and talents.

Categories: Blogs

From Developer to DevOps with Docker

Derick Bailey - new ThoughtStream - Wed, 12/21/2016 - 14:30

Once you learn the basic commands, building a Docker image is fairly simple.

There are only 2 instructions required in a Dockerfile, after all:

FROM <base image>

CMD ["<executable>"]

That’s it. Now you can "docker build ." and move on, right?

Building things

Technically, yes – you have a complete Dockerfile and you can build an image… but it might not run without further configuration (like copying the files that are to be run into the image).

It’s that need for additional configuration that tripped me up, recently.

I wanted to deploy a Node.js app to a server using Docker, so I grabbed my editor and started writing the Dockerfile for the image.

I set a base image that includes my preferred Linux distribution, with the right version of Node.

I copied the code into the image and ran npm install.

I called the node runtime and told it which script to start.

Then I built the image, tried to run it, and … error.

Oh, wait… working directory… rebuild, and ERROR?

Oh, right… I forgot to set the NODE_ENV environment variable… so I’ll just rebuild now, and … ERROR

ANOTHER ERROR?! BAH!!!

sigh… right, there’s also …

It took about an hour for me to get the image built and working correctly, but I did get it working.

When I saw the working Dockerfile, I realized something.

My desire to deploy a Node.js application got me to copy some code and tell node which script file to run.

But that wasn’t sufficient.

I had to do more: configure the working directory, install dependencies, set up port numbers, and …

The Dockerfile I had sitting in front of me resembled something more than just a script to start a node application.

It looked like an automated server configuration script – something my devops friends would use to build and deploy a new server.
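
To make that concrete, here's a minimal sketch of the shape my Dockerfile ended up taking (the base image tag, port, and script name are illustrative placeholders, not my actual app):

# base image with the right version of Node (the tag is an illustrative placeholder)
FROM node:6

# set the working directory so relative paths resolve inside the image
WORKDIR /usr/src/app

# install dependencies first, so this layer is cached between builds
COPY package.json .
RUN npm install

# copy in the rest of the application code
COPY . .

# production configuration and the port the app listens on
ENV NODE_ENV production
EXPOSE 3000

CMD ["node", "index.js"]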

I don’t know why I didn’t realize this before, but it was plain to see now.

Docker forced me to improve my understanding of DevOps.

Clearly, there’s more to DevOps than just configuring servers or Docker images.

But even if this is just the first step into what has previously been a mystic realm (to me, at least), it’s an important step.

Understanding how servers are configured and what it takes to run an application in production is a necessary part of our job as software developers.

Knowledge of production configuration and constraints inform our decisions around file system organization, runtime dependencies and other forms of software architecture.

Docker can help you understand those requirements and constraints.

And WatchMeCode can help you understand Docker.

Take the first steps toward DevOps and better development tooling. 

With the Guide to Learning Docker, you’ll be up and running with Docker in no time. 

With the (FREE!) Docker Cheatsheets, you won’t have to memorize arcane command-line options and configurations.

And with the Guide to Developing Node.js Apps in Docker, you’ll take your new skills and apply your existing development knowledge and tooling. 

Solve the “works on my machine” problem, permanently.

The post From Developer to DevOps with Docker appeared first on DerickBailey.com.

Categories: Blogs

The Simple Leader: Observe the Now

Evolving Excellence - Wed, 12/21/2016 - 11:47

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen


You can observe a lot by just watching.
– Yogi Berra

Mindful observation takes effort and practice, but it is very valuable if you want to be a leader. It allows you to watch processes in action and look for small nuances and opportunities for improvement. For example, the wait staff at top-tier hotels do this every day. One waiter is always watching, looking for a shift in a customer’s eyes that says she might need something, detecting a growing line of people waiting to be seated, or checking on food that needs to be delivered. This allows the staff to anticipate and resolve problems, often before the customers are aware they exist.

Being able to closely observe a situation allows things to flow much more smoothly.

The benefits of observation extend to the manufacturing setting as well. Taiichi Ohno had an exercise for his engineers and students where he’d draw a circle on the factory floor and tell them to stand in it and simply observe for a half hour. If they came back and reported that they didn’t see anything to improve, he’d send them back out.

The Ohno Circle exercise is very powerful and can be used on the factory floor, in the finance department, or even at home with the kids. In fact, it’s probably even more powerful in areas where processes are not visible or visibly defined. Just stand and watch. Resist the temptation to immediately jump into action. Think about and record what you’ve observed. Then improve it. In the Lean world, this is genchi genbutsu—go, see, and observe.

High-end hotels generally have observation down to a science. It is a core component of how they deliver great service. Several years ago, I was having a quiet breakfast at the Four Seasons in Bangkok after arriving late the previous evening. My table was at the side of an open atrium, so I was able to watch the staff in action. I’ve always been amazed by how the Four Seasons staff, whether at the restaurants or elsewhere, will be at your side exactly the instant you need them, but are also never annoyingly intrusive. Now I know how they do it.

Amidst the flurry of wait staff running around, I noticed there was always at least one person just standing and watching. It was not always the same person, but there was always one just looking around the room at the customers and the rest of the staff. If a customer looked up and around, indicating they needed something, the observing wait person immediately went over to that customer, while another staff member took over the watching and looking. If a line started to form at the front of the restaurant, the observer would head over and help with the seating. If another member of the wait staff needed help, he or she would have it within seconds and someone else would take over the watching. Someone was always standing, observing, and watching.

To test my own observation, I looked up and to the side, as if I needed something. Instantly, a waiter was at my side. I asked what he was watching for, and his response? “Just observing, sir.” Yes, “just” observing. There was no “just” about it. Observation is a key to their exceptional customer service. I wanted to ask if process improvements were identified and acted on, but the language barrier between my server and me hindered our conversation.

When observing a process, be it on the factory floor or in the accounting office, it is important to mindfully observe without prejudice, staying in the present, without trying to identify solutions. Simply watch, look for details, and, when appropriate, document them.

Categories: Blogs

Docker: Unknown – Unable to query docker version: x509: certificate is valid for

Mark Needham - Wed, 12/21/2016 - 09:11

I was playing around with Docker locally and somehow ended up with this error when I tried to list my docker machines:

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           Unknown   Unable to query docker version: Get https://192.168.99.101:2376/v1.15/version: x509: certificate is valid for 192.168.99.100, not 192.168.99.101

My Google Fu was weak and I couldn’t find any suggestions for what this might mean, so I tried shutting it down and starting it again!

On the restart I actually got some helpful advice:

$ docker-machine stop
Stopping "default"...
Machine "default" was stopped.
$ docker-machine start
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.

So I tried that:

$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.101:2376": x509: certificate is valid for 192.168.99.100, not 192.168.99.101
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.

And then I regenerated my certificates:

$ docker-machine regenerate-certs
Regenerate TLS machine certs?  Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

And now everything is happy again!

$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.9.0
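
One follow-up step worth noting (my addition, not part of the original fix): after a regeneration like this, the DOCKER_* variables in an already-open shell may still point at stale settings, so re-running the env command is a sensible final check:

$ eval "$(docker-machine env default)"
$ docker ps
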
Categories: Blogs

A Group of Geographically Distributed Staff is NOT a Scrum Team

Learn more about transforming people, process and culture with the Real Agility Program

It’s my opinion, and I think the opinion of the authors of Scrum, that a Scrum team must be collocated. A collection of geographically distributed staff is NOT a Scrum team.

If you work in a “distributed team”, please consider the following question.

Do the members of this group have authority to decide (if they wanted to) to relocate and work in the same physical space?

  • If you answer “Yes” with regard to your coworkers, then I’d encourage you to advise your colleagues toward collocating, even if only as an experiment for a few Sprints, so they can decide for themselves whether to remain remote.
  • If you answer “No”, the members do not have authority to decide to relocate:
    • then clearly it is not a self-organizing team;
    • clearly there are others in the organization telling those members how to perform their work;
    • and clearly they have dependencies upon others who hold authority (probably budgets as well) which have imposed constraints upon communication between team members.
    • CLEARLY, THEREFORE, IT IS NOT A SCRUM TEAM.
Learn more about our Scrum and Agile training sessions on WorldMindware.com

The post A Group of Geographically Distributed Staff is NOT a Scrum Team appeared first on Agile Advice.

Categories: Blogs

A look at Six Years of Blogging Stats

Agile Complexification Inverter - Tue, 12/20/2016 - 21:16
What do you get from six years of blogging about Agile/Scrum and your continued learning experiences?

Stats from Agile Complexification Inverter blog site

Well the stats are just one insignificant measure of what one gets from writing about their experience.

The more meaningful measures have been seeing some of these articles and resources put into practice by colleagues, and the discussions (offline, and sometimes in comments or on Twitter) with readers that have required me to refine my thinking and how I communicate it.  Interestingly, sometimes seeing a resource you have created being “borrowed” and used in another person’s or company’s artifact without attribution is both rewarding and a bit infuriating.  I like that the concept has resonated well with someone else and that they have gone to the trouble of borrowing it, then repeating, improving, or repurposing it.

Let me borrow someone else's concept:  "The Bad Artist Imitate, the GREAT Artists Steal." -- Banksy


Most of all, the collection of articles is a repository of resources that I do not need to carry around in my 3-4 lbs of white & grey matter.  I can off-load the storage of concepts, research pointers, and questions to semi-permanent storage.  This is a great benefit.

Categories: Blogs

Consider Rolling Wave Roadmap and Backlog Planning

Johanna Rothman - Tue, 12/20/2016 - 19:57

Many agile teams attempt to plan for an entire quarter at a time.

Sometimes, that works quite well. You have deliverables, and everyone understands the order in which you need to deliver them. You use agile because you can receive feedback about the work as you proceed.

You might make small adjustments, and you manage to stay on track with the work. In fact, you often complete what you thought you could complete in a quarter. (Congratulations to you!)

I rarely meet teams like that.

Instead, I meet and work with teams who discover something in the first or second iteration that means the entire rest of the quarter is suspect. As they proceed through those first few features/deliverables, they, including the PO, realize they don’t know what they thought they knew. They discovered something important.

Sometimes, the managers in the organization realize they want this team to work on a different project sometime in the quarter. Or, they want the team to alternate features (in flow) or projects (in iterations) so the managers can re-assess the project portfolio. Or, something occurs outside the organization and the managers need different deliverables.

If you’re like me, you then view all the planning you did for the rest of the quarter as waste. I don’t want to spend time planning for work I’m not going to do. I might need to know something about where the product is headed, but I don’t want to write stories or refine backlogs or even estimate work I’m not going to do.

If you are like me, you have an alternative: rolling wave, deliverable-based planning, where the plans become less specific the farther out they go.

In this one-quarter roadmap example, you can see how the teams completed the first iteration. That completion changes the color from pink to white. Notice how the last month of the quarter is grayed out. That’s what we think will happen, and we’re not sure.

We only have specific plans for two iterations. As the team completes this iteration, the PO and the team will refine/plan for what goes into the third iteration from here (the end of the second month). As the team completes work, the PO (and the PO Value team) can reassess what should go into the last part of this quarter and the final month.

If you work in flow, it’s the same idea if you keep your demos on a cadence.

What if you need a shorter planning horizon? Maybe you don’t need one-quarter plans. You can do the same thing with two-month plans or one-month plans.

This is exactly what happened with a team I’m working with. They tried to plan for a quarter at a time. And, often, it was the PO who needed to change things partway through the quarter. Or, the PO Value team realized they did not have a perfect crystal ball and needed to change the order of the features partway through the quarter.

They tried to move to two-month horizons, and that didn’t help. They moved to one-month horizons, and almost always change the contents for the last half of the second month. In the example above, notice how the Text Transfer work moved to farther out, and the secure login work moved closer in.

You might have the same kind of problem. If so, don’t plan details for the quarter. Plan details as far out as you can see, and that might be only one or two iterations in duration. Then, take the principles of what you want (next part of the engine, or next part of search, or whatever it is that you need) and plan the details just in time.
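
To make the tapering concrete, the level of detail in a one-quarter plan might look like this (a hypothetical sketch, not the roadmap from the post):

Iterations 1-2:        refined, estimated stories, ready to pull
Iteration 3:           features named and roughly sized, not yet split into stories
Rest of the quarter:   deliverable themes only, replanned just in time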

Rolling wave deliverable-based planning works for agile. In fact, you might think it’s the way agile should work.

If you like this approach to roadmapping, please join my Practical Product Owner workshop. All the details are on that page.

Categories: Blogs
