
Feed aggregator

The Innovation Revolution (A Time of Radical Transformation)

J.D. Meier's Blog - Mon, 05/04/2015 - 16:44

It was the best of times, it was the worst of times …

It’s not A Tale of Two Cities.   It’s a tale of the Innovation Revolution.

We’ve got real problems worth solving.  The stakes are high.  Time is short.  And abstract answers are not good enough.

In the book, Ten Types of Innovation: The Discipline of Building Breakthroughs, Larry Keeley, Helen Walters, Ryan Pikkel, and Brian Quinn explain how it is like A Tale of Two Cities in that it is the worst of times and it is the best of times.

But it is also like no other time in history.

It’s an Innovation Revolution … We have the technology and we can innovate our way through radical transformation.

The Worst of Times (Innovation Has Big Problems to Solve)

We’ve got some real problems to solve, whether it’s health issues, poverty, crime, or ignorance.  Duty calls.  Will innovation answer?

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“People expect very little good news about the wars being fought (whether in Iraq, Afghanistan, or on Terror, Drugs, Poverty, or Ignorance).  The promising Arab Spring has given way to a recurring pessimism about progress.  Gnarly health problems are on a tear the world over--diabetes now affects over eight percent of Americans--and other expensive disease conditions such as obesity, heart disease, and cancer are also now epidemic.  The cost of education rises like a runaway helium balloon, yet there is less and less evidence that it nets the students a real return on their investment.  Police have access to ever more elaborate statistical models of crime, but there is still way too much of it.  And global warming steadily produces more extreme and more dangerous conditions the world over, yet according to about half of our elected 'leaders,' it is still, officially, only a theory that can conveniently be denied.”

The Best of Times (Innovation is Making Things Happen)

Innovation has been answering.  There have been amazing innovations heard round the world.  It’s only the beginning for an Innovation Revolution.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“And yet ...

We steadily expect more from our computers, our smartphones, apps, networks, and games.  We have grown to expect routine and wondrous stories of new ventures funded through crowdsourcing.  We hear constantly of lives around the world transformed because of Twitter or Khan Academy or some breakthrough discovery in medicine.  Esther Duflo and her team at the Poverty Action Lab at MIT keep cracking tough problems that afflict the poor to arrive at solutions with demonstrated efficacy, and then, often the Gates Foundation or another philanthropic institution funds the transformational solution at unprecedented scale.

Storytelling is in a new golden age--whether in live events, on the radio, or in amazing new television series that can emerge anywhere in the world and be adapted for global tastes.  Experts are now everywhere, and shockingly easy and affordable to access.

Indeed, it seems clear that all the knowledge we've been struggling to amass is steadily being amplified and swiftly getting more organized, accessible, and affordable--whether through the magic of elegant little apps or big data managed in ever-smarter clouds or crowdfunding sites used to capitalize creative ideas in commerce or science.”

It’s a Time of Radical Transformation and New, More Agile Institutions

Both the pace and the scale of change will increase exponentially as the forces of innovation rally together.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“One way to make sense of these opposing conditions is to see us as being in a time of radical transformation.  To see the old institutions as being challenged as a series of newer, more agile ones arise.  In history, such shifts have rarely been bloodless, but this one seems to be a radical transformation in the structure, sources, and nature of expertise.  Indeed, among innovation experts, this time is one like no other.  For the very first time in history, we are in a position to tackle tough problems with ground-breaking tools and techniques.”

It’s time to break some ground.

Join the Innovation Revolution and crack some problems worth solving.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Innovation Life Cycle

Management Innovation is at the Top of the Innovation Stack

The Drag of Old Mental Models on Innovation and Change

The Myths of Business Model Innovation

Categories: Blogs

Hierarchy of Human Needs for the 21st Century

Agile Complexification Inverter - Mon, 05/04/2015 - 16:06
The new and revised edition of Maslow's Hierarchy of Human Needs for the 21st Century.



See Also:

The Starbuck's Test


Categories: Blogs

Neo4j: LOAD CSV – java.io.InputStreamReader there’s a field starting with a quote and whereas it ends that quote there seems to be character in that field after that ending quote. That isn’t supported.

Mark Needham - Mon, 05/04/2015 - 11:56

I recently came across the last.fm dataset via Ben Frederickson’s blog and thought it’d be an interesting one to load into Neo4j and explore.

I started with a simple query to parse the CSV file and count the number of rows:

LOAD CSV FROM "file:///Users/markneedham/projects/neo4j-recommendations/lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv" 
AS row FIELDTERMINATOR  "\t"
return COUNT(*)
 
At java.io.InputStreamReader@4d307fda:6484 there's a field starting with a quote and whereas it ends that quote there seems  to be character in that field after that ending quote. That isn't supported. This is what I read: 'weird al"'

This blows up because (as the message says) we’ve got a field which uses double quotes but then has other characters either side of the quotes.

A quick search through the file reveals one of the troublesome lines:

$ grep "\"weird" lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv  | head -n 1
0015371426d2cbef354b2f680340de38d0ebd2f0	7746d775-9550-4360-b8d5-c37bd448ce01	"weird al" yankovic	4099

I ran a file containing only that line through CSV Lint to see what it thought and indeed it is invalid:

(Screenshot: CSV Lint reports the line as invalid.)

Let’s clean up our file to use single quotes instead of double quotes and try the query again:

$ tr "\"" "'" < lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv > lastfm-dataset-360K/clean.tsv
LOAD CSV FROM "file:///Users/markneedham/projects/neo4j-recommendations/lastfm-dataset-360K/clean.tsv" as row FIELDTERMINATOR  "\t"
return COUNT(*)
 
17559530

And we’re back in business! Interestingly Python’s CSV reader chooses to strip out the double quotes rather than throw an exception:

import csv
with open("smallWeird.tsv", "r") as file:
    reader = csv.reader(file, delimiter="\t")
 
    for row in reader:
        print row
$ python explore.py
['0015371426d2cbef354b2f680340de38d0ebd2f0', '7746d775-9550-4360-b8d5-c37bd448ce01', 'weird al yankovic', '4099']

I prefer LOAD CSV's approach but it's an interesting trade-off I hadn't considered before.

Categories: Blogs

Interview with Lisa Crispin

Growing Agile - Mon, 05/04/2015 - 11:22
We interviewed Lisa Crispin asking her some questions a […]
Categories: Companies

June 16th in Sydney, Australia, and June 19th in Melbourne: Coaching Workshop

James Shore - Mon, 05/04/2015 - 10:01
04 May 2015 James Shore/Calendar

While I'm at Agile Australia this June, I'll be presenting full-day workshops in Sydney and Melbourne. Join me for some in-depth, small group learning about coaching for best-fit Agile:

  • Sydney: 16 June
  • Melbourne: 19 June
Sign up here and use the code AA15-SFND for a discount.

Bringing Fluency to Your Agile Teams: Coaching for Best-Fit Agile

Learn how to tailor your Agile coaching efforts to best fit the needs of your teams and organization. As teams grow in their understanding of Agile, their perspective of Agile shifts and changes, and so do the challenges they must overcome. Whether it's team cohesion, technical skills, or organizational politics, the right investment to make in your teams depends on where they're at, what they need, and what your organization is willing to provide.

Learn how to tailor your coaching efforts to best fit your context. You'll learn how to evaluate your teams' proficiencies and organizational needs, and how to balance benefits and investments. Using the Agile Fluency™ Model, a proven model for understanding team capabilities and opportunities, you'll be able to articulate what benefits to expect from your teams, what opportunities exist, and what efforts will be needed in order to take advantage of those opportunities.

Qualified participants will also be given the opportunity to license the Agile Fluency Diagnostic, a detailed tool for facilitating team self-assessments, at no additional charge.

See a detailed description and register at the Agile Australia sign up page. Be sure to use the code AA15-SFND for your discount.

Categories: Blogs

Play4Agile North America

Notes from a Tool User - Mark Levison - Mon, 05/04/2015 - 09:57
For the first time ever, Play4Agile comes to North America!

 

Play4Agile logo

Play4Agile is an annual community un-conference focused on helping people, teams & organisations to transform through games and playful approaches.

And Agile Pain Relief is proud to announce that they are the exclusive Play4Agile Gold Sponsor.

Play4Agile North America will run from September 11th to 13th, 2015, just north of Toronto ON, at the YMCA Cedar Glen Outdoor Centre. This is an all-inclusive, highly interactive un-conference event.

Tickets are limited so register early! Prices are all-inclusive (accommodations, meals, and the event). Details and registration available HERE or through the official page.

Come share, learn, network, reconnect and play!

Categories: Blogs

Nothing Comes to Mind: A Productivity Tip

Scrum 4 You - Mon, 05/04/2015 - 08:09

You sit down to write a blog post, and then nothing comes to mind. That's annoying, because it's not as if I have all the time in the world and can afford to just sit around. (Which reminds me, I could write a tip about simply wanting to do nothing. There is even a feature in "Psychologie heute" in which doing nothing is explained as an important aspect of productivity.)

But it is strange: you make time for it, you sit in the café of your choice (in my case a McDonald's in an industrial park), you have your double espresso in front of you and your laptop unpacked, and you want to write something. For Monday's blog post. It's supposed to be especially good, so it should really say something meaningful.

But that's how it goes: right then you hit a block. Whoever wants something too badly tenses up. Another problem, and it happens to me again and again: I am constantly searching for the perfect writing environment, for the right editor. Word is okay when I have to work with the publisher, but for plain writing there are now hundreds of creative tools that are all quite a bit better. Yet most of them share the same flaw: they overwhelm the writer with too many features. Products that can do more than the author really needs. Products that create lists, offer comment functions, and make documents available on 1,000 devices at once. (That could be another blog post: how simple it would be if a Product Owner limited the features to the absolutely necessary; see "Rework" by Jason Fried.)

I would prefer to write everything in my favorite tool, Evernote (this post is being written in it right now), but the OS X version gets in the way of writing more than it helps. The web editor is great: simple and distraction-free, though not yet ideal. Still, it does everything you need.

Now I have also discovered Medium, a blogging platform from Evan Williams, the founder of Twitter and Blogger. Really great for bloggers. I immediately start wondering whether we should move our own blogs to www.medium.com. That would have the advantage of making writing even easier while also making it easier to build a community. And I would no longer have to be annoyed that our website is currently not well suited to reading blog posts on mobile devices. Questions upon questions that keep me from actually producing a new blog post.

Wait, there it is: the thing I have just written. I wanted to write a productivity tip, and what did I do? I wrote one, and showed how you can write a blog post even when you don't know what you want to write. The method of choice: freewriting. There is a wonderful book on it, "Writing Without Teachers" by Peter Elbow.

In freewriting you write down, completely without judgment, whatever is going through your head, without stopping. And when nothing more comes to mind, you write down that nothing is coming to mind, and after a while the ideas start flowing again. To borrow from the Open Space method: whatever comes out of it is the only thing that could have come out.

That is true productivity: listening to yourself and following your own voice.


Categories: Blogs

How To Get Innovation to Succeed Instead of Fail

J.D. Meier's Blog - Mon, 05/04/2015 - 02:44

“Because the purpose of business is to create a customer, the business enterprise has two–and only two–basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs. Marketing is the distinguishing, unique function of the business.” – Peter Drucker

I’m diving deeper into patterns and practices for innovation.

Along the way, I’m reading and re-reading some great books on the art and science of innovation.

One innovation book I'm seriously enjoying is Ten Types of Innovation: The Discipline of Building Breakthroughs by Larry Keeley, Helen Walters, Ryan Pikkel, and Brian Quinn.

Right up front, Larry Keeley shares some insight into the journey to this book.  He says that this book really codifies, structures, and simplifies three decades of experience from Doblin, a consulting firm focused on innovation.

For more than three decades, Doblin tried to answer the following question:

“How do we get innovation to succeed instead of fail?” 

Along the journey, there were a few ideas that they used to bridge the gap in innovation between the state of the art and the state of the practice.

Here they are …

Balance 3 Dimensions of Innovation (Theoretical Side + Academic Side + Applied Side)

Larry Keeley and his business partner Jay Doblin, a design methodologist, always balanced three dimensions of innovation: a theoretical side, an academic side, and an applied side.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“Over the years we have kept three important dimensions in dynamic tension.  We have a theoretical side, where we ask and seek real answers to tough questions about innovation.  Simple but critical ones like, 'Does brainstorming work?' (it doesn't), along with deep and systemic ones like, 'How do you really know what a user wants when the user doesn't know either?'  We have an academic side, since many of us are adjunct professors at Chicago's Institute of Design and this demands that we explain our ideas to smart young professionals in disciplined, distinctive ways.  And third, we have an applied side, in that we have been privileged to adapt our innovation methods to many of the world's leading global enterprises and start-ups that hanker to be future leading firms.”

Effective Innovation Needs a Blend of Analysis + Synthesis

Innovation is a balance and blend of analysis and synthesis.  Analysis involves tearing things down, while synthesis is building new things up.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“From the beginning, Doblin has itself been interdisciplinary, mixing social sciences, technology, strategy, library sciences, and design into a frothy admixture that has always tried to blend both analysis, breaking tough things down, with synthesis, building new things up.  Broadly, we think any effective innovation effort needs plenty of both, stitched together as a seamless whole.”

Orchestrate the Ten Types of Innovation to Make a Game-Changing Innovation

Game-changing innovation is an orchestration of the ten types of innovation.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“The heart of this book is built around a seminal Doblin discovery: that there are (and have always been) ten distinct types of innovation that need to be orchestrated with some care to make a game-changing innovation.“

The main idea is that innovation fails if you try to solve it with just one dimension.

You can’t just take a theoretical approach, and hope that it works in the real-world.

At the same time, innovation fails if you don’t leverage what we learn from the academic world and actually apply it.

And, if you know the ten types of innovation, you can focus your efforts more precisely.

You Might Also Like

Innovation Life Cycle

Management Innovation is at the Top of the Innovation Stack

No Slack = No Innovation

The Drag of Old Mental Models on Innovation and Change

The Myths of Business Model Innovation

Categories: Blogs

How to deploy an ElasticSearch cluster using CoreOS and Consul

Xebia Blog - Sun, 05/03/2015 - 14:39

The hot potato in the room of containerized solutions is persistent services. Stateless applications are easy and trivial, but deploying a persistent service like ElasticSearch is a totally different ball game. In this blog post we will show you how easy it is on this platform to create ElasticSearch clusters. The key to this ease is the ability to look up the external IP addresses and port numbers of all cluster members in Consul, combined with the reusable power of the CoreOS unit file templates. The presented solution is a ready-to-use ElasticSearch component for your application.

This solution:

  • uses ephemeral ports so that we can actually run multiple ElasticSearch nodes on the same host
  • mounts persistent storage under each node to prevent data loss on server crashes
  • uses the power of the CoreOS unit template files to deploy new ElasticSearch clusters.


In previous blog posts we defined A High Available Docker Container Platform using CoreOS and Consul and showed how we can add persistent storage to a Docker container.

Once this platform is booted, the only thing you need to do to deploy an ElasticSearch cluster is to submit the following fleet unit system template file elasticsearch@.service and start 3 or more instances.

Booting the platform

To see the ElasticSearch cluster in action, first boot up our CoreOS platform.

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service
cd coreos-container-platform-as-a-service/vagrant
vagrant up
./is_platform_ready.sh
Starting an ElasticSearch cluster

Once the platform is started, submit the elasticsearch unit file and start three instances:

export FLEETCTL_TUNNEL=127.0.0.1:2222
cd ../fleet-units/elasticsearch
fleetctl submit elasticsearch@.service
fleetctl start elasticsearch@{1..3}

Now wait until all elasticsearch instances are running by checking the unit status.

fleetctl list-units
...
UNIT            MACHINE             ACTIVE  SUB
elasticsearch@1.service f3337760.../172.17.8.102    active  running
elasticsearch@2.service ed181b87.../172.17.8.103    active  running
elasticsearch@3.service 9e37b320.../172.17.8.101    active  running
mnt-data.mount      9e37b320.../172.17.8.101    active  mounted
mnt-data.mount      ed181b87.../172.17.8.103    active  mounted
mnt-data.mount      f3337760.../172.17.8.102    active  mounted
Create an ElasticSearch index

Now that the ElasticSearch cluster is running, you can create an index to store data.

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/ -d \
     '{ "settings" : { "index" : { "number_of_shards" : 3, "number_of_replicas" : 2 } } }'
Insert a few documents
curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/1 -d@- <<!
{
    "first_name" : "John",
    "last_name" :  "Smith",
    "age" :        25,
    "about" :      "I love to go rock climbing",
    "interests": [ "sports", "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/2 -d@- <<!
{
    "first_name" :  "Jane",
    "last_name" :   "Smith",
    "age" :         32,
    "about" :       "I like to collect rock albums",
    "interests":  [ "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/3 -d@- <<!
{
    "first_name" :  "Douglas",
    "last_name" :   "Fir",
    "age" :         35,
    "about":        "I like to build cabinets",
    "interests":  [ "forestry" ]
}
!
And query the index
curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Restarting the cluster

Even when you restart the entire cluster, your data is persisted.

fleetctl stop elasticsearch@{1..3}
fleetctl list-units

fleetctl start elasticsearch@{1..3}
fleetctl list-units

curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Open the console

Finally you can see the servers and the distribution of the index in the cluster by opening the console
http://elasticsearch.127.0.0.1.xip.io:8080/_plugin/head/.

(Screenshot: the elasticsearch head console showing the servers and the distribution of the index.)

Deploy other ElasticSearch clusters

Changing the name of the template file is the only thing you need to deploy another ElasticSearch cluster.

cp elasticsearch\@.service my-cluster\@.service
fleetctl submit my-cluster\@.service
fleetctl start my-cluster\@{1..3}
curl my-cluster.127.0.0.1.xip.io:8080
How does it work?

Starting a node in an ElasticSearch cluster is quite trivial, as shown by the command line below:

exec gosu elasticsearch elasticsearch \
    --discovery.zen.ping.multicast.enabled=false \
    --discovery.zen.ping.unicast.hosts=$HOST_LIST \
    --transport.publish_host=$PUBLISH_HOST \
    --transport.publish_port=$PUBLISH_PORT \
     $@

We use the unicast protocol and specify our own publish host and port, along with the list of IP addresses and port numbers of all the other nodes in the cluster.

Finding the other nodes in the cluster

But how do we find the other nodes in the cluster? That is quite easy. We query the Consul REST API for all entries with the same service name that are tagged "es-transport". This is the service exposed by ElasticSearch on port 9300.

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport

...
[
    {
        "Node": "core-03",
        "Address": "172.17.8.103",
        "ServiceID": "elasticsearch-1",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49170
    },
    {
        "Node": "core-01",
        "Address": "172.17.8.101",
        "ServiceID": "elasticsearch-2",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    },
    {
        "Node": "core-02",
        "Address": "172.17.8.102",
        "ServiceID": "elasticsearch-3",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    }
]

Turning this into a comma-separated list of network endpoints is done using the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r '[ .[] | [ .Address, .ServicePort | tostring ] | join(":")  ] | join(",")'
Finding your own network endpoint

As you can see in the above JSON output, each service entry has a unique ServiceID. To obtain our own endpoint, we use the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r ".[] | select(.ServiceID==\"$SERVICE_9300_ID\") | .Address, .ServicePort" 
Finding the number of nodes in the cluster

The intended number of nodes in the cluster is determined by counting the number of fleet unit instance files in CoreOS on startup and passing this number in as an environment variable.

TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l)

The %p refers to the part of the fleet unit file before the @ sign.

The Docker run command

The Docker run command is shown below. ElasticSearch exposes two ports: port 9200 exposes a REST api to the clients and port 9300 is used as the transport protocol between nodes in the cluster. Each port is a service and tagged appropriately.

ExecStart=/bin/sh -c "/usr/bin/docker run --rm \
    --name %p-%i \
    --env SERVICE_NAME=%p \
    --env SERVICE_9200_TAGS=http \
    --env SERVICE_9300_ID=%p-%i \
    --env SERVICE_9300_TAGS=es-transport \
    --env TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l) \
    -P \
    --dns $(ifconfig docker0 | grep 'inet ' | awk '{print $2}') \
    --dns-search=service.consul \
    cargonauts/consul-elasticsearch"

The options are explained in the table below:

  • --env SERVICE_NAME=%p: The name of this service to be advertised in Consul, resulting in a FQDN of %p.service.consul, and used as the cluster name. %p refers to the first part of the fleet unit template file, up to the @.
  • --env SERVICE_9200_TAGS=www: The tag assigned to the service at port 9200. This is picked up by the http-router, so that any http traffic to the host elasticsearch is directed to this port.
  • --env SERVICE_9300_ID=%p-%i: The unique id of this service in Consul. This is used by the startup script to find its external port and ip address in Consul, and it is used as the node name for the ES server. %p refers to the first part of the fleet unit template file, up to the @; %i refers to the second part of the fleet unit file, up to the .service.
  • --env SERVICE_9300_TAGS=es-transport: The tag assigned to the service at port 9300. This is used by the startup script to find the other servers in the cluster.
  • --env TOTAL_NR_OF_SERVERS=$(...): The number of submitted unit files is counted and passed in as the environment variable TOTAL_NR_OF_SERVERS. The start script waits until this number of servers is actually registered in Consul before starting the ElasticSearch instance.
  • --dns $(...): Sets DNS to query on the docker0 interface, where Consul is bound on port 53. (The docker0 interface ip address is chosen at random from a specific range.)
  • --dns-search=service.consul: The default DNS search domain.

Sources

The sources for the ElasticSearch repository can be found on github.

  • start-elasticsearch-clustered.sh: complete startup script of elasticsearch
  • elasticsearch: CoreOS fleet unit files for the elasticsearch cluster
  • consul-elasticsearch: sources for the Consul ElasticSearch repository

Conclusion

CoreOS fleet template unit files are a powerful way of deploying ready to use components for your platform. If you want to deploy cluster aware applications, a service registry like Consul is essential.

Categories: Companies

Coding: Visualising a bitmap

Mark Needham - Sun, 05/03/2015 - 02:19

Over the last month or so I’ve spent some time each day reading a new part of the Neo4j code base to get more familiar with it, and one of my favourite classes is the Bits class which does all things low level on the wire and to disk.

In particular I like its toString method which returns a binary representation of the values that we’re storing in bytes, ints and longs.

I thought it'd be a fun exercise to try and write my own function which takes in a 32-bit bitmap and returns a string containing a 1 or 0 depending on whether each bit is set or not.

The key insight is that we need to iterate down from the highest order bit and then create a bit mask of that value and do a bitwise and with the full bitmap. If the result of that calculation is 0 then the bit isn’t set, otherwise it is.

For example, to check if the highest order bit (index 31) was set our bit mask would have the 32nd bit set and all of the others 0’d out.

java> (1 << 31) & 0x80000000
java.lang.Integer res5 = -2147483648

If we wanted to check if lowest order bit was set then we’d run this computation instead:

java> (1 << 0) & 0x80000000
java.lang.Integer res7 = 0
 
java> (1 << 0) & 0x00000001
java.lang.Integer res8 = 1

Now let’s put that into a function which checks all 32 bits of the bitmap rather than just the ones we define:

private String  asString( int bitmap )
{
    StringBuilder sb = new StringBuilder();
    sb.append( "[" );
    for ( int i = Integer.SIZE - 1; i >= 0; i-- )
    {
        int bitMask = 1 << i;
        boolean bitIsSet = (bitmap & bitMask) != 0;
        sb.append( bitIsSet ? "1" : "0" );
 
        if ( i > 0 &&  i % 8 == 0 )
        {
            sb.append( "," );
        }
    }
    sb.append( "]" );
    return sb.toString();
}

And a quick test to check it works:

@Test
public void shouldInspectBits()
{
    System.out.println(asString( 0x00000001 ));
    // [00000000,00000000,00000000,00000001]
 
    System.out.println(asString( 0x80000000 ));
    // [10000000,00000000,00000000,00000000]
 
    System.out.println(asString( 0xA0 ));
    // [00000000,00000000,00000000,10100000]
 
    System.out.println(asString( 0xFFFFFFFF ));
    // [11111111,11111111,11111111,11111111]
}

Neat!

Categories: Blogs

Where Have I Been?

Oh, hai internets. It’s been a while. Did you miss me?

Let me tell you what I’ve been up to.

In the fall of 2012 I shut down my consulting practice (Quality Tree Software) and my studio (Agilistry), and took a job with Pivotal. Actually, to be precise, I joined Pivotal Labs; Pivotal did not even exist in 2012.

Pivotal came into existence in April 2013 as a spin out from EMC and VMWare. Labs is part of Pivotal, but we are more of a product company focused on cloud and data than a services company. We work on Cloud Foundry, an open source Platform-as-a-Service (PaaS), and have our own distribution as well as our hosted service. The other side of our business is data. Our Big Data Suite includes GPDB (an MPP database), HAWQ (SQL on Hadoop), and Gemfire (an in-memory data grid).

My role at Pivotal has evolved in the time I’ve been there.

For the first couple years, I was the Director of Quality Engineering on Cloud Foundry. It’s a title I swore I would never take again. But my job was different than you might imagine. I did not direct the efforts of quality engineers. Rather, I paid attention to our feedback cycles. Teams own their tests, their CI pipelines, and ultimately the quality of their deliverables. I just helped connect the dots. By the way, if you want to know more about quality and testing on Cloud Foundry, I did get out of the building long enough to give a talk on it at the Wikimedia Foundation. I also gave a talk on the Care and Feeding of Feedback cycles at DOES2014.

In the last few months I moved over to our Data organization in Palo Alto. This changed my commute substantially, so my family and I are moving this summer. That will be an adventure. We’ve been in the same house for 17 years. So wish me luck with that.

Along with the move to our data org, my title changed. We removed the word “quality” from it since what I do does not look anything like traditional quality engineering. So I’m now a director of engineering. But the work I do on a daily basis with our Data teams looks a lot like what I did with Cloud Foundry: I’m deeply involved in hiring, cross-team coordination, improving our release practices, improving builds and CI to make the developer workflow better, and coordinating with our product organization to make sure teams have a steady stream of high value work.

I’m also doing my best to climb the steep learning curve of MPP databases and Hadoop. It helps that I worked at Sybase once upon a time. But that was 20 years ago. So between the fact that I was doing very different work 20 years ago, that I’ve forgotten much of what I learned, and that things have changed a bit in two decades, my prior database experience is only helping me a little in understanding my new context.

I have to say that I love working at Pivotal. I adore the people, am fascinated by the products, and am passionate about the way we work. Coming back to Pivotal was like coming home. (After all, Pivotal Labs is where I learned Agile over a decade ago.)

Some of you have noted that I don’t get out much anymore. I’m not at conferences and I don’t travel much. Since I’m in an inward facing role it’s difficult for me to carve out time to get out into the community. I’d like to see my industry friends more often and I am always honored to be invited to speak. But I turn down the vast majority of speaking invitations. My job takes up all my available time and brain cells.

So that’s what I’m up to and why I’ve been silent here for so long. I do have things to say though. I’ve learned a lot in the last 30 months. And I’m learning more every day. So I hope to carve out time to share what I’m learning here. But no promises about when, exactly, I’ll post.

Categories: Blogs


Next-gen Web Apps with Isomorphic JavaScript

Xebia Blog - Fri, 05/01/2015 - 21:54

The web application landscape has recently seen a big shift in application architecture. Nowadays we build so-called Single Page Applications. These are web applications which render and run in the browser, powered by JavaScript. They are called “Single Page” because in such an application the browser never actually switches between pages. All interaction takes place within a single HTML document. This is great because users will not see a ”flash of white” whenever they perform an action, so all interaction feels much more fluid and natural. The application seems to respond much quicker which has a positive effect on user experience and conversion of the site. Unfortunately Single Page Applications also have several big drawbacks, mostly concerning the initial loading time and poor rankings in search engines.

Continue reading on Medium »

Categories: Companies

People Over Process

Agilitrix - Michael Sahota - Fri, 05/01/2015 - 17:09

Here is the latest version of my “People over Process” slides that are about coming back to the heart of Agile: People – to unleash astonishing results.

It covers:

  1. Intro – People over Process.
  2. Agile = Culture. Whole Agile.
  3. Focus on People: Vulnerability, Authentic Connection, Safety & Trust (VAST)
  4. People-centric organizations (Laloux Culture Model)
  5. People-centric Change

People over Process (Agile & Beyond) from Michael Sahota

You can also see an earlier version of the slides and a video summary.

The post People Over Process appeared first on Catalyst - Agile & Culture.

Related posts:

  1. People over Process – Win with People Success comes from Valuing People When we simplify the Agile...
  2. Whole Agile – Unleash People & Organizations Agile is incomplete. We need to augment it to create...
  3. Manager’s Journey: Awareness, Epiphany, & Choice Delighted to share the slides from my and Soo Kim’s...


Categories: Blogs

The Estimates in #NoEstimates

lizkeogh.com - Elizabeth Keogh - Fri, 05/01/2015 - 16:34

A couple of weeks ago, I tweeted a paraphrase of something that David J. Anderson said at the London Lean Kanban Day: “Probabilistic forecasting will outperform estimation every time”. I added the conference hashtag, and, perhaps most controversially, the #NoEstimates one.

The conversation blew up, as conversations on Twitter are wont to do, with a number of people, perhaps better schooled in mathematics than I am, claiming that the tweet was ridiculous and meaningless. “Forecasting is a type of estimation!” they said. “You’re saying that estimation is better than estimation!”

That might be true in mathematics. Is it true in ordinary, everyday English? Apparently, so various arguments go, the way we’re using that #NoEstimates hashtag is confusing to newcomers and making people think we don’t do any estimation at all!

So I wanted to look at what we actually mean by “estimate”, when we’re using it in this context, and compare it to the “probabilistic forecasting” of David’s talk.

Defining “Estimate” in English

While it might be true that a probabilistic forecast is a type of estimate in maths and statistics, the commonly used English definitions are very different. Here’s what Wikipedia says about estimation:

Estimation (or estimating) is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.

And here’s what it says about probabilistic forecasting:

Probabilistic forecasting summarises what is known, or opinions about, future events. In contrast to single-valued forecasts … probabilistic forecasts assign a probability to each of a number of different outcomes, and the complete set of probabilities represents a probability forecast.

So an estimate is usually a single value, and a probabilistic forecast is a range.

Another way of phrasing that tweet might have been, “Providing a range of outcomes along with the likelihood of those outcomes will lead to better decision-making than providing a single value, every time.”

And that might have been enough to justify David’s assertion on its own… but it gets worse.

Defining “Estimate” in Agile Software Development

In the context of Software Development, estimation has all kinds of horrible connotations. It turns out that Wikipedia has a page on Software Development Estimation too! And here’s what it says:

Software development effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.

Again, we’re looking at a single value; but do notice the “high uncertainty” there. Here’s what the page says later on:

Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort.

The Lean / Kanban movement has emerged (and possibly diverged) from the Agile movement, in which this strategy really is dominant, mostly thanks to Scrum and Extreme Programming. Both of these suggest the use of story points and velocity to create the estimates. The idea of this is that you can then use previous data to provide a forecast; but again, that forecast is largely based on a single value. It isn’t probabilistic.

Then, too, the “expertise” of the various people performing the estimates can often be questionable. Scrum suggests that the whole team should estimate, while XP suggests that developers sign up to do the tasks, then estimate their own. XP, at least, provides some guidance for keeping the cost of change low, meaning that expertise remains relevant and velocity can be approximated from the velocity of previous sprints. I’d love to say that most Scrum teams are doing XP’s engineering practices for this reason, but a lot of them have some way to go.

I have a rough and ready scale that I use for estimating uncertainty, that helps me work out whether an estimate is even likely to be made based on expertise.  I use it to help me make decisions about whether to plan at all, or whether to give something a go and create a prototype or spike. Sometimes a whole project can be based on one small idea or piece of work that’s completely new and unproven, the effort of which can’t even be estimated using expertise (because there isn’t any), let alone historical metrics.

Even when we have expertise, the tendency is for experts to remember the mode, rather than the mean or median value. Since we often make discoveries that slow us down but rarely make discoveries which speed us up, we are almost inevitably over-optimistic. Our expertise is not merely inaccurate; it’s biased and therefore misleading. Decisions made on the basis of expert estimates have a horrible tendency to be wrong. Fortunately everyone knows this, so they include buffers. Unfortunately, work tends to expand to fill the time available… but at least that makes the estimates more accurate, right?

One of the people involved in the Twitter conversation suggested we should be using the word “guess” rather than “estimate”. And indeed, that might be mathematically more precise, and indeed, if we called them that, people might be looking for different ways to inform the decisions we need to make.

But they don’t. They’re called “estimates” in Scrum, in XP, and by just about everyone in Agile software development.

But it gets worse.

Defining “Estimate” in the context of #NoEstimates

Woody Zuill found this very early tweet from Aslak Hellesøy using the #NoEstimates hashtag, possibly the first:

@obie at #speakerconf: “Velocity is important for budgeting”. Disagree. Measuring cycle time is a richer metric. #kanban #noestimates

So the movement started with this concept of “estimate” as the familiar term from Scrum and XP. Twitter being what it is, it’s impossible to explain all the context of a concept in 140 characters, so a certain level of familiarity with the ideas around that tag is assumed. I would hope that newcomers to a movement would approach it with curiosity, and hopefully this post will make that easier.

Woody confessed to being one of the early proponents of the hashtag in the context of software development. In his post on the #NoEstimates hashtag, he defines it as:

#NoEstimates is a hashtag for the topic of exploring alternatives to estimates [of time, effort, cost] for making decisions in software development.  That is, ways to make decisions with “No Estimates”.

And later:

It’s important to ask ourselves questions such as: Do we really need estimates? Are they really that important?  Are there options? Are there other ways to do things? Are there BETTER ways to do thing? (sic)

Woody, and Neil Killick who is another proponent, both question the need for estimates in many of the decisions made in a lot of projects.

I can remember getting the Guardian’s galleries ready in time for the Oscars. Why on earth were we estimating how long things would take? That was time much better spent in retrospect on getting as many of the features complete as we could. Nobody was going to move the Oscars for us, and the safety buffer we’d decided on to make sure that everything was fully tested wasn’t changing in a hurry, either. And yet, there we were, mindlessly putting points on cards. We got enough features out in time, of course, as well as some fun extras… but I wonder if the Guardian, now far more advanced in their ability to deliver than they were in my day, still spend as much time in those meetings as we used to.

I can remember asking one project manager at a different client, “These are estimates, right? Not promises,” and getting the response, “Don’t let the business hear you say that!” The reaction to failing to deliver something to the agreed estimates was to simply get the developers to work overtime, and the reaction to that was, of course, to pad the estimates. There are a lot of posts around on the perils of estimation and estimation anti-patterns.

Even when the estimates were made in terms of time, rather than story points, I can remember decisions being unchanged in the face of the “guesses”. There was too much inertia. If that’s going to be the case, I’d rather spend my time getting work done instead of worrying about the oxymoron of “accurate estimates”.

That’s my rant finished. Woody and Neil have many more examples of decisions that are often best made with alternatives to time estimation, including much kinder, less Machiavellian ones such as trade-off and prioritization.

In that post above, Neil talks about “using empiricism over guesswork”. He regularly refers to “estimates (guesses)”, calling out the fact that we do use that terminology loosely. That’s English for you; we don’t have an authoritative body which keeps control of definitions, so meanings change over time. For instance, the word “nice” used to mean “precise”, and before that it meant “silly”. It’s almost as if we’ve come full circle.

Defining “Definition”

Wikipedia has a page on definition itself, which points out that definitions in mathematics are different to the way I’ve used that term here:

In mathematics, a definition is used to give a precise meaning to a new term, instead of describing a pre-existing term.

I imagine this refers to “define y to be x + 2,” or similar, but just in case it’s not clear already: the #NoEstimates movement is not using the mathematical definition of “estimate”. (In fact, I’m pretty sure it’s not using the mathematical definition of “no”, either.)

We’re just trying to describe some terms, and the way they’re used, and point people at alternatives and better ways of doing things.

Defining Probabilistic Forecasting

I could describe the term, but sometimes, descriptions are better served with examples, and Troy Magennis has done a far better job of this than I ever have. If you haven’t seen his work, this is a really good starting point. In a nutshell, it says, “Use data,” and, “You don’t need very much data.”
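
To make that concrete, here is a minimal sketch (in TypeScript) of the kind of probabilistic forecast Troy describes; the throughput history, backlog size and function name are invented for illustration and are not taken from his tooling. It resamples a handful of historical weekly throughput numbers many times and reports a range of outcomes with likelihoods, rather than a single number:

// Monte Carlo forecast: repeatedly simulate how many weeks a backlog takes
// by sampling from historical weekly throughput (all numbers are invented).
function forecastWeeks(history: number[], backlog: number, runs: number = 10000): number[] {
  const results: number[] = [];
  for (let i = 0; i < runs; i++) {
    let remaining = backlog;
    let weeks = 0;
    while (remaining > 0) {
      // draw a random past week's throughput and subtract it from the backlog
      remaining -= history[Math.floor(Math.random() * history.length)];
      weeks++;
    }
    results.push(weeks);
  }
  return results.sort((a, b) => a - b);
}

// 40 items left, with five weeks of (invented) historical throughput data
const simulated = forecastWeeks([3, 5, 2, 6, 4], 40);
console.log("50% likely within", simulated[Math.floor(simulated.length * 0.5)], "weeks");
console.log("85% likely within", simulated[Math.floor(simulated.length * 0.85)], "weeks");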

I imagine that when David’s talk is released, that’d be a pretty good thing to watch, too.


Categories: Blogs

Accelerating Decision-Making: A Story of Agility

Bruno Collet - Agility and Governance - Fri, 05/01/2015 - 15:33
Through a story inspired by real experience, discover how a shift in culture and a system based on triage help an organization accelerate decision-making and ultimately become more Agile.



Categories: Blogs

One-on-Ones on a Pair Programming Team

One-on-ones are a well-known management strategy. They help reduce communication misses, keep everyone on course and provide an easy platform for feedback. I’ve done them throughout my management career, but the past few years I had a few starts and stops with them.

My scenario over the past few years has been working as an engineer and often a tech lead on small teams where we paired as much as 80%. Sitting side by side and rotating pairs often led me to experiment with skipping out on one on ones. If you’re having regular conversations over the code, do one-on-ones serve enough of a purpose?

I decided they were important enough to restart after my first year on the new job. Part of my reluctance was the need to come up to speed on a number of technologies. I skimped on spending time for tactical management tasks. I relished staying deep in the code and design, but I should still have carved out the time for one-on-ones.

When I transitioned to leading a new team about 6 months ago I again let the one-on-ones slip off my radar. I told myself I would restart them after I felt out the new team. Turned out I got lazy and took 6 months to restart them. Even on teams that pair and sit in close proximity, some conversations never come up and it’s rare to discuss items like career aspirations when the whole team is housed at one long table.

I have made a single adjustment from my old style where I ran 30 minute one-on-ones once a week. For my current team:

  • Scheduled for 30 minutes.
  • Most of the agenda is up to the employee, and sometimes we discuss future career type goals.
  • The last 5-10 minutes are for me, news I need to pass on or lightweight feedback.
  • Generally the one-on-ones average about 15 minutes, but they’re still scheduled for the full 30.
  • I rotate through all of them one after the other so with 3 we’re often done after about an hour.
  • If we miss a week for some reason it’s not a big deal, since these are weekly.
Categories: Blogs

Time for a Demo?

When I first learned Scrum, the idea was to have 30-day (4 week) iterations with a demo at the end of each iteration. Today I see teams that have iterations from 1 to 4 weeks, or in the case of Kanban, no iterations at all. So the question is, how often should there be a demo?
I’m a believer in getting sign-off on a user story once it's done. This usually comes from the person who wrote it, who may be a proxy for the product owner. So when we get to the end of the iteration, all the stories that are completed have already been reviewed by someone on the business side, but not necessarily the product owner.
So the question is, if it’s a short iteration, does it make sense to have a more formal demo of the stories? I think in some cases the answer is “no.”
On one of my projects where we were using Kanban, we planned the demos every 4 weeks. This allowed us to finish a set of stories that were part of a feature, so the demo was more complete. I think this approach works with short iterations as well, only conducting the demo after every second or third iteration, depending on how long your iterations are and how complex your features are. 
There's also another consideration. I work with clients building applications that will get rolled out to the organization. On these projects, we need to consider organizational change management. The demos serve as a way to help introduce the new application to the end users. So from this perspective, there may be a large number of attendees at the demos. This is another argument for less frequent demos.
So while a demo can be held at the end of every iteration, even for short iterations, there are some reasons to hold them less frequently. 
Categories: Blogs

Flow-driven Product Development

Learn how product development organizations are using a lean, flow-driven approach to achieve predictable releases and enable continuous improvement. About This Webinar Yuval Yeret, CTO of AgileSparks and a leading Kanban practitioner, shares his experiences helping enterprise product development teams apply Kanban to become more agile. Highlights of what you’ll learn: – Why product development teams are […]

The post Flow-driven Product Development appeared first on Blog | LeanKit.

Categories: Companies

Processing Unordered Array Items In Order, Using Brute Force

Derick Bailey - new ThoughtStream - Thu, 04/30/2015 - 21:40

I recently had the opportunity to interview Aria Stewart – a developer at PayPal. The interview was for my RabbitMQ For Developers package (coming soon!) and centered around designing for failure. At one point in the conversation, we were talking about the problem of ordered messages and vector clocks, and I mentioned a problem I was having in my current client system. The discussion continued for a moment after the recording ended. In that time, she talked about solving the “ordering” problem of messages by rejecting a message that was out of order and sending it to the back of the queue. The idea had me intrigued, and I wanted to see if what she was saying would pan out for me. Having done a few quick tests, I think it will. So I wanted to share the most basic of the demos I put together to see the idea in action.

Re-Ordering The Out Of Order

To start with, you need to understand that some messages sent across a message queue or otherwise processed asynchronously will arrive out of order. It happens. Sometimes it doesn’t matter, but other times this can be catastrophic – as it is in my case. 

One of the techniques that can be used to combat this is a simple sequence number on the messages. The sequence number can be used to determine if the message is out of order or not. As an example, some items may end up in your “queue” (array in this case) in an order like this (assume these are sequence id’s on an object):
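
1, 2, 4, 3, 5, 7, 6  (illustrative sequence ids; any out-of-order arrival shows the same problem)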

You only need to track the most recently processed item and then compare that to the current message you’re looking at. If the current message is 1 number higher than the previous one, process it. If it is not one number higher than the previous one, throw it to the back of the queue and move on to the next message. Repeat until everything has been processed in order.

Implementing The Brute Force Ordering

It is fairly trivial to implement this idea with an array as the example. You only need one function and a “previous” value to use, and you can use recursion to do the whole thing.
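
Here is a minimal sketch of that idea in TypeScript, assuming each message carries a numeric sequence property; the Message type, the sample input and the log text are illustrative stand-ins, not the original gist:

interface Message { sequence: number; body: string; }

function processItems(items: Message[], previous: number = 0): void {
  // exit condition for the recursion: nothing left to process
  if (items.length === 0) { return; }

  // get the first value out of the list
  const item = items.shift()!;

  if (item.sequence === previous + 1) {
    // in order: handle it and remember it as the most recently processed item
    console.log("Processing the item", item.sequence);
    previous = item.sequence;
  } else {
    // out of order: push it to the back of the queue to be checked again later
    console.log("Out of order, re-queueing", item.sequence);
    items.push(item);
  }

  // recurse to continue processing the rest of the queue
  processItems(items, previous);
}

// example usage with an out-of-order queue (note: destructive to the array)
processItems([1, 2, 4, 3, 5, 7, 6].map(n => ({ sequence: n, body: "message " + n })));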

In this example, the first line of the processItems function is the exit condition for the recursion. If there are no items left, exit. The next line gets the first value out of the list. The code then checks to see if this is the next item it needs to process. If it is, it handles it and sets the "previous" item to the current one. If the items are out of order, the item is pushed to the end of the array so the next item can be checked. Recursion is used to re-enter the loop and continue processing the entire array.

(Please note that this operation is destructive to the array. If you need to keep your original array intact, make a copy of it before processing it like this.)

The result of this code running looks like this:

You can see each item being checked and whether or not it was processed. If you only look at the “Processing the item” messages, you will see that they occur in the correct order. The other items – the ones that are out of order – are shoved to the end of the queue so that they can be dealt with later. 

Brute Force Is Not Performant

This solution is perfectly acceptable in a situation like mine. But you should know that I call this a “brute force” solution for a reason. There is no real intelligence in here. It is just “throw it back”, forcefully – no regard for any other context. This is not optimal, nor is it particularly “performant”. In fact, you can see in the output that the out-of-order messages often get shoved to the back of the line multiple times. 

It would be a better solution to prevent the items from getting out of order in the first place. It would be better still to cache the messages somewhere while they wait to be ordered. But these options are not always feasible. 

If you find yourself working with a queue, though, and having to deal with ordered messages, this is at least one option for dealing with things being out of order.

Want To Know More?

The interview I mentioned above is a part of the RabbitMQ For Developers package that I am producing. This is a series of screencasts, interviews, an eBook and more that will show developers how to get up and running with RabbitMQ, what can be done with it, when and why you would want to use it, and how to work with it in NodeJS. If you’d like to get the rest of the interview with Aria and the other interviews that I am producing, be sure to join my mailing list. I’ll be announcing the package availability there first. 

Categories: Blogs
