
Feed aggregator

The importance of knowing when you are wrong as an Agile and #NoEstimates principle

Software Development Today - Vasco Duarte - Tue, 09/23/2014 - 05:00

You started the project. You spent hours, no: days! estimating the project. The project starts and your confidence in its success is high.

Everything goes well at the start, but at some point you find the project is late. What happened? How can you be wrong about estimates?

This story is very common in software projects. So common, in fact, that I bet you have lived through it many times in your life. I know I have!

Let’s get over it. We’re always wrong about estimation. Sometimes more, sometimes less, and very, very rarely are we wrong in a way that makes us happy: we overestimated something and can deliver the project ahead of (the inflated?) schedule.

We’re always wrong about estimation.

Being wrong about estimates is the status quo. Get over it. Now let’s take advantage of being wrong! You can save the project by being wrong. Here’s why...

The art of being wrong about software estimates

Knowing you are wrong about your estimates is not difficult after the fact, when you compare estimates to actuals. The difficult part is to make a prediction in a way that can be tested regularly, and very early on - when you still have time to change the project.

Software project estimates, as they are usually done, delay the feedback on “on time” performance to a point in time when there’s very little we can do about it. Goldratt grasped this problem and made a radical suggestion: cut all estimates in half, and use the rest of the time as a project buffer. Pretty crazy, huh? Well, it worked, because it forced projects to face their failures much earlier than they otherwise would. Failing to meet a deadline early in the life-cycle of the project gave them a very powerful tool in project management: time to react!

The #NoEstimates approach to being wrong...and learning from it

In this video I briefly explain how I make predictions about a possible release date for the project based on available data. Once I make a release date prediction, I validate it as soon as possible, and typically every week. This approach allows me to learn early enough when I’m wrong and then adjust the project as needed.

We’re always wrong, the important thing is to find out how wrong, as early as possible

After each delivery (whether it is a feature or a timebox like a sprint), I update my prediction for the release date of the project based on the lead time or throughput rate so far. After updating the release date projection, I can see whether it has changed enough to require a reaction by the project team. I can make this update to the project schedule without gathering the whole team (or "the chosen ones") into a room for an ungodly long estimation meeting.
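The update described above can be sketched in a few lines of JavaScript. This is a hypothetical illustration of mine, not Duarte's actual tooling; the function name, inputs, and the simple averaging of throughput are all my own assumptions.

```javascript
// Hypothetical sketch: project how many weeks remain, based on throughput so far.
// Names and the averaging approach are illustrative, not from the original post.
function projectWeeksRemaining(itemsRemaining, itemsDonePerWeek) {
  // Average throughput over the observed weeks.
  var totalDone = itemsDonePerWeek.reduce(function (a, b) { return a + b; }, 0);
  var throughput = totalDone / itemsDonePerWeek.length;

  // Weeks needed at the observed rate, rounded up to whole weeks.
  return Math.ceil(itemsRemaining / throughput);
}

// 60 items left; 4-6 items delivered in each of the last four weeks:
var weeks = projectWeeksRemaining(60, [4, 6, 5, 5]); // 60 / 5 = 12 weeks
```

Re-running a projection like this after every delivery is cheap, which is the point: no estimation meeting is needed to see whether the projected date has drifted outside the original interval.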

If the date has not changed outside the original interval, or if the delivery rate is stable (see the video), then I don’t need to react.

When the release date projection changes to a time outside the original interval, or the throughput rate has become unstable (did you see the video?), then you need to react. At first to investigate the situation, and later to adjust the parameters in your project if needed.


The #NoEstimates approach I advocate will allow you to know when the project has changed enough to warrant a reaction. I make a prediction, and (at least) every week I review that prediction and take action.

Estimates, done the traditional way, also give you this information, but too late. This happens because of the big-batch thinking that reliance on estimation enables (larger work items are acceptable if you estimate them), and because of the delayed dependency integration it enables (estimated projects typically allow dependent teams to work separately because of the agreed plan).

The #NoEstimates approach I advocate has one goal: to reduce the feedback cycle. Short feedback cycles will allow you to recognise early enough how wrong you were about your predictions, so you can make the necessary adjustments!

Picture credit: John Hammink, follow him on twitter
Categories: Blogs

Why Agile Game Development?

Agile Game Development - Mon, 09/22/2014 - 22:23
Agile is a set of values for building products, like games, more effectively: balancing planning with reality, delivering features and quality iteratively, and focusing on the emergent fun of the game by adding features and mechanics in a player-value-prioritized way.  It allows a more responsive and predictable approach to development by balancing planning, art, design, technology and testing within small iterations, rather than consigning much of the work in these areas to phases of development (i.e. big planning documents at the start and a flurry of testing and bug fixing at the end).
Scrum is an agile framework with a further focus on the people making a game.  Scrum sets up a framework of small cross-discipline teams that take control of the work done in small iterations (2-3 weeks) to achieve a functional goal.  There are a number of reasons for doing this:
  • Small teams (5-9 people) communicate more effectively than larger groups.
  • Cross-discipline teams can respond to the needs of an emerging game (quality and functionality) more quickly.
  • Creative developers are more effective and motivated when they have a clear vision of a goal, can take ownership of their work and implement changes day-to-day without being micromanaged.
  • Scrum iterations level out workload to be consistent and sustainable throughout development.  They do that by balancing design, development, debugging and tuning so that the game doesn’t build an overwhelming debt of undone work that needs to be paid back, with interest, later in a project through death marches and compromises in quality.
Agile Game Development is a set of collected experiences of what works and what doesn’t work with iterative approaches and practices (e.g. Agile, Lean, Scrum, TDD, Kanban) as applied over a decade of game development across all genres and platforms.  Although it may not follow all practices of any single framework, it does adhere to the agile mindset and values.
Categories: Blogs

Enterprise Agility and CollabNet

Danube - Mon, 09/22/2014 - 19:14

I recently had the good fortune to get on the road for a few weeks and take a deep look into both the state of software delivery and the role CollabNet plays in the industry.  From recent meetings with our growing field sales force, to visits with global customers and partners in the U.S. and Asia, I am excited to see the alignment of our company strategy with actual market dynamics.  As a company that has always been agnostic to any tool or process, the reality of today’s software development environment is that it requires both flexibility and structure across a range of tools, markets, processes, and even regional cultural differences in IT maturity. With the proliferation of open source tools, agile scaling and the rise of DevOps as the natural progression to promote collaboration, we’re seeing strategic approaches that are aimed at meeting both business needs and competitive pressures in various markets around the globe.

Trips like the one I just took give me time to think, and are a great reminder to me why CollabNet is so incredibly energized about the future.  The reasons I feel this way are many.

There’s the investment and support of our new equity partner, Vector Capital. There’s the new energy of a growing development, support and sales force. There’s the hard work and commitment from a growing executive and global team that is truly inspiring. However, it’s the external forces and dynamics – related to the role of software and how it’s delivered within large enterprise organizations – that have me most excited.

What we see is a huge mash-up of dynamics and forcing functions within the software industry that is leading to an end-goal that encompasses Agile, Continuous Delivery and DevOps combined.  These dynamics extend across IT infrastructure, external clouds  and interactions with embedded software systems. We all talk about getting software out faster, and with higher quality. Good stuff for sure. But today’s organizations – from SMBs and startups to the largest corporations, and including local, state and federal government agencies and education institutions – are now taking a big-picture view of their business value chains and their ability to respond faster to their market pressures.

So what is CollabNet doing to provide solutions that help enterprises gain this “big picture view”?  Said simply, we are sharply focused on advancing the notion of Enterprise Agility.

What is Enterprise Agility? I can tell you it’s not just another marketing buzzword being thrown around. It’s what CollabNet has been committed to for the past 15 years, since we created the Subversion SCM open source project and first put commercial software development into the cloud. It’s also what Agile started – and what spurred the introduction of Continuous Delivery and DevOps. It’s the notion of taking a strategic view and approach to orchestrating ALL of the functions and processes that go into building and deploying software. Enterprise Agility is about being adaptive to internal and external dynamics, to the ever-changing tides of how people use software and the expectations they have, where they work, where they live and the federated tools they use, and finally to enabling change and responsiveness.

Today’s software-driven organizations need to have agility  – not be fixated on agile development. They need to offer developers freedom of choice when it comes to tools and processes, yet have the means to track activities with complete governance and leverage throughout the entire software delivery lifecycle. Said another way, Enterprise Agility is about enabling the combination of agile development and governance, not agile development at the expense of governance.  That’s precisely what we are committed to doing.

What’s your take on “agile versus governance”  or “Enterprise Agility”?   I’d like to get your input.



Categories: Companies

Xebia KnowledgeCast Episode 4: Scrum Day Europe 2013, OpenSpace Knowledge Exchange, and Fun With Stickies!

Xebia Blog - Mon, 09/22/2014 - 17:44

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this fourth episode, we share some impressions of Scrum Day Europe 2013 and Xebia's OpenSpace Knowledge Exchange. And of course, Serge Beaumont will have Fun With Stickies! First, we interview Frank Bakker and Evelien Roos at Scrum Day Europe 2013. Then, Adriaan de Jonge and Jeroen Leenarts talk about continuous delivery and iOS development at the OpenSpace XKE. And in between, Serge Beaumont has Fun With Stickies!

Frank Bakker and Evelien Roos give their first impressions of the Keynotes at Scrum Day Europe 2013. Yes, that was last year, I know. New, more current interviews are coming soon. In fact, this is the last episode in which I use interviews that were recorded last year.

In this episode's Fun With Stickies, Serge Beaumont talks about hypothesis stories. Using those ensures you keep your Agile really agile. A very relevant topic, in my opinion, and it jells nicely with my missing line of the Agile Manifesto: Experimentation over implementation!

Adriaan de Jonge explains how automation in general, and test automation in particular, is useful for continuous delivery. He warns we should focus on the process and customer interaction, not the tool(s). That's right before I can't help myself and ask him which tool to use.

Jeroen Leenarts talks about iOS development. Listening to the interview, which was recorded a year ago, it's amazing to realize that, with the exception of iOS 8 having come out in the meantime, all of Jeroen's comments are as relevant today as they were last year. How's that for a world-class developer!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the show notes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!


Categories: Companies

Design Thinking for Product Owners: The Design Thinking Process

Scrum 4 You - Mon, 09/22/2014 - 07:45

The three important components of Design Thinking are the team, the flexible space, and the process. The Design Thinking process may come last in this prioritized list, but it is probably the best-known element of Design Thinking. Its structure is easy to visualize, and most people like clear sequences because they promise certainty.

Open and Unknown

But this certainty is deceptive: the certainty that Product Owners and ScrumMasters know from the Scrum flow is only partly found in Design Thinking. The Design Thinking process is more open, and its results are unknown. Rather, the Design Thinking process is a loose sequence of phases, each with a whole catalog of techniques from different disciplines which, in combination, are meant to increase the probability of success when developing solutions in complex environments. Which of these “suggestions” the team uses, and how long it stays in each phase, is up to the team itself. Guidance by an experienced Design Thinking coach is therefore useful, so the team does not get lost and stays focused. The Product Owner occupies a special role: on the one hand, he benefits particularly from Design Thinking, since it helps to develop product ideas, test them, and derive the product vision. On the other hand, he is a good repository of knowledge for carrying the insights gained from Design Thinking into implementation with Scrum.

First, switch the solution machine off …

There are variants of the Design Thinking process with different levels of detail. Common to all of them is a first half in which the solution machine in your head is switched off, and the focus lies on the user's perspective: on understanding, observing, and analyzing the user's needs.

… and then switch the solution machine back on.

In the second half, the team thinks about how the identified problems and needs can be solved and served. After idea generation, the critical functions of the imagined solutions are isolated and tested with prototypes on real users. With the feedback generated this way, the team can re-enter any earlier phase of the process, either to optimize the solution close to the user, or to discard it completely and generate other ideas with the new knowledge.

The Detailed Design Thinking Process

We will look at the Design Thinking process as it is taught at the HPI School of Design Thinking. This variant is divided into six phases and is easy to understand:

Design Thinking & Change Management Flipcharts

Preparing the Design Challenge

Before the process starts, the team members must be found (see part 2: the Design Thinking team), the space must be provided (see part 3: the Design Thinking space), and the design challenge (the task to be solved, or innovation assignment) must be formulated. The challenge should give a direction without being restrictive. For example, the phrasing “Find new ways to sell ice cream online!” is very narrow, since it may well be that online sales and the user's needs do not match. In that case, only an incremental improvement would be produced, or a courageous team would reformulate the task in order to create a real innovation.
More open would be: “Find new ways to distribute ice cream!” Here, the Design Thinking team can also examine the possibility of online sales, but is not focused on it from the start.

Let's phrase the innovation assignment even more positively and more need-oriented: “How can we improve the enjoyment of ice cream at home?” Now we have defined the direction and the frame: it is about enjoyment and the need to indulge. The phrase “at home” means that the ice cream somehow has to get there, and at the same time that we must examine the private situations in which ice cream is enjoyed. We can now begin to create an experience.

1) Understand

In the first phase, the team members share what they know. By sharing the existing knowledge in the group, the team synchronizes itself, questions about the design challenge can be asked, and all participants become “instant experts”. The team develops a feel for the knowledge gaps it wants to fill in the next step, through research and contact with real users.

2) Empathy (Observe, Explore)

The second phase is about understanding the user and building empathy. The three best-known means are open questions (interviews), observation, and putting yourself in the user's place (experience).

3) Synthesis (Define the Point of View)

After the analysis, the masses of information are bundled, sorted, condensed, evaluated, and prioritized. Often the teams have split into smaller groups during the previous phase. Now the information is shared through storytelling and worked on by the whole team. Information is drawn on sticky notes, as visually as possible with little text, supplemented with photos and collected artifacts, and processed with different tools: e.g. personas, timelines, user journeys, 2×2 matrices, Venn diagrams, and more.

All of this is condensed into the point of view (POV): it consists of the definition of the user, his needs, and notable insights from the research. From this, different questions are derived following the pattern: “How might we help <user>… to achieve <goal>… while taking <need> into account?”

4) Ideation

With the questions from the previous phase, the solution machine in the brain may finally be switched back on. Usually classic brainstorming techniques are used. What matters is producing a lot, quickly: criticism is forbidden, crazy ideas are welcome, the team builds on each other's ideas, and of course everyone works standing up. When idea generation stalls, prompts help, such as: “What would Superman do?” or “What would we have to do to guarantee failure?”
After this phase, ideas are clustered by extremes: Which ideas are most helpful for the user? Which ideas can be implemented fastest? Which ideas are the most radical? Which ideas do we simply like? One or more ideas are selected from this idea pool for quick tests.

5) Prototyping (Thinking with Your Hands)

Ideas are not discussed, because it is not the team that decides what is promising, but the user! For that, the ideas must become tangible. Prototypes show the whole idea, or individual critical functions of it. Small LEGO worlds are built, posters designed, role plays performed, click-through mockups produced, objects created, films shot… there are hardly any limits to the imagination when it comes to making innovative products and services tangible. All that matters is that little is discussed and things simply get built, and that the prototypes look improvised and unfinished, so that the next step yields honest feedback about functionality rather than aesthetics. Besides, a team will find it easier to drop a prototype that took little effort.

6) Test

While building the prototypes, the team has already brought in the user's perspective and thus performed first tests. Now, however, comes the next contact with the real user. The user should try out and understand as much as possible, but please, no long-winded explanations! Just observe and ask; every critical look and every “aha” helps to improve the idea.


With the feedback from the tests, the Design Thinking team re-enters a suitable earlier phase. If the idea is discarded completely, new research with the knowledge gained, or another idea from the idea pool, suggests itself. If a prototype was successful, a new round of ideation building on it can turn it into an even better-fitting solution.
Often the Design Thinking process is run through two to four times in this way, until a desirable, feasible, and economically viable solution emerges.


The knowledge is then handed over. Products have to be manufactured, software programmed, and services and processes implemented. And that is no longer the job of Design Thinking.

Related posts:

  1. Design Thinking für Product Owner – Teil 1: Was ist eigentlich Design Thinking?
  2. Produktfindung mit Design Thinking
  3. Design Thinking für Product Owner – Teil 3: Der Design-Thinking-Raum

Categories: Blogs

From Bankruptcy to Abundance

Agile Tools - Mon, 09/22/2014 - 04:58


I recently read Rosamund and Benjamin Zander’s book The Art of Possibility: Transforming Professional and Personal Life and I strongly recommend it. To me it was a book full of stories about mindset management, primarily set in the wonderful context of music. Much of the book describes techniques for moving from a mindset of bankruptcy to a mindset of abundance.

That’s something that I can relate to in my current role. There are times when I find myself trapped in that mindset of bankruptcy. The narrative in my head goes something like this: None of the teams I work with is doing what I hoped they would. We’re not agile enough. We’re not innovative enough. Our culture is all wrong. We can’t get there from here. We suck.

That’s the mindset of bankruptcy talking. There’s never enough. We’re never good enough. It’s a pretty bleak place. I know I’m not alone in living there from time to time. I work with people who come to me with this narrative all the time. What do I tell them?

Well, first of all, I have to check in with myself and see where I’m at. If I’m in the same place as they are, then this conversation isn’t likely to go well. The best I can usually do in that case is to commiserate with them.

But there are times when you are in the place of abundance. There is another perspective that allows a much different interpretation of the same set of circumstances. I find that talking with folks from a variety of different backgrounds helps. They’re the ones who will look at me and say, “Wow! You guys are awesome! I hope we can get there one day!” At first my reaction is to deny what they are saying. We aren’t that good. You don’t really get it. But then sooner or later it dawns on me that although we have many things to improve on, we also have managed to achieve amazing things along the way. Things that we now take for granted.

The difference between those two mindsets is that one has room for new opportunity and the other leaves little room for any opportunity at all. I loved their expression when something fails, “How fascinating!” Using a phrase like that suggests curiosity and an openness to exploration. I love it.

I don’t know if I have the kind of temperament that would enable me to live in this mindset full time. But I sure would like to visit it more often and maybe even share the trip with a friend.

Filed under: Agile, Teams Tagged: abundance, Agile, attitude, bankruptcy, management, mindset, perspective
Categories: Blogs

Calculating Standard Deviation with Array.reduce, in JavaScript

Derick Bailey - new ThoughtStream - Sun, 09/21/2014 - 23:26

I’ve built a handful of reports for podcasters on SignalLeaf to see how large their audience is, how the audience is listening, etc. One report that I have been wanting to build for a while, though, needs to show the standard deviation for episode listens. That is, I want to show the listens for each episode and then show the average range in which those listens are falling. The resulting chart would look something like this:

Standard deviation

In order to show this, I need to be able to calculate the standard deviation based on the number of listens for each episode shown in the graph. And while I could reach for an existing JavaScript statistics library, like one of the many listed here, I decided it would be more fun to learn how to do this myself.

Standard Deviation

According to Wikipedia, the most basic example of standard deviation is

found by taking the square root of the average of the squared differences of the values from their average value

The article goes on to show an example of how to calculate the standard deviation for a given data set:

  1. get the average value of the data set
  2. calculate the difference between each value in the set and the average
  3. then square the result of each difference
  4. average the squared differences
  5. get the square root of the average squared difference

There’s a lot more detail about this being a “population” standard deviation, the different forms of standard deviation, etc. But for me, this is good enough. I don’t really need any more detail than this. I have everything I need to get into the code and calculate some stuff.

Standard Deviation, In JavaScript

The first thing to do is get the average of a given data set. For this, I’ll use a combination of Array.prototype.reduce and Array.length.
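The post's embedded code blocks are missing from this copy, so here is a minimal sketch of this step; the variable names are my own.

```javascript
// Sum the values with reduce, then divide by the length to get the average.
var data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 25];

var sum = data.reduce(function (acc, value) {
  // Accumulate a running total of all items in the array.
  return acc + value;
}, 0);

var avg = sum / data.length; // 70 / 10 = 7
```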

The reduce method allows me to iterate over the items in an array and produce a single result. In this case, I am producing a sum of all the items in the array… that is, I am reducing the values of the array down into the sum of those values.

The next line divides the sum by the number of items in the array… the length of the array… to give me the average.

Now I need to calculate the difference between each of the values in the array and the average value. To do that, I’m going to use the map function.

The map function allows me to iterate over the items in an array, and return a new array of values that are calculated from the original array of values. The resulting array will be the same length as the original array, but it will contain the values that were calculated and returned from the callback of the .map call, instead of the original values. In this case, I am calculating the difference between the original values and the average of those values.

I also need to square the differences of each value as part of this process. Rather than running a second map over the differences, though, I will modify the previous map to return the square of the difference, instead of just the difference. 
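A sketch of that map, again with the original code missing from this copy, might look like this (names are mine):

```javascript
// The squared difference between each value and the average, via map.
var data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 25];
var avg = data.reduce(function (a, b) { return a + b; }, 0) / data.length;

var squareDiffs = data.map(function (value) {
  var diff = value - avg; // distance from the average
  return diff * diff;     // squared, so the sign doesn't matter
});
```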

With the squared differences in hand, I need to get the average of these values. Again, I need to sum and divide the array – the same way I did it before. Only this time, I am creating the average of the squared differences instead of the average of the original values. Rather than repeat the code from above, I’ll extract that in to an “average” function:
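The extracted function, as I would sketch it (the original listing is missing here):

```javascript
// A reusable average: sum with reduce, then divide by the array length.
function average(data) {
  var sum = data.reduce(function (acc, value) {
    return acc + value;
  }, 0);
  return sum / data.length;
}
```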

And now I can run my average function to get the average squared difference:
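For example (my sketch; the squared differences are the ones computed in the previous step):

```javascript
function average(data) {
  return data.reduce(function (a, b) { return a + b; }, 0) / data.length;
}

// Squared differences for [1..9, 25], from the previous step.
var squareDiffs = [36, 25, 16, 9, 4, 1, 0, 1, 4, 324];

var avgSquareDiff = average(squareDiffs); // 420 / 10 = 42
```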

Lastly, I need to calculate the square root of the average squared difference. This final result is the standard deviation for the data set from which I am calculating:
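That last step is a single call (continuing my sketch from the previous steps):

```javascript
// The square root of the average squared difference is the standard deviation.
var avgSquareDiff = 42; // computed in the previous step
var stdDev = Math.sqrt(avgSquareDiff); // ≈ 6.48
```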

The Math.sqrt function is the “square root” function built in to the JavaScript Math object. It returns the square root of the value that is passed into the method call.

The Final Code

Putting it all together, the resulting code looks like this in one complete listing:
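The complete listing itself is missing from this copy, so here is my reconstruction from the steps described above; the `standardDeviation` name follows the usage mentioned below.

```javascript
// Complete listing: standard deviation via map and reduce.
function average(data) {
  var sum = data.reduce(function (acc, value) {
    return acc + value;
  }, 0);
  return sum / data.length;
}

function standardDeviation(values) {
  var avg = average(values);

  // Squared difference of each value from the average.
  var squareDiffs = values.map(function (value) {
    var diff = value - avg;
    return diff * diff;
  });

  // Square root of the average squared difference.
  return Math.sqrt(average(squareDiffs));
}
```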

I can now run my standard deviation calculations by calling the “standardDeviation” function, passing in an array of numeric values.

Check out the working example in this JSFiddle, for the data set [1, 2, 3, 4, 5, 6, 7, 8, 9, 25]:

From here, I can apply the standard deviation to the average value.

Applying The Standard Deviation

Given the above data set, the average will be 7 with a standard deviation of 6.48. 

The application of standard deviation is often shown as a +/- on top of an average. The +/- value is the standard deviation. Therefore, with the data set above, we can say that we have an average value of 7 with a standard deviation of +/- 6.48. 

The result of this information is shown in the original chart at the top of this post.

Standard deviation

In this graph, the average is the solid yellow line through at the value of 7, while the standard deviation is shown with dashed lines on the upper and lower bounds: 7 – 6.48 for the lower bounds, and 7 + 6.48 for the upper bounds.

And there we have it… the standard deviation for a given set of values, calculated in JavaScript and graphed appropriately!

Array Map and Reduce

In the above code I’ve used both the map and reduce functions of a JavaScript array. These are powerful functions that allow you to quickly and easily manipulate an entire set of data in an array. Unfortunately these methods are not available everywhere. If you need support for IE 8 or below, for example, you will need to use a library like Underscore or Lodash, both of which provide a map and reduce function built with backward compatibility. For recent browsers, NodeJS and other modern JavaScript runtime environments, though, map and reduce are built in to the Array prototype and are available to use.
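As an illustration of the fallback approach, a guard like the following keeps reduce available where the runtime lacks it. This is a deliberately simplified sketch of mine (it always requires an initial value); production code should prefer a complete ES5 shim or Underscore/Lodash, as noted above.

```javascript
// Simplified fallback sketch: provide reduce where the runtime lacks it.
// Unlike the real spec behavior, this version requires an initialValue.
if (!Array.prototype.reduce) {
  Array.prototype.reduce = function (callback, initialValue) {
    var acc = initialValue;
    for (var i = 0; i < this.length; i++) {
      acc = callback(acc, this[i], i, this);
    }
    return acc;
  };
}

var total = [1, 2, 3].reduce(function (a, b) { return a + b; }, 0);
```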



More Fun With Arrays

Looking for more fun things to do with arrays? Check out these episodes of WatchMeCode, where I walk through some of the interesting things that can be done with them!


Categories: Blogs

McKinsey on Unleashing the Value of Big Data Analytics

J.D. Meier's Blog - Sun, 09/21/2014 - 19:15

Big Data Analytics and Insights are changing the game, as more businesses introduce automated systems to support human judgment.

Add to this, advanced visualizations of Big Data, and throw in some power tools for motivated users and you have a powerful way to empower the front-line to better analyze, predict, and serve their customers.

McKinsey shares a framework and their insights on how advanced analytics can create and unleash new business value from Big Data, in their article:
Unleashing the value of advanced analytics in insurance

Creating World-Class Capabilities

The exciting part is how you can create a new world-class capability, as you bake Big Data Analytics and Insights into your business.

Via Unleashing the value of advanced analytics in insurance:

“Weaving analytics into the fabric of an organization is a journey. Every organization will progress at its own pace, from fragmented beginnings to emerging influence to world-class corporate capability.”

5-Part Framework for Unleashing the Value of Big Data Analytics

McKinsey's transformation framework involves five components: the source of business value, the data ecosystem, modeling the insights, work-flow integration, and adoption.

Via Unleashing the value of advanced analytics in insurance:

  1. The source of business value. Every analytics project should start by identifying the business value that can lead to revenue growth and increased profitability (for example, selecting customers, controlling operating expenses, lowering risk, or improving pricing).
  2. The data ecosystem. It is not enough for analytics teams to be “builders” of models. These advanced-analytics experts also need to be “architects” and “general contractors” who can quickly assess what resources are available inside and outside the company.
  3. Modeling insights. Building a robust predictive model has many layers: identifying and clarifying the business problem and source of value, creatively incorporating the business insights of everyone with an informed opinion about the problem and the outcome, reducing the complexity of the solution path, and validating the model with data.
  4. Transformation: work-flow integration. The goal is always to design the integration of new decision-support tools to be as simple and user-friendly as possible. The way analytics are deployed depends on how the work is done. A key issue is to determine the appropriate level of automation. A high-volume, low-value decision process lends itself to automation.
  5. Transformation: adoption. Successful adoption requires employees to accept and trust the tools, understand how they work, and use them consistently. That is why managing the adoption phase well is critical to achieving optimal analytics impact. All the right steps can be made to this point, but if frontline decision makers do not use the analytics the way they are intended to be used, the value to the business evaporates.

Big Data Analytics and Insights is a hot trend for good reason. If you saw the movie Moneyball, you know why.

Businesses are using analytics to identify their most profitable customers and offer them the right price, accelerate product innovation, optimize supply chains, and identify the true drivers of financial performance.

In the book, Competing on Analytics: The New Science of Winning, Thomas H. Davenport and Jeanne G. Harris share examples of how organizations like Amazon, Barclay’s, Capital One, Harrah’s, Procter & Gamble, Wachovia, and the Boston Red Sox, are using the power of Big Data Analytics and Insights to achieve new levels of performance and compete in the digital economy.

You can read it pretty quickly to get a good sense of how analytics can be used to change a business, and the more you expose yourself to the patterns, the more you can apply analytics to your own work and life.

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

Management Innovation is at the Top of the Innovation Stack

Categories: Blogs

Hands-on Test Automation Tools session wrap up - Part1

Xebia Blog - Sun, 09/21/2014 - 16:57

Last week we had our first Hands-on Test Automation sessions.
Developers and Testers were challenged to show and tell their experiences in Test Automation.
That resulted in lots of in depth discussions and hands-on Test Automation Tool shoot-outs.

In this blogpost we'll share the outcome of the different sessions, like the famous Cucumber vs. FitNesse debate.

Stay tuned for upcoming updates!

Test Automation Frameworks

The following Test Automation Frameworks were demoed and discussed:

1. FitNesse

FitNesse is a test management and execution tool.
You'll have to write/use fixture code if you want to use Selenium / WebDriver, webservices and databases in your tests.

Pros and Cons
You can drill down nicely into test results.
You can make use of scenarios and scenario libraries to make test automation steps reusable.
But refactoring is hard when scenarios are used extensively, since there is no IDE support (yet).

2. Cucumber

Cucumber is a specification tool which describes how software should behave.
You'll have to write/use step definitions if you want to use Selenium / WebDriver, webservices and databases in your tests.

Pros and Cons
Cucumber forces you to write specifications / tests as scenarios (behaviour in human-readable language).
You can drill down into test results, but you'll need reporting libraries like Cucumber Reporting.
We recommend using IntelliJ IDEA with the Cucumber plugin, since it supports Cucumber seamlessly.
Refactoring becomes less problematic since you're using an IDE.

3. Selenium / WebDriver IDE

Selenium / WebDriver automates human interactions with a web browser.
With the Selenium IDE you can record and play back your tests in Firefox.

Pros and Cons
It can get you started very quickly. You can record and play back your test scripts without writing any code.
Reusability of test automation code is not possible. You'll need to export your scripts into an IDE to introduce reusability.

Must haves in Test Automation Tools

During the parallel sessions we've collected the following must haves in test automation tools.

Testers and Developers becoming best friends

When developers do not feel comfortable with the test automation tool, testers will try to fill the gap all by themselves. Most of the time these efforts result in hard to maintain test automation code. At some point in time test automation will become a bottleneck in Continuous Delivery. When picking a test automation tool consider each other's needs and pick a tool together. Feeling comfortable in writing and maintaining test automation code is really important to make test automation beneficial.

Separating What from How

Tools like FitNesse and Cucumber were designed to separate the What from the How in test automation. When you combine both in those tools, you end up getting lost in details and you lose focus on what you are testing.
Use tools like FitNesse and Cucumber to describe What you are testing, and put all the details about How you are testing in test automation code (like fixture code and step definitions).

Other interesting tools
  • Thucydides: Reporting tests and results (including functional coverage)
  • Vagrant: Provisioning System Under Test instances
  • Liquibase: Treating database schema changes as 'code'



Categories: Companies

The Danger of Point Solutions

NetObjectives - Sun, 09/21/2014 - 14:58
The software development world has created several approaches to improving the work at the team level. These include eXtreme Programming, Scrum, Kanban, and the Kanban Method. While all of these solutions are based on some degree of reality, much of their organization and practices are based on the belief systems of their creators. I think we can get the best of what these all contribute not merely by...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Release Planes versus Release Trains

Leading Agile - Mike Cottmeyer - Sun, 09/21/2014 - 11:49

There is a lot of talk these days about SAFe. I have a lot of respect for what Dean Leffingwell has done, but there is a minor use of language that has been bugging me in recent days. Just as I disagree with using farm animals to describe people on a Scrum team, I believe the Release Train metaphor is dated and has its limitations. I believe that, when doing Agile at scale, a Release (Jet) Plane offers a better representation of the complexity of enterprise-level delivery processes.

When I think of a train, I think of Amtrak in the Northeast Corridor, the DC Metro, or Schoolhouse Rock. I don't think of a train when thinking of the most modern or efficient mode of transportation. So why pick the train?

I get the metaphor:

  • The train departs the station and arrives at the next destination on a reliable schedule.
  • All “cargo,” including code, documentation, etc., goes on the train.
  • Most people needed on the train are dedicated to the train.

When speaking to architecture, SAFe refers to an architectural runway, not architectural train tracks or a rail yard. It's a mixed use of transportation metaphors that is never explained. The runway aligns perfectly with my Release Plane metaphor.

  • Most people needed on the plane are dedicated to the plane.  Sure, each plane has a flight crew but they also have a ground crew at each airport (DevOps).
  • All “cargo,” including code, documentation, etc., goes on the plane.  I also recognize that some cargo is more valuable than others. Let’s put them in first class or allow early boarding.
  • Planes depart airports and arrive at the next destination on a reliable schedule, unless you fly American Airlines.

A few more comparisons:

  • When you go to the airport and try to get you and your luggage on a plane, don’t you and your luggage (the cargo) go through a series of checkpoints?
  • If you don’t get past a checkpoint, do you still think you’re going to get on the plane?
  • Doesn’t everyone have to comply with cargo size limitations?

None of these additional comparisons apply to the release train metaphor as well as the release plane.

I just thought I would bring this up. Can you think of anything else that would apply to my plane metaphor? Maybe we’ll see the release plane used in SAFe 4.0.

The post Release Planes versus Release Trains appeared first on LeadingAgile.

Categories: Blogs

Building the Corporate User Interface

Agile Tools - Sun, 09/21/2014 - 07:47


A while back I did a talk on the subject of “Hacking the Organization”. It was largely inspired by Jim McCarthy’s talk at a local Open Space. Listening to his talk I realized that people who have programming skills AND insight into processes have a unique opportunity to reprogram the organizations that they work in. This reprogramming can be done in a few different fashions:

Changing the processes: Changing the way people work by introducing new methods, practices and protocols

Changing the systems: integrating the systems to make reporting, operations, and other business processes work more smoothly

Blending the processes and the systems: Changing the way people work and the systems that they work with so that they support each other – making people more alive and engaged in the organization. It’s merging the people and the machine to enhance each other.

In fact, in the lean/agile community, we have become very adept at creating relatively high functioning teams using practices that have evolved significantly over the last few decades. Technology has evolved at an even faster rate, with the web, mobile and other technologies creating opportunities for collaboration that never existed just a few short years ago. Modern teams have the opportunity to revolutionize the way people and systems work together.

Those of us who can code and who are interested in improving the process to benefit everyone are the magicians who have a uniquely powerful opportunity to create real change in organizations. That’s not a bad thesis, right?

Well, the more I thought about it, the more I realized that we are not simply trying to change the organization for the pure sake of change. We strive to make the organization more “user friendly”. Our changes aim to make the organization a place where people can find joy in their work and express passion for what they do. That sort of engagement forms the catalyst for genuine innovation and products that people love.

What would an organization with a good user interface look like? That’s easy:

  • It’s fun to use
  • It has functions we care about and are easy to find
  • We can clearly see how to do what we need to
  • When we take action it is effortless and feels natural
  • It’s responsive, giving users rapid feedback to their actions

What could we do as organizational hackers to achieve these ends? We could introduce gamification to corporate operations. Turn in your timecard promptly and level up! Create electronic systems that make it easier to reward or thank our peers for their work. We could create dashboards that provide visualizations of corporate operations. We can make this information universally available, even omnipresent, for everyone in the company from the CEO to the janitor. We can make our work visible so that people can make educated decisions about the most important work they could be doing. All corporate activities should be self-serve and provide immediate feedback.

Wouldn’t that be awesome?

We can do that. We don’t have to ask for permission, we can just do it. Link the sales reporting system to a dashboard. Go ahead, do it. Pull in some transaction metrics and do a little simple math to demonstrate the average dollars per transaction. Automate the HR system so that with the press of a single button you can initiate hiring someone – automate all the paperwork. You make the work easier for everyone. You not only save yourself time, but you save time for everyone else who does that work now – from now on! This kind of savings can multiply very quickly.

The coding really isn’t that hard – most back office systems have pretty sophisticated APIs that enable this kind of integration. All it takes is someone with the will to make it happen. Guess who that is? You.
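As a sketch of that "little simple math", the snippet below computes average dollars per transaction from a stand-in API payload. All names and the JSON shape are assumptions for illustration, not any particular back-office system's API.

```ruby
require 'json'

# Stand-in for what a transaction-metrics API call might return.
payload = '[{"amount": 120.0}, {"amount": 80.0}, {"amount": 40.0}]'
transactions = JSON.parse(payload)

# The "simple math" behind a dashboard tile: average dollars per transaction.
total   = transactions.sum { |t| t["amount"] }
average = total / transactions.length

puts format("$%.2f per transaction", average)   # => $80.00 per transaction
```

Swap the literal payload for a real API call and push the result to your dashboard, and you have one of those always-visible numbers everyone can see.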

Filed under: Agile Tagged: Agile, back office, core protocols, hacking, integration, organization
Categories: Blogs

Enough With Defending Approaches

NetObjectives - Sat, 09/20/2014 - 19:14
My banter on twitter about being unhappy with Scrum and the Kanban Method leaving things out is reasonably well known.  I very often exclaim that Scrum should be done within the context of lean (a 10+ year old rant) and that the Kanban Method must attend to teams (which it’s been deemed to be orthogonal to).  When I make my claims I usually hear that “these approaches are designed this way”....

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

XProgNew – Working with layout - Sat, 09/20/2014 - 17:04

I’ve been trying to learn some things about how the new implementation will handle the front page. The front page has a top row with the most recent articles in two categories, “Beyond Agile” and “The Annals of Kate Oneal”, and a randomly selected “Classic” article.

A row below that contains the five most recent articles of any category, except not anything from the first row. (No duplicates.)

The team has no real understanding of how all this Sinatra stuff should ideally hook together. We know some things that work, but not yet how it “should” be done. So today I was spiking some things.

Last night I downloaded some books on Bootstrap, because it is a famous “mobile-first” framework, and I thought I should know about it. Since it is also easy to start with, I decided to try some things with it.

The original home page looks like this:


Thinking about the entities represented on that page, coming from the articles repository, we have a couple of collections of articles. For each one we need to have a picture, a title, and a precis. Each article needs to know those things, it seems to me, and deliver them to the page to be rendered.

We also have a “header” which is the bit that says “Beyond Agile” and so on. We could go one of at least two ways on that. We could have the header string in the HTML, or we could provide it as part of the article. It seems to me that selecting what will be in that top row is a decision, and one that is kind of a “site strategy”. Rather than have the page be edited when this list changes, I am inclined to make the change in the model — that is, in the code that supports the page and provides its content.

So my plan is that the “Article” will provide a header, a little picture (icon), a title, and a precis. I’m only part way there right now. The Ruby / Sinatra code looks like this:

require 'sinatra/base'
require 'kramdown'

class SinatraDemo < Sinatra::Base
  set :static, false

  Article =, :header, :title, :content)

  a1 ="katetrans.png", "Beyond Agile",
                  "This is interesting",
                  "This is a particularly interesting article which all should consider and enjoy.")
  a2 ="kate.png", "Annals of Kate Oneal", "Kate makes us laugh",
                  "That is a fascinating compendium of topical subjects arranged in an amusing fashion.")
  a3 ="kate.png", "Classics", "Consider this, vagrants!",
                  "The other is something well worth consideration by the thoughtful reader as well as the itinerant browser.")

  get '/' do
    actual_article = + "/articles/xprog-implementing-anew/")
    @article = markdown actual_article
    @fave = [a1, a2, a3]
    # this means that the erb has to know an array is coming.
    # could instead build up a longer HTML but the erb has a header in it
    erb :frontpage
    # what's returned here is what's displayed
  end

  get '/image/:img' do
    send_file File.join('articles/xprog-implementing-anew/', params[:img]),
              :type => 'image/jpeg', :disposition => 'inline'
  end
end
What we have there, in addition to the Sinatra boilerplate, is a struct “Article” that contains the fields we need for display. They are all text at this point. It may be that we’ll have markdown to worry about later.

We define three articles with the image name of their “icon”, header, title, and the precis (which is called “content”) in the display above. Writing this, I realize I need a better name. Like precis.

In our get, we’re creating an actual_article by reading a file from one of the folders I’m using as a repository. For testing this article, I’ll change that code to use this file. Further out, we’ll have a list of articles, click on one, and display that page. Right now we’re spiking to learn things.

We markdown the actual_article into a member variable called @article, and we create an array, @fave, containing our three Article structs. Then we render the frontpage with erb and see what comes out. Here’s a look:


What we see here is that there are three articles at the top and they all have (almost) the same icon. (The first one’s picture is transparent, the other two have a background. Note the difference in the Ruby code: one of them uses katetrans.png)

Now let’s look at the template for this page, in the file frontpage.erb. It looks like this:

Screen Shot 2014-09-20 at 11.15.20 AM

Let me call out a few things of note. Up in the header, there are some meta lines and a link. I copied those directly from the book. I am not entirely clear what they do, and not very worried about it. There’s a lot that I do (nearly) understand, so I’m happy about that.

Down after that empty line, we have a div with class of “row”, then a patch of code, embedded in script brackets, saying @fave.each do | s |. That’s an embedded ruby statement opening a loop over the instance variable @fave. That this works tells us that somehow, this HTML page has access to our instance variables. It is as if it is part of our code: inside this erb file, self.class returns SinatraDemo, the name of the Ruby class shown above. I have no idea how this works. I’d like to know someday but for now it’s just another magic incantation.

Reading on down that mixture of Ruby and HTML, we see that inside the row, we are looping over each item in @fave, each one with header, title, and content. Each one is inside a div with class "col-lg-3 col-md-4" and so on. This is the Bootstrap CSS magic that makes the columns we see in the picture above, where the little Kate pictures are. If I resize the browser window, the CSS makes things line up differently until finally, if it’s narrow enough, it goes down to one column like this:


Pretty sweet, if you ask me.

One more thing (Columbo)

Note that “get ‘/image/:img’ do” down toward the bottom of the Ruby code. What’s that about? Well, that’s the code for the problem mentioned in another article, up on my real web site, about displaying images. I want the images with the article. The current design is that every article should be in its own folder, with its images, and its metadata. That would make it convenient to edit articles and to pack them up. And it would mean that in Markdown, indicating an image will be as simple as possible: ![](foo.png).

Rackup and Sinatra, in our infrastructure, assume that images will be in the “public” folder. They also want to be sure people can’t go browsing around outside the public folder. So there is folderol and rigmarole going on to ensure that no URL can be served from outside that folder. Somehow they’re controlling what folders you can access, and no amount of dot-dotting and the like will get you out. Security is a good thing, we’re down with that.

So I tweeted for help on this topic, and put a little article on XProgramming, and Cory Foy, among others, went to the trouble of demonstrating a way to do it. Our version of that, right now, is that the image is shown as ![](image/foo.png) in the Markdown, and the code in that get is tricking Sinatra into serving the picture in line.

That won’t do long term. In particular, we don’t know right now what the folder is for that particular picture, so the folder name is in there literally. (That’s true in the main get as well.) That will get sorted out in the future, and we’ll probably wind up putting all the article folders under public as part of that.

Anyway, that’s what’s going on there …

Summing up and it’s about time if you ask me

What we have here is a spike of a moderately complicated story. In that spike we have learned a lot:

  • Experimenting with Bootstrap CSS suggests that it’s pretty easy to get a mobile-first design that’s responsive.
  • Experimenting with communication between the template and the Ruby code shows that we can share information with member variables and we can process fairly complex objects and collections.
  • We see that writing Ruby in line looks weird. We also found that it’s very fiddly.
  • An “Article” class is probably emerging. Right now it’s just a Struct, but I can feel behavior right around the corner.
  • Our assumed folder design conflicts with Rackup and Sinatra a bit.
  • None of this looks right, yet all of it works pretty well.

This is the nature of spikes. We get everything connected up. Having learned one way to do things, we’re past “impossible” and into finding better ways.

Bill and I will figure out next steps on Monday. Some candidate ideas are:

  • Get this stuff into a shared repository. Currently he and I are working separately on separate code all the way down.
  • See about pushing this to a public site (Heroku?) so that you can follow along.
  • Decide where to put articles and images.
  • Get the index page producing real lists of articles, not made-up ones.
  • Work out ways to let multiple gets on a page know where we’re coming from so they can plug in the right folders as needed.
  • … and surely much more.

So far so good. Thanks for coming along. Write me an email if you have a helpful idea.

Categories: Blogs

XProgNew – Progress Report - Sat, 09/20/2014 - 16:43

Tozier and I worked on the new thing yesterday. I had written a little article with a picture in it, to drive our plan of keeping each article in a folder named by its “slug”, with pictures and metadata in that same folder.

This would allow the ![]() notation for pictures to “just work”. Well, it didn’t just work. After long internetting, we found that Sinatra wants “assets” like that to be in a folder named “public”, and it seems that templates are evaluated as if they are in that folder, even though they are not.

We found that you can change that folder. That turns out to mean only that you can give it another name, not really say “assets are now in ‘this/that/’”. We found that you can’t navigate out of that folder by any combination of “../” magic. We found that this is probably a function of “rackup”, not of Sinatra.

Eventually, Cory Foy tweeted a reply that I was finally able to understand, which involves using send_file upon seeing the get for the picture. Understanding required me to learn, or relearn, a little about Ruby, about Sinatra, and about how the Internet works. Also maybe some quantum physics, I’m not sure. Anyway, thanks to Cory and all the folks who tweeted info and help in response to my crying in the wilderness.

The result is that in three three-hour sessions, we have my computer set up with the right stuff, and a rudimentary Sinatra thing running that looks a lot like what we’re trying to do. We have not, however, started TDDing, but with no methods and each get being about two lines, there’s not much to TDD yet. Next week for that, too, I think.

Categories: Blogs

XProgNew – A New Implementation - Sat, 09/20/2014 - 16:41

XProgramming is implemented using WordPress. A number of things make me feel I need to re-implement it. It needs a look-and-feel revamping; my webmistress is moving on; WordPress updates don't always just work; the existing implementation is crufty; it might be fun; I have a pal who’s doing something similar; and so on.

Bill Tozier is building a new site for his wife, and one for himself, and suggested that since my site is getting tatty, we might pair on mine. And Laura Fisher, who put the current one together, wants to be sure I’m in good shape before she jets off to California. So here we go.

I’ve written a bunch of cards to serve as a backlog, which I’ll list in a separate article. I’ll write a report for each “Sprint”, as we do things. Here, I’ll just introduce what’s going on.


When Laura moves on, I’d like to be able to maintain the site’s structure as well as content.

I’d like it to be easier to write articles: I’m using Markdown a lot lately and liking it. It’d be nice to just write an article, click a button, and voila! it’s up on the site, pictures and all.

I’d like to build something made out of software, and to learn a bit in the process.


Bill’s using Ruby, Sinatra, and the usual suspects, and since I like Ruby, why not. I plan to keep the site with my current ISP, but we might do the building on Heroku, just for practice and to learn how to do that.

Here’s a picture I drew during our early chat on the project. I include it mostly so that we’ll have to implement pictures.


We’ll pair on most everything, and test-drive what makes sense to test-drive. I don’t find it easy to stick to my guns on testing things like web sites, especially with small steps, modular code, and very visible effects. So I’ve asked Bill to be sure to push on that.

We both want to understand “everything” so we’re not dividing up the development work, though I imagine we’ll both “spike” things from time to time. We’re doing our joint work at the BAR, the Brighton Agile Roundtable, at the Barnes and Noble coffee shop in Brighton, Michigan.

Bill has a better handle on installing the stack of stuff you have to have to build things. (He’ll be documenting details on that: I plan not to, but we’ll see.) And I hate installing development stuff: it never works the way it is supposed to and then you have to run around on the Internet figuring out what to do. His help is valuable in that he has done more of it and seems a bit more patient with it than I am.

Looking Forward

I hope to write a little article like this one, reporting on every day or every couple of days we work. I may put them up on the current site, and will in any case be using them as the core articles for the new site. Stay tuned!

Categories: Blogs

Recommendations for Kanban Coaching Professional Masterclass

Recent attendees of the Masterclass tell you what they valued and why you should attend...

David's approach to training is truly unique. I now have a different lens to view my team's upstream work, current work in progress, and deeper knowledge on how to communicate risk without disrupting the flow of changes throughout the organization. What David has created with his Modern Management Framework is a revolutionary way of thinking for an evolutionary way of change. – Jay Paulson


read more

Categories: Companies

Hunting for The Elusive Swarming Rule

Agile Tools - Sat, 09/20/2014 - 07:58


There are a variety of different ways of going about swarming. There is the ad hoc approach: all hands on deck, everybody find a way to help. Then there are methods that are a bit more subtle. You aren’t just dog piling on whatever the problem is. Instead, what you are doing is applying simple rules that allow behavior to emerge. Easy, right?

Not really. The trick is learning what those rules might be. We are looking for simple rules that, when followed, may reveal emergent patterns that are hard to predict. That very emergent nature makes them hard to discover. You can’t just say, “Hey, I want a simple rule to build the next mobile app.” It just doesn’t work that way. The rule and the emergent behavior often have absolutely no apparent relationship to each other. So how can we find these rules?

One way to do it is to simply look at others and see what they do. Perhaps there are things people do that bring about the behavior. The problem with this approach is that you need to find a source of pretty significant variation in behavior in order to have the best chance of discovering the kind of behavior you are looking for. That leads me to this conclusion: You aren’t going to find it where you work. In fact, you probably aren’t going to find it in your industry (yeah, I’m talking about software). If you are looking for these kind of rules you need to cast your net really wide. Across multiple industries.

If it were me, I’d look into industries like logistics and shipping, and into the printing industry (they are the reigning kings of resource planning). I’d look across different cultures at food vendors in the streets of India, or the techniques a London cabby uses to remember all of the streets and byways of London. I’d go to emergency call centers, emergency rooms, trucking dispatchers: anything you can imagine is fair game. Start with the end state in mind (at least in broad terms) and then work your way down toward the specific building blocks that might get you there. It’s probably a fool’s errand, but at least you are looking for an answer.

Perhaps it can only come from experience?

It reminds me of deer hunting with my father. In the darkest and coldest hours of the early fall morning we would leave our camp to hunt. As we walked out to wherever we were hunting for the day, we would often debate our strategy for the hunt. My Dad always told me that the best strategy was to pick a likely spot and wait for the deer to come to you. If you were patient and sat still, you were much more likely to see something or have a deer come stumbling across your path. He said that’s why old guys were better hunters. They would go out and fall asleep on a stump, and wake up an hour later to find a monster buck blithely munching away right in front of them. All you have to do is sit there perfectly quietly. All. Day. Long.

Now to me, as a teenager, this sounded like an excellent recipe for going rather swiftly and completely insane. There we were, out in the hills, wandering through what could easily be described as “God’s Country.” There was deer habitat everywhere. A herd of deer over every hill. They could be down in the next valley beside the river in the morning. Later in the day they might be high on the ridges, bedded down among the rimrocks. The only way to get those deer was to go to where they were. You had to move around and cover some territory.

I spent days roaming the hills and valleys hunting – and finding nothing but dry grass. I would cover 20 miles a day. It turns out that I make a hell of a lot of noise when I’m roaming around. Even when I’m being really sneaky I still make an unholy racket (at least to a deer). They hear you coming a mile off. That’s if they don’t see you silhouetted against the skyline as you cross the ridge. Or smell you coming from the next county as you sweat your way up to the rimrocks. It seemed like wherever I went, they were always long gone.

You see, each of us was practicing a simple rule. My Dad’s rule: sit still. Mine? Keep moving. So who did the best? Dad.

Me? I scared the hell out of every deer, fox, mule, turkey and woodchuck in a 20 mile radius. On the bright side, I never spent a dime on bullets.

I eventually learned which rule worked best, but it took me time and experience to get there.

Filed under: Agile, Swarming Tagged: Agile, complexity, Hunting, rules, simple rules, Swarming
Categories: Blogs

Show A Friendly Error Message If A User Specified Image URL Doesn’t Load

Derick Bailey - new ThoughtStream - Sat, 09/20/2014 - 01:46

SignalLeaf allows a podcaster to specify an image for the podcast and/or episodes. To do this, you paste a URL to an image in to an input box. It’s a pretty standard setup, overall.


With this being a public facing system, letting people specify any URL they want often leads to mistakes – the most common of which is pasting in a web page URL that contains the image they want. For example, someone might paste a media file location from a WordPress blog, and accidentally paste the view page instead of just the image URL. When this happens, I want to show people a friendly error message to say something went wrong and they need to fix it. Fortunately, this is fairly simple with jQuery and the error / load methods for loading images.

Handling Errors On Image Load

The first thing to set up is a jQuery selector around your image tag, and a handler for the “change” event of the URL input. Once you have that, you can call the .error method on the img selector object. In this callback, run additional code to show an error message of some kind.

In my case, I’m showing a previously hidden error message and also hiding the image preview “img” tag. The end result looks like this:


(note that I left the single-pixel border and image preview space in place, to show that no image was loaded)

Handling Successful Image Load

With the error handler in place, you will also need a success handler for when the image loads properly. You don’t want to keep showing the error message, or keep hiding the image, when it does load successfully after all.

Modify the existing code to add a .load method call on to the img tag selector. This takes another callback which can be used to show the image preview and hide the error message.

The end result is what I showed in the first screenshot above – a successful image load with the URL in the input box.

A Note On When To Wire This Up

One thing you need to watch for is when you wire up the .error and .load callback methods. You need to ensure this is done before the <img> has a src attribute – before the image is loaded. If you don’t put the error and load callbacks in place prior to the image source being set, the callback methods won’t be fired and things won’t work.
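Putting the pieces together, a minimal sketch of the whole setup might look like the following. The element IDs (#image-url, #image-preview, #image-error) are invented for illustration, not SignalLeaf's real markup, and the handlers are wired before any src is ever assigned.

```javascript
// Sketch only: all element IDs are assumed, not real SignalLeaf markup.
var $preview = $("#image-preview");
var $error   = $("#image-error");

// Wire the handlers FIRST, before a src is assigned, or they won't fire.
$preview.error(function () {
  $preview.hide();
  $;            // the friendly "that doesn't look like an image" message
});

$preview.load(function () {
  $error.hide();          // a good load clears any earlier error
  $;
});

// On change, point the preview at whatever URL the user pasted.
$("#image-url").on("change", function () {
  $preview.attr("src", $(this).val());
});
```

Note that .error() and .load() are the jQuery event shorthands of that era; this snippet needs a browser and jQuery on the page to actually run.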

A Much Better User Experience

Having a simple setup to show error messages like this is one way to help create a better user experience. Instead of just giving people an input box and hoping for the best, this code provides immediate feedback on whether or not they got it right. Giving your users that kind of knowledge as quickly as possible will save a lot of frustration and headache, both for the people using your system and for the support people who would otherwise have to help fix simple mistakes.

Categories: Blogs
