
Feed aggregator

Agile Requirements: a Definition of Ready Checklist

Scrum Expert - Tue, 11/01/2016 - 17:30
We all know the “Definition of Done” used in Scrum for items that should be potentially shippable to the customer at the end of the sprint. In his book Essential Scrum, Kenneth Rubin discusses the “Definition of Ready” that applies to product backlog items that should be ready to be developed before the start of the sprint. Grooming the product backlog should ensure that items at the top of the backlog are ready to be moved into a sprint so that the development team can confidently commit and complete them by the end of a sprint. Some Scrum teams formalize this idea by establishing a definition of ready. You can think of the definition of ready and the definition of done as two states of product backlog items during a sprint cycle. Both the definition of done and the definition of ready are checklists of the work that must be completed before a product backlog item can be considered to be in the respective state. An example of a definition-of-ready checklist for product backlog items is given below.

Definition of Ready

  • Business value is clearly articulated.
  • Details are sufficiently understood by the development team so it can make an informed decision as to whether it can complete the product backlog item (PBI).
  • Dependencies are identified and no external dependencies would block the PBI from being completed.
  • Team is staffed appropriately to complete the PBI.
  • The PBI is estimated and small enough to comfortably be completed [...]
Categories: Communities

OMG They made me Product Owner!!

Xebia Blog - Tue, 11/01/2016 - 14:24
The face of the guy in the hallway expressed a mixture of euphoria and terror when I passed him. We had met at the coffee machine before, where we discussed how the company was moving to a more Scrum-based way of developing their products. “You sort of know how this PO thing
Categories: Companies

For and Against and For Software Craftsmanship

Leading Agile - Mike Cottmeyer - Tue, 11/01/2016 - 13:00

The idea of software craftsmanship, as expressed in the Manifesto for Software Craftsmanship, is (in part) to encourage software developers to strive for excellence in their work in order to create productive partnerships with customers and to add value steadily for those customers.
The highly respected software developer and customer-focused consultant, Dan North, blogged in 2011 that “Software Craftsmanship risks putting the software at the centre rather than the benefit the software is supposed to deliver.” Let’s ignore (or try to ignore) the obvious contradiction between the critique of the concept and its actual expression, and examine an analogy Dan uses to illustrate his point.

He points out that in a craft such as, for instance, cathedral-building, the work is intrinsically beautiful in its own right. In contrast, using the same sort of stone as was used in the cathedral to build a bridge, the goal is to make the bridge sturdy and utilitarian, such that people don’t even notice it.

As I see it, both the cathedral and the bridge are equally beautiful. Each is designed to serve a particular purpose. One purpose of the cathedral is to inspire awe and wonder in those who see it. This is one of the ways in which it performs its function in society. One purpose of the bridge is to be functional without distracting users from their own business. This is one of the ways in which it performs its function in society. These are different design goals, and yet both require the same degree of engineering skill and craftsmanship.

Joel Spolsky has also questioned the usefulness of the term “craftsmanship” as applied to software. In a piece dating from 2003, he writes “If writing code is not assembly-line style production, what is it? Some have proposed the label craftsmanship. That’s not quite right, either, because I don’t care what you say: that dialog box in Windows that asks you how you want your help file indexed does not in any way, shape, or form resemble what any normal English speaker would refer to as ‘craftsmanship.'”

He’s right. The average English speaker does associate some sort of subjective notion of “beauty” or “artistry” with the word “craftsmanship.” But average English speakers don’t know any more about what makes the utilitarian bridge “beautiful” to an engineer than they know what makes the Windows dialog box “beautiful” to a software developer. And they don’t need to know that. It isn’t part of their world. They’re getting what they need from the cathedral, the bridge, and the dialog box. That is, in fact, the reason those things are recognized as beautiful by makers. If the dialog box performs its function without interfering with the user’s workflow, it’s damned beautiful. It’s as beautiful as a cathedral.

Another highly-respected software expert, Liz Keogh, has also weighed in against the idea of software craftsmanship; or at least, against the way the idea has been expressed. She writes, “I dislike the wording of the manifesto’s points because I don’t think they differentiate between programmers who genuinely care about the value they deliver, programmers who care about the beauty of their code, and programmers who hold a mistaken belief in their own abilities. Any software developer–even the naive, straight-out-of-college engineer with no knowledge of design and little knowledge of incremental delivery–could sign up to that manifesto in the mistaken belief that they were doing the things it espouses.”

She’s right. Many individuals overestimate their own abilities. I disagree that this invalidates the attempt to express an aspirational goal…and it says right near the top of the manifesto: “As aspiring software craftsmen…” (emphasis mine). So, it isn’t a question of people believing they’re already software craftsmen. Therefore, although Liz is correct in saying some people overestimate their own abilities, that fact has nothing to do with the document in question.

Liz is also right that different statements in the manifesto address different topics: Both customer value and code quality are mentioned. One is a goal and the other is a means. Both should be mentioned.

And there’s a false dichotomy in Liz’s comment, I think. Why would a software craftsperson not care about both the value they deliver and the beauty of their code? Does one negate the other? Indeed, doesn’t attention to clean design help support value delivery? Badly-designed code is more likely to contain errors and more likely to be hard to maintain than well-designed code.

The manifesto, like most products of human beings, is imperfect. If we were to wait for a thing to be perfect before finding it in any way useful, then we’d still be fleeing from sabre-toothed cats in the tall grass of the savannah. Actually, come to think of it, we wouldn’t. We’d be dead. Our ancestors would have eschewed any less-than-perfect means of escape. Having eschewed, they would have been chewed.

Why is it that critics of software craftsmanship seem to miss the point? I might offer three humble observations.

1. Snap judgment

The criticisms of the manifesto almost universally suggest the critic has not read the document carefully. It’s possible that some people read the title and skim the thing, and then react on a gut level to one or more words they assume carry some implication they disapprove of.

It seems to me “add value steadily [for] customers” doesn’t mean “elevate the software above customer value.” Similarly, “aspiring software craftsmen” doesn’t mean “I overestimate my own abilities.”

2. Inability to compartmentalize thinking

When we try to understand a complicated thing, it’s often useful to switch between big-picture and focused thinking. We want to keep the whole in mind without discarding our ability to comprehend its parts.

The big picture is that the purpose of software development is to provide value to the stakeholders of the software. I doubt anyone means to elevate the craft of software development above that purpose. To think about, talk about, and strive to excel in the craft of software development takes nothing away from the larger goal of providing value to stakeholders. Indeed, such activity is motivated by the desire to provide that value. I might suggest it would be difficult, if not impossible, to deliver value to customers without paying due attention to craftsmanship.

If we turn the critique around, we might ask: How does one propose to provide value to the stakeholders of software without understanding or applying good software development practices? How does one propose to develop an understanding of good software development practices, and sound habits to apply them, without making a conscious and mindful effort to do so? If the stonemasons and others involved in construction had ignored the skills of their respective crafts, how good would the cathedral be? The bridge?

3. Limited conception of beauty

If we define “beauty” to suggest a close alignment between the finished product and its design goals, then I suggest both the cathedral and the bridge are beautiful from the perspective of the engineers, architects, and craftsmen who contributed to their construction. The fact the cathedral catches the attention of passers-by while the bridge goes unnoticed as people cross it means nothing less than both structures have achieved their design goals. Their “users” appreciate the value both objects bring, even if they don’t grasp the nuances of craftsmanship that went into their construction. And without those nuances, the cathedral would be nothing more than “a big hut for people to meet in” and the bridge would be a disaster waiting to happen.


At LeadingAgile, we appreciate software craftsmanship. We understand it exists solely to enable the delivery of value to customers. We’re frankly a bit confused when we hear or read interpretations that miss that point. We also understand that without attention to excellence of execution, no one can deliver value to customers. Has the manifesto been signed by people who may not be well qualified to speak of craftsmanship? Maybe, but the document is more a commitment than a diploma, so I think it’s fine for anyone to sign it who is on board with the concept. When you sign it, you place yourself publicly on a lifelong journey of learning and self-improvement. I’m at a loss to see what’s wrong with that.

The post For and Against and For Software Craftsmanship appeared first on LeadingAgile.

Categories: Blogs

What Is The Role Of Project Manager In Scrum?

Agile Learning Labs - Mon, 10/31/2016 - 23:21

This question came from a client: What is the project manager’s role in scrum?

In answer to your question about project managers, there is no official project manager role in scrum. The duties of a project manager get split between the product owner, the scrum master, and the development team.

In general, the product owner, who has the vision of the product and is the business representative, is accountable for making sure the business is kept up to date about the product, the schedule, and the budget. The product owner does this in multiple ways, including:

  • Grooming and refining the product backlog
  • Understanding the development team’s velocity so he/she has a sense of when backlog items may be ready for release
  • Communicating frequently with the stakeholders
  • In the sprint review meeting, helping the team demonstrate new features and facilitating conversations with the stakeholders on the direction of the product and the product backlog
  • Sharing and maintaining a budget

The scrum master is responsible for coaching the development team, protecting the team from changes during the sprint, training the team in scrum, helping them overcome obstacles, coaching and supporting the product owner, and facilitating scrum ceremonies (meetings like the retrospective). The scrum master is truly a servant leader. They are also the agile champion for the whole organization, working with executives and other departments to make scrum work. For example, as a scrum master I would coordinate the work between scrum teams and non-scrum teams if needed.

The development team takes work into a sprint and then they themselves decide who does what and when they do it. They have a daily scrum every day to update each other on status and plan the day, so there is no need for a project manager to hand out tasks or manage people to make sure they are working.

So what I ask project managers when they are switching to scrum is – Where do you find your passion? Is it in working with business, developing a product vision and getting that product to market, managing the status, deciding which work should be done, spending a lot of time with stakeholders getting their opinions and buy in?

Or do you love spending time with the development team, helping them win, solving problems and removing obstacles, helping people work better together, being an agile champion, training and coaching, creating retrospectives, and being in the weeds of how the development team works without being their manager or telling them what to do? In other words, helping the development team to be self-organizing?

This usually gives project managers a good sense of which scrum role they would be best suited for. It’s best that people choose where they want to provide value rather than being assigned to one role or another. I always ask project managers: Where do you think you could create the most value? That is everyone’s job in scrum, to create value anywhere they can.

Hope this was helpful.

Warm regards,

Categories: Companies

Coaches, Managers, Collaboration and Agile, Part 3

Johanna Rothman - Mon, 10/31/2016 - 22:33

I started this series writing about the need for coaches in Coaches, Managers, Collaboration and Agile, Part 1. I continued in Coaches, Managers, Collaboration and Agile, Part 2, talking about the changed role of managers in agile. In this part, let me address the role of senior managers in agile and how coaches might help.

For years, we have organized our people into silos. That meant we had middle managers who (with any luck) understood the function (testing or development) and/or the problem domain (think about the major chunks of your product, such as Search, Admin, Diagnostics, the feature sets). I often saw technical organizations organized into product areas with directors at the top, and some functional directors, such as those for test/quality and/or performance.

In addition to the idea of functional and domain silos, some people think of testing or technical writing as services. I don’t think that way. To me, it’s not a product unless you can release it. You can’t release a product without having an idea of what the testers have discovered and, if you need it, user documentation for the users.

I don’t think about systems development. I think about product development. That means there are no “service” functions, such as test. We need cross-functional teams to deliver a releasable product. But, that’s not how we have historically organized the people.

When an organization wants to use agile, coaches, trainers, and consultants all say, “Please create cross-functional teams.” What are the middle managers supposed to do? Their identity is about their function or their domain. In addition, they probably have MBOs (Management By Objectives) for their function or domain. Aside from the fact that silos don’t work and further reduce flow efficiency, now we have affected their compensation. Now we have the container problem I mentioned in Part 2.

Middle and senior managers need to see that functional silos don’t work. Even silos by part of product don’t work. Their compensation has to change. And, they don’t get to tell people what to do anymore.

Coaches can help middle managers see what the possibilities are, for the work they need to do and how to muddle through a cultural transition.

Instead of having managers tell people directly what to do, we need senior management to update the strategy and manage the project portfolio so we optimize the throughput of a team, not a person. (See Resource Management is the Wrong Idea; Manage Your Project Portfolio Instead and Resource Efficiency vs. Flow Efficiency.)

The middle managers need coaching and a way to see what their jobs are in an agile organization. The middle managers and the senior managers need to understand how to organize themselves and how their compensation will change as a result of an agile transformation.

In an agile organization, the middle managers will need to collaborate more. Their collaboration includes: helping the teams hire, creating communities of practice, providing feedback and meta-feedback, coaching and meta-coaching, helping the teams manage the team health, and most importantly, removing team impediments.

Teams can remove their local impediments. However, managers often control or manage the environment in which the teams work. Here’s an example. Back when I was a manager, I had to provide a written review to each person once a year. Since I met with every person each week or two, it was easy for me to do this. And, when I met with people less often, I discovered they took initiative to solve problems I didn’t know existed. (I was thrilled.)

I had to have HR “approve” these reviews before I could discuss them with the team member. One not-so-experienced HR person read one of my reviews and returned it to me. “This person did not accomplish their goals. You can’t give them that high a ranking.”

I explained that the person had finished more valuable work. And, HR didn’t have a way to update goals in the middle of a year. “Do you really want me to rank this person lower because they did more valuable work than we had planned for?”

That’s the kind of obstacle managers need to remove. Ranking people is an obstacle, as well as having yearly goals. If we want to be able to change, the goals can’t be about projects.

We don’t need to remove HR, although their jobs must change. No, I mean the HR systems are an impediment. This is not a one-conversation-and-done impediment. HR has systems for a reason. How can the managers help HR to become more agile? That’s a big job and requires a management team who can collaborate to help HR understand. That’s just one example. Coaches can help the managers have the conversations.

As for senior management, they need to spend time developing and updating the strategy. Yes, I’m fond of continuous strategy update, as well as continuous planning and continuous project portfolio management.

I coach senior managers on this all the time.

Let me circle back around to the question in Part 1: Do we have evidence we need coaches? No.

On the other hand, here are some questions you might ask yourself to see if you need coaches for management:

  • Do the managers see the need for flow efficiency instead of resource efficiency?
  • Do the managers understand and know how to manage the project portfolio? Can they collaborate to create a project portfolio that delivers value?
  • Do the managers have an understanding of how to do strategic direction and how often they might need to update direction?
  • Do the managers understand how to move to more agile HR?
  • Do the managers understand how to move to incremental funding?

If the answers are all yes, you probably don’t need management coaching for your agile transformation. If the answers are no, consider coaching.

When I want to change the way I work and the kind of work I do, I take classes and often use some form of coaching. I’m not talking about full-time in person coaching. Often, that’s not necessary. But, guided learning? Helping to see more options? Yes, that kind of helping works. That might be part of coaching.

Categories: Blogs

Sponsor Profile – Agile Pain Relief

Agile Ottawa - Mon, 10/31/2016 - 21:49
Can you tell us a little about yourself? Agile Pain Relief was founded by Mark Levison in 2009. Mark is a Certified Scrum Trainer who started using Agile in 2001, later introducing Cognos and IBM to Agile methods before starting … Continue reading →
Categories: Communities

Neo4j: Find the midpoint between two lat/longs

Mark Needham - Mon, 10/31/2016 - 21:31


Over the last couple of weekends I’ve been playing around with some transport data and I wanted to run the A* algorithm to find the quickest route between two stations.

The A* algorithm takes an estimateEvaluator as one of its parameters and the evaluator looks at the lat/longs of nodes to work out whether a path is worth following or not. I therefore needed to add lat/longs for each station and I found it surprisingly hard to find this location data for all the points in the dataset.

Luckily I tend to have the lat/longs for two points either side of a station so I can work out the midpoint as an approximation for the missing one.

I found an article which defines a formula we can use to do this and there’s a StackOverflow post which has some Java code that implements the formula.

I wanted to find the midpoint between Surrey Quays station (51.4931963543,-0.0475185810) and a point further south on the train line (51.47908,-0.05393950). I wrote the following Cypher query to calculate this point:

WITH 51.4931963543 AS lat1, -0.0475185810 AS lon1, 
     51.47908 AS lat2 , -0.05393950 AS lon2
WITH radians(lat1) AS rlat1, radians(lon1) AS rlon1, 
     radians(lat2) AS rlat2, radians(lon2) AS rlon2, 
     radians(lon2 - lon1) AS dLon
WITH rlat1, rlon1, rlat2, rlon2, 
     cos(rlat2) * cos(dLon) AS Bx, 
     cos(rlat2) * sin(dLon) AS By
WITH atan2(sin(rlat1) + sin(rlat2), 
           sqrt( (cos(rlat1) + Bx) * (cos(rlat1) + Bx) + By * By )) AS lat3,
     rlon1 + atan2(By, cos(rlat1) + Bx) AS lon3
RETURN degrees(lat3) AS midLat, degrees(lon3) AS midLon

The Google Maps screenshot on the right hand side shows the initial points at the top and bottom and the midpoint in between. It’s not perfect; ideally I’d like the midpoint to be on the track, but I think it’s good enough for the purposes of the algorithm.
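The midpoint formula in the Cypher query can be sanity-checked outside Neo4j. A minimal Python sketch of the same calculation, using the two coordinates from the query (the function name is made up for illustration):

```python
import math

def midpoint(lat1, lon1, lat2, lon2):
    """Geographic midpoint of two lat/long pairs (degrees in, degrees out)."""
    rlat1, rlon1 = math.radians(lat1), math.radians(lon1)
    rlat2 = math.radians(lat2)
    d_lon = math.radians(lon2 - lon1)
    # Same intermediate terms as the Cypher query
    bx = math.cos(rlat2) * math.cos(d_lon)
    by = math.cos(rlat2) * math.sin(d_lon)
    lat3 = math.atan2(math.sin(rlat1) + math.sin(rlat2),
                      math.sqrt((math.cos(rlat1) + bx) ** 2 + by ** 2))
    lon3 = rlon1 + math.atan2(by, math.cos(rlat1) + bx)
    return math.degrees(lat3), math.degrees(lon3)

# Surrey Quays and the point further south on the line
mid_lat, mid_lon = midpoint(51.4931963543, -0.0475185810, 51.47908, -0.05393950)
print(mid_lat, mid_lon)
```

The result should land between the two input points, roughly halfway along the line between them.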

Now I need to go and fill in the lat/longs for my location-less stations!

Categories: Blogs

Dockerfile Configuration Cheatsheets

Derick Bailey - new ThoughtStream - Mon, 10/31/2016 - 13:45

Building your own Docker image is just about the easiest thing you can imagine doing with a command-line tool. It’s only 3 “words” to build the image, after all.

But getting the Dockerfile right, so that these three words will run correctly and produce the results that you want? Well… that’s a bit of a different story.

Dockerfile configuration has dozens of options.

And several of them seem to do something similar or the same (ADD vs. COPY, and ENTRYPOINT vs. CMD, for example).

Then when you put all of these options on a single, endlessly scrolling “page” in the official Dockerfile reference, it’s easy to see how Dockerfile configuration can become frustrating – especially if it’s not something you do on a regular basis.
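To make the confusing pairs concrete, here is a minimal, hypothetical Dockerfile (the base image, file names, and ports are made up for illustration). COPY just copies local files, while ADD can additionally unpack local tar archives and fetch URLs; ENTRYPOINT fixes the executable, while CMD supplies default arguments that `docker run` can override:

```dockerfile
FROM node:6

# COPY only copies local files into the image; prefer it unless you
# need ADD's extras (auto-extracting tar archives, fetching URLs).
COPY package.json /app/
ADD vendor.tar.gz /app/vendor/

WORKDIR /app

# ENTRYPOINT is the fixed executable; CMD is its overridable default
# arguments. `docker run image --port 8080` replaces CMD but keeps
# the ENTRYPOINT, so the container still runs `node server.js`.
ENTRYPOINT ["node", "server.js"]
CMD ["--port", "3000"]
```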

To combat this problem, I created the Dockerfile Configuration and Advanced Dockerfile cheatsheets.

They represent the most common and useful Dockerfile configuration items, allowing you to quickly and easily be reminded of what options you should be using, when.

And like the Docker Management cheatsheet I created, they are free!

Download the Docker Cheatsheets

You can grab the cheatsheets individually:

 Or you can grab them all at once, with the cheatsheet collection

Get The Complete Docker Cheatsheet Collection

Docker cheatsheet stack

The post Dockerfile Configuration Cheatsheets appeared first on

Categories: Blogs

Links for 2016-10-30

Zachariah Young - Mon, 10/31/2016 - 09:00
Categories: Blogs

Agile Practitioners Israel, Tel Aviv, Israel, January 24-25 2017

Scrum Expert - Mon, 10/31/2016 - 08:30
The Agile Practitioners conference is the first community-led Agile and Scrum conference organized in Israel. The first day is dedicated to workshops and the second day will be full of interesting presentations from local and international Agile software development and Scrum project management experts. In the agenda of the Agile Practitioners you can find topics like “Rock, Paper, Stories”, “Agile mind games and the art of self delusion”, “Individuals, interactions and improvisation”, “BDD – Balloon driven development”, “Fostering diversity and inclusion in agile teams”, “Violating scrum – how far can you go?”, “The Spider-man antidote to the anti-pattern of agile leaders”, “How to successfully fail”, “Continuous product improvement”, “Technical… user stories?!”, “Architecting large features and products in an agile environment”. Web site: Location for the Agile Practitioners conference: Tel Aviv, Israel
Categories: Communities

CodeFreeze, Inari, Finland, January 15-19 2017

Scrum Expert - Mon, 10/31/2016 - 08:00
CodeFreeze is a two-day unconference taking place in Finland that defines itself as a “time and place for software craftspeople to meet”. It is part of the international group of SoCraTes conferences that are focused on software craftsmanship. The CodeFreeze Finland event follows the open space format for conferences. Open space is a simple methodology for self-organizing conference tracks. It relies on participation by people who have a passion for the topics to be discussed. There is no preplanned list of topics, only time slots and a space in the main meeting room where interested participants propose topics and pick time slots. Web site: Location for the CodeFreeze conference: Kiilopää Fell Center, Inari, Finland
Categories: Communities

Building an Agile Culture of Learning

Does your Agile education begin and end with barely a touch of training?  A number of colleagues have told me that in their companies, Agile training ranged from 1 hour to 1 day.  Some people received 2 days of Scrum Master training. With this limited training, they were expected to implement and master the topic.  Agile isn’t simply a process or skill that can be memorized and applied. It is a culture shift. Will this suffice for a transformation to Agile?
Education is an investment in your people.  A shift in culture requires an incremental learning approach that spans time.  What works in one company doesn’t work in another. A learning culture should be an intrinsic part of your Agile transformation that includes skills, roles, process, culture and behavior education with room to experience and experiment.
An Agile transformation requires a shift toward a continuous learning culture which will give you wings to soar!  You need a combination of training, mentoring, coaching, experimenting, reflecting, and giving back. These education elements can help you become a learning enterprise.  Let's take a closer look at each:
Training is applied when an enterprise wants to build employee skills, educate employees in their role, or roll out a process. It is often event driven and a one-way transfer of knowledge. What was learned can be undone when you move back into your existing culture.
Coaching helps a team put the knowledge into action and lays the groundwork for transforming the culture. Coaching provides a two-way communication process so that questions can be asked along the way. A coach can help you course-correct and promote right behaviors for the culture you want.
Mentoring focuses on relationships and building confidence and self-awareness. The mentee invests time by proposing topics to be discussed with the mentor in the relationship. In this two-way communication, deep learning can occur.
Experimenting focuses on trying out the new skills, roles, and mindset in a real-world setting.  This provides first-hand knowledge of what you’ve learned and allows for a better understanding of Agile.
Reflecting focuses on taking the time to consider what you learned whether it is a skill, process, role, or culture, and determine what you can do better and what else you need on your learning journey. 
Giving back occurs when the employee has gained enough knowledge, skills, and experience to start giving back to their community, making the learning circle complete. Helping others highlights a feeling of ownership of the transformation and the learning journey.
It takes a repertoire of educational elements to achieve an Agile culture and become a learning enterprise. When you have people willing to give back, the learning circle has come full circle and your enterprise can soar.


For more Agile related Learning and Education articles, consider reading:

Categories: Blogs

Neo4j: Create dynamic relationship type

Mark Needham - Mon, 10/31/2016 - 00:12

One of the things I’ve often found frustrating when importing data using Cypher, Neo4j’s query language, is that it’s quite difficult to create dynamic relationship types.

Say we have a CSV file structured like this:

load csv with headers from "file:///people.csv" AS row
│row                                                    │
│{node1: Mark, node2: Reshmee, relationship: MARRIED_TO}│
│{node1: Mark, node2: Alistair, relationship: FRIENDS}  │

We want to create relationships with the type specified in the file. Unfortunately, in Cypher we can’t pass in relationship types as parameters, so we have to resort to the FOREACH hack to create our relationships:

load csv with headers from "file:///people.csv" AS row
MERGE (p1:Person {name: row.node1})
MERGE (p2:Person {name: row.node2})
FOREACH(ignoreMe IN CASE WHEN row.relationship = "MARRIED_TO" THEN [1] ELSE [] END |
 MERGE (p1)-[:MARRIED_TO]->(p2))
FOREACH(ignoreMe IN CASE WHEN row.relationship = "FRIENDS" THEN [1] ELSE [] END |
 MERGE (p1)-[:FRIENDS]->(p2))

This works, but:

  1. Looks horrendous
  2. Doesn’t scale particularly well when we have multiple relationship types to deal with

As in my last post the APOC library comes to the rescue again, this time in the form of the apoc.create.relationship procedure.

This procedure allows us to change our initial query to read like this:

load csv with headers from "file:///people.csv" AS row
MERGE (p1:Person {name: row.node1})
MERGE (p2:Person {name: row.node2})
WITH p1, p2, row
CALL apoc.create.relationship(p1, row.relationship, {}, p2) YIELD rel
RETURN rel

Much better!
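When the import is driven from application code rather than from cypher-shell, there is another common workaround: build the query string with the relationship type spliced in, after validating it against a whitelist (since types can't be parameterized, naive string splicing risks query injection). A minimal Python sketch; the helper name and the whitelist are made up for illustration:

```python
# Whitelist of relationship types we expect in the CSV; anything else
# is rejected rather than spliced into the query.
ALLOWED_TYPES = {"MARRIED_TO", "FRIENDS"}

def merge_relationship_query(rel_type):
    """Build a MERGE query for a relationship type known only at runtime."""
    if rel_type not in ALLOWED_TYPES:
        raise ValueError("unexpected relationship type: %s" % rel_type)
    # Node names stay as $parameters; only the validated type is spliced in.
    return (
        "MERGE (p1:Person {name: $node1}) "
        "MERGE (p2:Person {name: $node2}) "
        "MERGE (p1)-[:%s]->(p2)" % rel_type
    )

print(merge_relationship_query("MARRIED_TO"))
```

The generated string would then be passed to the driver along with the `node1`/`node2` parameters. The APOC procedure above is cleaner when the whole import lives in Cypher.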

Categories: Blogs

The Simple Leader: Minimize

Evolving Excellence - Sun, 10/30/2016 - 10:37

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
– Antoine de Saint-Exupery

If you’ve removed the clutter in your life, then the last thing you really want to do is add something. Or so you’d think. Unfortunately, our culture has conditioned us to always want more. We like our toys. We buy impulsively. We take on too many projects. (Well, at least I used to.)

Zen teaches us the value of koko, or austerity. This doesn’t mean you have to live the life of a monk, having no real possessions and relying on the morning alms for nutrition, but it does mean challenging yourself to find the point where you have what you need and nothing more. That point is different for each of us, and an item or expense that is a toy or luxury to one person may be a valuable component of another’s life. The purpose is to become consciously aware, and to make a conscious decision before adding something new.

Most of us have seen the famous photo of Steve Jobs sitting in the middle of his living room on a mat, sipping tea, surrounded by just a single lamp and a couple of books. Living so frugally is probably not how most of us would choose to live, but consider the freedom—and focus—that lifestyle creates (not to mention the fatter savings account, which in itself creates more freedom).

You should have seen the look on my real estate agent’s face years ago when I told her I wanted a house with less storage space and not more. It was obviously the first time she had heard that request, especially from someone looking for a nicer house. After looking at several options that did not fit our vision of a nicer but smaller house, my wife and I decided to build our own. The project has created many new minimization ideas, and our architect is working on a design that minimizes doors and walls, reduces angles and unnecessary trim, and lowers the number of horizontal surfaces upon which clutter can be stacked. All these features will help us reduce our lives’ clutter and avoid the temptation to buy more things.

Categories: Blogs

Horizon Line Group

Agile Elements - Fri, 10/28/2016 - 21:34
We are moving our blog to better reflect the expanded scope of our interests. Businesses looking to the horizon line are working for long-term performance. They have the right perspective to journey across what is in front of them while preparing for the unseen. Expect the same great insights and service as we make this transition. Horizon Line Group: the right business perspective. Please subscribe and continue to explore our thoughts and offerings at .

Categories: Blogs

Build Quality In: The Key to Continuous Delivery in Kanban

The technology world is changing fast, faster than ever. A few years ago, we thought that only...

The post Build Quality In: The Key to Continuous Delivery in Kanban appeared first on Blog | LeanKit.

Categories: Companies

SonarQube Embraces the .NET Ecosystem

Sonar - Fri, 10/28/2016 - 15:05

In the last couple months, we have worked on further improving our already-good support for the .NET ecosystem. In this blog post, I’ll summarize the changes and the product updates that you’re about to see.

C# plugin version 5.4

We moved all functionality previously based on our own tokenizer/parser to Roslyn. This lets us do colorization more accurately and will allow future improvements with less effort. Also, we’re happy to announce the following new features:

  • Added symbol reference highlighting, which has been available for Java source code for a long time.
  • Improved issue reporting with exact issue locations.
  • Added the missing complexity metrics: “complexity in classes” and “complexity in functions”.
  • Finally, we also updated the rule engine (C# analyzer) to the latest version, so you can benefit from the rules already available through SonarLint for Visual Studio.

With these changes you should have the same great user experience in SonarQube for C# that is already available for Java.

VB.NET plugin version 3.0

The VB.NET plugin 2.4 also relied on our own parser implementation, which meant that it didn’t support the VB.NET language features added by the Roslyn team, such as string interpolation and null-conditional operators. This gap resulted in parsing errors on all new constructs, and on some existing ones too, such as async/await and labels followed by statements on the same line. The obvious solution to all these problems was to use Roslyn internally. In the last couple of months, we made the necessary changes, and now the VB.NET plugin uses the same architecture as the C# plugin. Beyond eliminating the parsing errors, this has many additional benefits, enabling the following new features in this version of the VB.NET plugin:

  • Exact issue locations
  • Symbol reference highlighting
  • Colorization based on Roslyn
  • Copy-paste detection based on Roslyn
  • Computation of the previously missing complexity metrics
  • Support for all the coverage and testing tools already available for C#

Additionally, we removed the dependency between the VB.NET and C# plugins, so if you only do VB.NET development, you don’t have to install the C# plugin any more.

While we were at it, we added a few useful new rules to the plugin: S1764, S1871, S1656, and S1862. These rules even found an issue in Roslyn itself.

Scanner for MSBuild version 2.2

Some of the features mentioned above couldn’t be added just by modifying the plugins. We had to improve the Scanner for MSBuild to make the changes possible. At the same time, we fixed many of the small annoyances and a few bugs. Finally, we upgraded the embedded SonarQube Scanner to the latest version, 2.8, so you’ll benefit from all changes made there too (v2.7 changelog, v2.8 changelog).

Additionally, when you use MSBuild14 to build your solution, we no longer need to compute metrics, copy-paste token information, code colorization information, etc. in the Scanner for MSBuild “end step”, so you’ll see a performance improvement there. These computations were moved to the build phase where they can be done more efficiently, so that step will be a little slower, but the overall performance should still be better.

FxCop plugin version 1.0

A final change worth mentioning is that we extracted FxCop analysis from the C# plugin into a dedicated community plugin. This move seems to align with what Microsoft is doing: not developing FxCop any longer. Microsoft’s replacement tool will come in the form of Roslyn analyzers.

Note that we not only extracted the functionality to a dedicated plugin, but also fixed a problem with issues being reported on excluded files.


That’s it. Huge architectural changes with many new features driven by our main goal to support .NET languages to the same extent as we support Java, JavaScript, and C/C++.

Categories: Open Source

Robots bring business and IT together

Xebia Blog - Fri, 10/28/2016 - 14:46
Maybe you’ve already read the diary of one of our mBots; if not, I encourage you to do so first! So, what was this day all about? How did we come to organise this and what did the participants learn?

Changing teams

As companies decide to adopt a more agile way of working, they also start
Categories: Companies

A Whirlwind Tour of the Kotlin Type Hierarchy

Mistaeks I Hav Made - Nat Pryce - Fri, 10/28/2016 - 10:08
Kotlin has plenty of good language documentation and tutorials. But I’ve not found an article that describes in one place how Kotlin’s type hierarchy fits together. That’s a shame, because I find it to be really neat [1]. Kotlin’s type hierarchy has very few rules to learn. Those rules combine together consistently and predictably. Thanks to those rules, Kotlin can provide useful, user-extensible language features – null safety, polymorphism, and unreachable code analysis – without resorting to special cases and ad-hoc checks in the compiler and IDE.

Starting from the Top

All types of Kotlin object are organised into a hierarchy of subtype/supertype relationships. At the “top” of that hierarchy is the abstract class Any. For example, the types String and Int are both subtypes of Any. Any is the equivalent of Java’s Object class. Unlike Java, Kotlin does not draw a distinction between “primitive” types, which are intrinsic to the language, and user-defined types. They are all part of the same type hierarchy.

If you define a class that is not explicitly derived from another class, the class will be an immediate subtype of Any.

class Fruit(val ripeness: Double)

If you do specify a base class for a user-defined class, the base class will be the immediate supertype of the new class, but the ultimate ancestor of the class will be the type Any.

abstract class Fruit(val ripeness: Double)
class Banana(ripeness: Double, val bendiness: Double): Fruit(ripeness)
class Peach(ripeness: Double, val fuzziness: Double): Fruit(ripeness)

If your class implements one or more interfaces, it will have multiple immediate supertypes, with Any as the ultimate ancestor.

interface ICanGoInASalad
interface ICanBeSunDried
class Tomato(ripeness: Double): Fruit(ripeness), ICanGoInASalad, ICanBeSunDried

The Kotlin type checker enforces subtype/supertype relationships. For example, you can store a subtype into a supertype variable:

var f: Fruit = Banana(bendiness=0.5)
f = Peach(fuzziness=0.8)

But you cannot store a supertype value into a subtype variable:

val b = Banana(bendiness=0.5)
val f: Fruit = b
val b2: Banana = f // Error: Type mismatch: inferred type is Fruit but Banana was expected

Nullable Types

Unlike Java, Kotlin distinguishes between “non-null” and “nullable” types. The types we’ve seen so far are all “non-null”. Kotlin does not allow null to be used as a value of these types. You’re guaranteed that dereferencing a reference to a value of a “non-null” type will never throw a NullPointerException. The type checker rejects code that tries to use null or a nullable type where a non-null type is expected. For example:

var s : String = null // Error: Null can not be a value of a non-null type String

If you want a value to maybe be null, you need to use the nullable equivalent of the value type, denoted by the suffix ‘?’. For example, the type String? is the nullable equivalent of String, and so allows all String values plus null.

var s : String? = null
s = "foo"
s = null
s = "bar"

The type checker ensures that you never use a nullable value without having first tested that it is not null. Kotlin provides operators to make working with nullable types more convenient. See the Null Safety section of the Kotlin language reference for examples.

When non-null types are related by subtyping, their nullable equivalents are also related in the same way. For example, because String is a subtype of Any, String? is a subtype of Any?, and because Banana is a subtype of Fruit, Banana? is a subtype of Fruit?.

Just as Any is the root of the non-null type hierarchy, Any? is the root of the nullable type hierarchy. Because Any? is the supertype of Any, Any? is the very top of Kotlin’s type hierarchy.

A non-null type is a subtype of its nullable equivalent. For example, String, as well as being a subtype of Any, is also a subtype of String?. This is why you can store a non-null String value into a nullable String? variable, but you cannot store a nullable String? value into a non-null String variable. Kotlin’s null safety is not enforced by special rules, but is an outcome of the same subtype/supertype rules that apply between non-null types. This applies to user-defined type hierarchies as well.

Unit

Kotlin is an expression-oriented language: all control flow statements (apart from variable assignment, unusually) are expressions. Kotlin does not have void functions, like Java and C. Functions always return a value. Functions that don’t actually calculate anything – those called only for their side effects, for example – return Unit, a type that has a single value, also called Unit.

Most of the time you don’t need to explicitly specify Unit as a return type or return Unit from functions. If you write a function with a block body and do not specify the result type, the compiler will treat it as a Unit function. Otherwise the compiler will infer it.

fun example() {
    println("block body and no explicit return type, so returns Unit")
}
val u: Unit = example()

There’s nothing special about Unit. Like any other type, it’s a subtype of Any. It can also be made nullable, so it is a subtype of Unit?, which is a subtype of Any?. The type Unit? is a strange little edge case, a result of the consistency of Kotlin’s type system. It has only two members: the Unit value and null. I’ve never found a need to use it explicitly, but the fact that there is no special case for “void” in the type system makes it much easier to treat all kinds of functions generically.

Nothing

At the very bottom of the Kotlin type hierarchy is the type Nothing. As its name suggests, Nothing is a type that has no instances. An expression of type Nothing does not result in a value.

Note the distinction between Unit and Nothing. Evaluation of an expression of type Unit results in the singleton value Unit. Evaluation of an expression of type Nothing never returns at all. This means that any code following an expression of type Nothing is unreachable. The compiler and IDE will warn you about such unreachable code.

What kinds of expression evaluate to Nothing? Those that perform control flow. For example, the throw keyword interrupts the calculation of an expression and throws an exception out of the enclosing function. A throw is therefore an expression of type Nothing.

By having Nothing as a subtype of every other type, the type system allows any expression in the program to actually fail to calculate a value. This models real-world eventualities, such as the JVM running out of memory while calculating an expression, or someone pulling out the computer’s power plug. It also means that we can throw exceptions from within any expression.

fun formatCell(value: Double): String =
    if (value.isNaN())
        throw IllegalArgumentException("$value is not a number")
    else
        value.toString()

It may come as a surprise to learn that the return statement has the type Nothing. Return is a control flow statement that immediately returns a value from the enclosing function, interrupting the evaluation of any expression of which it is a part.

fun formatCellRounded(value: Double): String {
    val rounded: Long = if (value.isNaN()) return "#ERROR" else Math.round(value)
    return rounded.toString()
}

A function that enters an infinite loop or kills the current process has a result type of Nothing. For example, the Kotlin standard library declares the exitProcess function as:

fun exitProcess(status: Int): Nothing

If you write your own function that returns Nothing, the compiler will check for unreachable code after a call to your function just as it does with built-in control flow statements.

inline fun forever(action: () -> Unit): Nothing {
    while (true) action()
}

fun example() {
    forever {
        println("doing...")
    }
    println("done") // Warning: Unreachable code
}

Like null safety, unreachable code analysis is not implemented by ad-hoc, special-case checks in the IDE and compiler, as it has to be in Java. It’s a function of the type system.

Nullable Nothing?

Nothing, like any other type, can be made nullable, giving the type Nothing?. Nothing? can only contain one value: null. In fact, Nothing? is the type of null. Nothing? is the ultimate subtype of all nullable types, which lets the value null be used as a value of any nullable type.

Conclusion

When you consider it all at once, Kotlin’s entire type hierarchy can feel quite complicated. But never fear! I hope this article has demonstrated that Kotlin has a simple and consistent type system. There are few rules to learn: a hierarchy of supertype/subtype relationships with Any? at the top and Nothing at the bottom, and subtype relationships between non-null and nullable types. That’s it. There are no special cases. Useful language features like null safety, object-oriented polymorphism, and unreachable code analysis all result from these simple, predictable rules. Thanks to this consistency, Kotlin’s type checker is a powerful tool that helps you write concise, correct programs.

[1] “Neat” meaning “done with or demonstrating skill or efficiency”, rather than the Kevin Costner backstage at a Madonna show sense of the word.
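As a small addendum (my own sketch, not from the article): Nothing being a subtype of every other type is exactly what makes common idioms like a fail helper on the right of the elvis operator type-check. Names here are illustrative.

```kotlin
// Because Nothing is a subtype of Int, both branches of the elvis operator
// unify to Int, so the whole expression has type Int.
fun fail(message: String): Nothing = throw IllegalStateException(message)

fun lookup(prices: Map<String, Int>, key: String): Int =
    prices[key] ?: fail("no price for $key")

fun main() {
    println(lookup(mapOf("apple" to 3), "apple")) // 3
}
```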
Categories: Blogs

CQRS/MediatR implementation patterns

Jimmy Bogard - Thu, 10/27/2016 - 18:36

Early on in the CQRS/ES days, I saw a lot of questions on modeling problems with event sourcing. Specifically, trying to fit every square modeling problem into the round hole of event sourcing. This isn’t anything against event sourcing, but more that I see teams try to apply a single modeling and usage strategy across the board for their entire application.

Usually, these questions were answered a little derisively – “you shouldn’t use event sourcing if your app is a simple CRUD app”. But that belied the truth: no app I’ve worked with is JUST a DDD app, or JUST a CRUD app, or JUST an event sourcing app. There are pockets of complexity with varying degrees along varying axes. Some areas have query complexity, some have modeling complexity, some have data complexity, some have behavior complexity, and so on. We try to choose a single modeling strategy for the entire application, and it doesn’t work. When teams realize this, I typically see people break things out into bounded contexts or microservices.


With this approach, you break your system into individual bounded contexts or microservices, based on the need to choose a single modeling strategy for the entire context/app.

This is completely unnecessary, and counter-productive!

A major aspect of CQRS and MediatR is modeling your application into a series of requests and responses. Commands and queries make up the requests, and results and data are the responses. Just to review, MediatR provides a single interface to send requests to, and routes those requests to in-process handlers. It removes the need for a myriad of service/repository objects for single-purpose request handlers (F# people model these just as functions).
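MediatR itself is a .NET library, so as a rough sketch of the idea only, here is the request/handler/mediator shape in Kotlin. All names are illustrative, not MediatR's actual API.

```kotlin
// Minimal in-process mediator sketch: a single send() entry point routes each
// request type to the one handler registered for it.
interface Request<R>

fun interface Handler<T : Request<R>, R> {
    fun handle(request: T): R
}

class Mediator {
    private val handlers = mutableMapOf<Class<*>, Handler<*, *>>()

    fun <T : Request<R>, R> register(type: Class<T>, handler: Handler<T, R>) {
        handlers[type] = handler
    }

    @Suppress("UNCHECKED_CAST")
    fun <R> send(request: Request<R>): R {
        val handler = handlers[request::class.java]
            ?: error("No handler registered for ${request::class.java.simpleName}")
        return (handler as Handler<Request<R>, R>).handle(request)
    }
}

// One distinct request class per user action, with one distinct handler.
data class GetOrderTotal(val orderId: Int) : Request<Double>

fun main() {
    val mediator = Mediator()
    mediator.register(GetOrderTotal::class.java) { req: GetOrderTotal -> req.orderId * 10.0 }
    println(mediator.send(GetOrderTotal(3))) // 30.0
}
```

The point of the single `send()` pinch point is that callers depend only on request and response shapes, never on which handler (or which strategy inside it) does the work.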

Breaking down our handlers

Usage of MediatR with CQRS is straightforward. You build distinct request classes for every request in your system (these are almost always mapped to user actions), and build a distinct handler for each.


Each request and response is distinct, and I generally discourage reuse since my requests route to front-end activities. If the front-end activities are reused (i.e. an approve button on the order details and the orders list), then I can reuse the requests. Otherwise, I don’t reuse.

Since I’ve built isolation between individual requests and responses, I can choose different patterns based on each request.


Each request handler can determine the appropriate strategy based on *that request*, isolated from decisions in other handlers. I avoid abstractions that stretch across layers, like repositories and services, as these tend to lock me in to a single strategy for the entire application.

In a single application, different handlers can execute against entirely different data stores and models; it’s entirely up to you. From the application’s view, everything is still modeled in terms of requests and responses.


The application simply doesn’t care about the implementation details of a handler – nor the modeling that went into whatever generated the response. It only cares about the shape of the request and the shape (and implications and guarantees of behavior) of the response.

Now obviously there is some understanding of the behavior of the handler – we expect the side effects of the handler based on the direct or indirect outputs to function correctly. But how they got there is immaterial. It’s how we get to a design that truly focuses on behaviors and not implementation details. Our final picture looks a bit more reasonable:


Instead of forcing ourselves to rely on a single pattern across the entire application, we choose the right approach for the context.
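To make that concrete, here is a hedged sketch of two handlers in one application choosing different strategies (illustrative names, and Kotlin rather than MediatR's C#):

```kotlin
// Handler A: plain "transaction script" over raw SQL - fine for a simple read.
interface Db {
    fun query(sql: String): List<Map<String, Any>>
}

class TopCustomersHandler(private val db: Db) {
    fun handle(limit: Int): List<String> =
        db.query("SELECT name FROM customers ORDER BY revenue DESC LIMIT $limit")
            .map { it["name"] as String }
}

// Handler B: a small domain model, because approval carries a real business rule.
class Order(val id: Int, var approved: Boolean = false) {
    fun approve() {
        check(!approved) { "Order $id is already approved" }
        approved = true
    }
}

class ApproveOrderHandler(private val orders: MutableMap<Int, Order>) {
    fun handle(orderId: Int): Order {
        val order = orders.getValue(orderId)
        order.approve() // the rule lives in the model, not the handler
        return order
    }
}
```

Neither handler knows or cares which strategy the other uses; each is free to change independently.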

Keeping it honest

One last note – it’s easy in this sort of system to devolve into ugly handlers.


Driving all our requests through a single mediator pinch point doesn’t mean we absolve ourselves of the responsibility of thinking about our modeling approach. We shouldn’t just pick transaction script for every handler just because it’s easy. We still need that “Refactor” step in TDD, so it’s important to think about our model before we write our handler and pay close attention to code smells after we write it.

Listen to the code in the handler – if you’ve chosen a bad approach, refactor! You’ve got a test that verifies the behavior from the outermost shell – request in, response out – so you have an implementation-agnostic test providing a safety net for refactoring. If there’s too much going on in the handler, push it down into the domain. If it’s better served with a different model altogether, refactor in that direction. If the query is gnarly and would be better expressed in SQL, rewrite it!

Like any architecture, one built on CQRS and MediatR can be easy to abuse. No architecture prevents bad design. We’ll never escape the need for pull requests and peer reviews and just standard refactoring techniques to improve our designs.

With CQRS and MediatR, the handler isolation supplies the enablement we need to change direction as needed based on each individual context and situation.

Categories: Blogs

Knowledge Sharing

SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP, and Kanban.