
Blogs

SPC Reading List

Agile Product Owner - Fri, 08/07/2015 - 20:46

We hope you are having a safe and enjoyable summer!  Don’t forget the hat, sunglasses and sunscreen!  We will be updating our SPC Reading list and thought it would be a good idea to share this with you now.  Summer is a great time to catch up on your reading, while enjoying SAFe time in the sun.

  • Principles of Product Development Flow, Don Reinertsen
  • The Lean Machine, Dantar Oosterwal
  • Leading SAFe Live Lessons (Video Book),  Dean Leffingwell
  • Lean Product and Process Development, Allen Ward and Durward Sobek II
  • The Goal, Eliyahu Goldratt
  • Out of the Crisis, E. Deming
  • Agile Software Requirements, Dean Leffingwell
  • Switch, Chip Heath and Dan Heath
  • The Five Dysfunctions of a Team, Patrick Lencioni

For those in the Southern Hemisphere, we haven’t forgotten about you and hope that you are having a mild and safe winter.  When it’s cold out, there’s nothing like a good book to read under the warm covers or by the fireplace.

Always be SAFe,

Richard

Categories: Blogs

Making Workflow Explicit In JavaScript (Repost)

Derick Bailey - new ThoughtStream - Fri, 08/07/2015 - 18:13

A long time ago, in what seems to be a previous life at this point, I wrote a small blog post about modeling and creating an explicit return value from a dialog form in a Windows application. Fast forward a lifetime and I’m finding that this knowledge and experience is resurfacing in my daily work. Whether it’s Backbone and Marionette or Node.js and RabbitMQ, I’ve used this pattern that I first learned in WinForms and my applications have benefited greatly from it.

A Poorly Constructed Workflow

It seems to be common in the JavaScript world to have very poorly defined and constructed workflow in applications. We take one object and build some functionality. Then when the next part of the app needs to fire up, we call it directly from the first object. Then when the next part of the app is requested, we call that object from the second one. And we continue on down this path ad infinitum, creating a mess of tightly coupled concerns, tightly coupled concepts, tightly coupled objects, and a fragile system whose higher level process can only be understood by reading the implementation details.

Consider this example: a human resources application allows you to add a new employee and select a manager for the employee. After entering a name and email address, the form to select the manager should be shown. When save is clicked, the employee should be created. A crude, but all too common implementation of this workflow might look something like this:
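
Something along these lines, sketched here with two Backbone views (the Employee model, the form field selectors, and the #main container are illustrative stand-ins):

var Employee = Backbone.Model.extend({
  urlRoot: "/employees"
});

var EmployeeInfoForm = Backbone.View.extend({
  events: { "click .next": "nextClicked" },

  nextClicked: function(e){
    e.preventDefault();

    // gather the employee info from this form ...
    var employeeData = {
      name: this.$(".name").val(),
      email: this.$(".email").val()
    };

    // ... then reach out and fire up the next step of the process directly
    var managerForm = new SelectManagerForm({ employeeData: employeeData });
    $("#main").html(managerForm.render().el);
  }
});

var SelectManagerForm = Backbone.View.extend({
  events: { "click .save": "saveClicked" },

  initialize: function(options){
    this.employeeData = options.employeeData;
  },

  saveClicked: function(e){
    e.preventDefault();

    // this view also knows how to create the employee record on the server
    this.employeeData.managerId = this.$(".managers").val();
    var employee = new Employee(this.employeeData);
    employee.save();
  }
});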

Can you quickly and easily describe the workflow in this example? If you can, you probably paid attention to the description above. Look at the code again, and follow the workflow through the code.

Personally, I have to spend a fair amount of time looking at the implementation details of both views in order to see what’s going on. I have to piece together the bits of the workflow from multiple places to form a more coherent, high level overview in my head. It’s not easy for me to see what’s going on. Every time I look at one part of the code, I have to mentally dig through implementation details that cloud the vision of the high level workflow. This is time consuming and prone to mistakes.

Too Many Concerns

This code has a number of different concerns mixed into very few objects, and those concerns are split apart in some rather unnatural ways. To understand the complete concern, code from different parts of the app has to be mentally put back together. But what are the concerns that are presented in this code?

The first set of concerns is found in the high level workflow:

  • Enter employee info
  • Select manager
  • Create employee

The second set of concerns is what should be the implementation details:

  • Show the EmployeeInfoForm 
  • Allow the user to enter a name and email address
  • When “next” is clicked, gather the name and email address of the employee
  • Then show the SelectManagerForm with a list of possible managers to select from
  • When “save” is clicked, grab the selected manager
  • Then take all of the employee information and create a new employee record on the server

This list doesn’t even cover all of the edge cases or common scenarios. What happens when the user hits cancel on the first screen? Or on the second? What about validating the email address? Add these steps to the list of implementation details and things get out of hand very quickly.

By implementing both the high level workflow and the implementation details in the views, the ability to see the high level workflow at a glance has been destroyed. This will cause problems: details will be forgotten when changing the system, code will be broken, and features will be missing. Adding more to the process – like the validation or cancel buttons that are already missing – will only make it more complicated.

This situation has to change.

Modeling An Explicit Workflow In Code

Instead of tightly coupling the workflow to the implementations, the high level workflow should be extracted. The governing process for this part of the application should be made explicit in the code, in a way that makes it easy to see the over-all flow.

Think of a workflow diagram as an example. The diagram doesn’t show all of the details. It shows the high level steps. Each step may be composed of additional detail, but the diagram shows it simplified into single boxes.

Code should be modeled in the same manner. The workflow should be high level, showing the basic steps. The details of each step should be modeled into other objects that are called by the workflow. This makes it easier to change the workflow and to change any specific implementation detail without having to rework the higher level process.

Consider this code, for example:
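
One possible shape for it is a plain workflow object whose methods are the high level steps, wired up to the Employee model and the two form views sketched a little further down (the AddEmployeeWorkflow name and the region object are illustrative):

var AddEmployeeWorkflow = function(region){
  // "region" is assumed to be any object with a show(view) method
  // that renders a view into the page
  this.region = region;
};

_.extend(AddEmployeeWorkflow.prototype, Backbone.Events, {
  run: function(){
    this.employee = new Employee();
    this.showEmployeeInfo();
  },

  // step 1: enter employee info
  showEmployeeInfo: function(){
    var form = new EmployeeInfoForm({ model: this.employee });
    this.listenTo(form, "next", this.showSelectManager);
    this.region.show(form);
  },

  // step 2: select a manager
  showSelectManager: function(){
    var form = new SelectManagerForm({ model: this.employee });
    this.listenTo(form, "save", this.saveEmployee);
    this.region.show(form);
  },

  // step 3: create the employee record on the server
  saveEmployee: function(){
    this.employee.save();
  }
});

// kick the process off ("mainRegion" stands in for whatever
// region / container abstraction the app already uses)
var workflow = new AddEmployeeWorkflow(mainRegion);
workflow.run();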

Here, the high level workflow is easier to see. After the employee info is entered, the manager selection comes up next. When that completes, the employee info is saved. It all looks very clean and simple. More importantly, the additional features like validation and cancel buttons can be added to the code. The validation may be a detail that happens in the individual form, but the cancel button is likely to be a part of the high level workflow.

From here, at the workflow level, moving to the details can be accomplished with a couple of Backbone views and a model for the details:
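
Here is a sketch of those pieces, where the views only capture their own data and trigger events, leaving the workflow object above to decide what happens next (the selectors and the /employees endpoint are placeholders):

var Employee = Backbone.Model.extend({
  urlRoot: "/employees"
});

var EmployeeInfoForm = Backbone.View.extend({
  events: { "click .next": "nextClicked" },

  nextClicked: function(e){
    e.preventDefault();

    // capture this form's data on the shared model,
    // then announce that this step is complete
    this.model.set({
      name: this.$(".name").val(),
      email: this.$(".email").val()
    });
    this.trigger("next");
  }
});

var SelectManagerForm = Backbone.View.extend({
  events: { "click .save": "saveClicked" },

  saveClicked: function(e){
    e.preventDefault();

    // record the selected manager and announce completion;
    // actually saving the employee is the workflow's job
    this.model.set("managerId", this.$(".managers").val());
    this.trigger("save");
  }
});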

(I’ve omitted some of the details of the views and model, but I think the idea is there)

The Benefits

There are a number of benefits to writing code in this manner:

  • It’s easy to see the high level workflow
  • You don’t have to worry about all of the implementation details for each of the views or the model when dealing with the workflow
  • You can change any of the individual views without affecting the rest of the workflow
  • Adding new features or process to the workflow is easier
  • (and more!) 

Of all the benefits listed, and the ones that I am not thinking of at the moment, the most important one may be the ability to see the high level workflow. Six months from now – or, if you’re like me, six hours from now – you won’t remember that you have to trace through five different views and three different custom objects and models to piece together the workflow. But if you have a workflow modeled explicitly, you’re more likely to pick up the code and understand the process quickly.

The Drawbacks

Everything has a price, right?

You will end up with a few more objects and a few more methods to keep track of. There’s a mild overhead associated with this in the world of browser-based JavaScript, but it’s likely to be so small that you won’t notice – especially with the rapid rate of optimization in JavaScript engines.

The real cost, though, is learning new implementation patterns to get this working. That takes time. Sure, looking at an example like this is easy. But it’s a simple example and a simple implementation. When you get down to actually trying to write this style of code for yourself, it will be more complicated. There’s no simple answer for this problem, either. You have to work through it in order to improve your application design.

It’s Worth It

In the end and in spite of potential drawbacks and learning curves, explicitly modeling workflow in your application is important.

It really doesn’t matter what language you’re writing in, either. I’ve shown these examples in JavaScript and Backbone because that’s where I’m seeing a lot of need for this, recently. But I’ve applied these same rules to C#/.NET, Ruby and other languages for years.

As with any good architecture and philosophy, the principles are the same across languages and other boundaries. It’s only the implementation details that change.

 

(This article was originally published on LosTechies, and has been revised and edited here)

Categories: Blogs

Spark: Convert RDD to DataFrame

Mark Needham - Thu, 08/06/2015 - 23:11

As I mentioned in a previous blog post I’ve been playing around with the Databricks Spark CSV library and wanted to take a CSV file, clean it up and then write out a new CSV file containing some of the columns.

I started by processing the CSV file and writing it into a temporary table:

import org.apache.spark.sql.{SQLContext, Row, DataFrame}
 
val sqlContext = new SQLContext(sc)
val crimeFile = "Crimes_-_2001_to_present.csv"
sqlContext.load("com.databricks.spark.csv", Map("path" -> crimeFile, "header" -> "true")).registerTempTable("crimes")

I wanted to get to the point where I could call the following function which writes a DataFrame to disk:

import java.io.File
import org.apache.hadoop.fs.FileUtil
 
private def createFile(df: DataFrame, file: String, header: String): Unit = {
  FileUtil.fullyDelete(new File(file))
  val tmpFile = "tmp/" + System.currentTimeMillis() + "-" + file
  df.distinct.save(tmpFile, "com.databricks.spark.csv")
}

The first file only needs to contain the primary type of crime, which we can extract with the following query:

val rows = sqlContext.sql("select `Primary Type` as primaryType FROM crimes LIMIT 10")
 
rows.collect()
res4: Array[org.apache.spark.sql.Row] = Array([ASSAULT], [ROBBERY], [CRIMINAL DAMAGE], [THEFT], [THEFT], [BURGLARY], [THEFT], [BURGLARY], [THEFT], [CRIMINAL DAMAGE])

Some of the primary types have trailing spaces which I want to get rid of. As far as I can tell Spark’s variant of SQL doesn’t have the LTRIM or RTRIM functions but we can map over ‘rows’ and use the String ‘trim’ function instead:

rows.map { case Row(primaryType: String) => Row(primaryType.trim) }
res8: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[29] at map at DataFrame.scala:776

Now we’ve got an RDD of Rows which we need to convert back to a DataFrame again. ‘sqlContext’ has a function which we might be able to use:

sqlContext.createDataFrame(rows.map { case Row(primaryType: String) => Row(primaryType.trim) })
 
<console>:27: error: overloaded method value createDataFrame with alternatives:
  [A <: Product](data: Seq[A])(implicit evidence$4: reflect.runtime.universe.TypeTag[A])org.apache.spark.sql.DataFrame <and>
  [A <: Product](rdd: org.apache.spark.rdd.RDD[A])(implicit evidence$3: reflect.runtime.universe.TypeTag[A])org.apache.spark.sql.DataFrame
 cannot be applied to (org.apache.spark.rdd.RDD[org.apache.spark.sql.Row])
              sqlContext.createDataFrame(rows.map { case Row(primaryType: String) => Row(primaryType.trim) })
                         ^

These are the signatures we can choose from:

(screenshot of the overloaded createDataFrame signatures)

If we want to pass in an RDD of type Row we’re going to have to define a StructType or we can convert each row into something more strongly typed:

case class CrimeType(primaryType: String)
 
sqlContext.createDataFrame(rows.map { case Row(primaryType: String) => CrimeType(primaryType.trim) })
res14: org.apache.spark.sql.DataFrame = [primaryType: string]

Great, we’ve got our DataFrame which we can now plug into the ‘createFile’ function like so:

createFile(
  sqlContext.createDataFrame(rows.map { case Row(primaryType: String) => CrimeType(primaryType.trim) }),
  "/tmp/crimeTypes.csv",
  "crimeType:ID(CrimeType)")

We can actually do better though!

Since we’ve got an RDD of a specific class we can make use of the ‘rddToDataFrameHolder’ implicit function and then the ‘toDF’ function on ‘DataFrameHolder’. This is what the code looks like:

import sqlContext.implicits._
createFile(
  rows.map { case Row(primaryType: String) => CrimeType(primaryType.trim) }.toDF(),
  "/tmp/crimeTypes.csv",
  "crimeType:ID(CrimeType)")

And we’re done!

Categories: Blogs

Article published on Information Age: Why expert developers make the worst tech leads

thekua.com@work - Thu, 08/06/2015 - 09:46

Promoting your best or most experienced developer to lead a technology team may seem like a logical move, but it could prove to be a disastrous decision

I recently published a new article on the Information Age website: “Why expert developers make the worst tech leads.” You can read more here.

Categories: Blogs

Hugh MacLeod’s Illustrated Guide to Life Inside Microsoft

J.D. Meier's Blog - Wed, 08/05/2015 - 20:32

If you remember the little blue monster that says, “Microsoft, change the world or go home,” you know Hugh MacLeod.

Hugh is the creative director at Gaping Void.  I got to meet Hugh, along with Jason Korman (CEO), and Jessica Higgins, last week to talk through some ideas.

Hugh uses cartoons as a snappy and insightful way to change the world.  You can think of it as “Motivational Art for Smart People.”

The Illustrated Guide to Life Inside Microsoft

One of Hugh’s latest creations is the Illustrated Guide to Life Inside Microsoft.  It’s a set of cards you can flip, with a cartoon on the front and a quote on the back.  It’s truly insight at your fingertips.

I like them all … from “Microsoft is a ‘Get Stuff Done’ company” to “Software is the thing between the things”, but my favorite is:

“It’s more fun being the underdog.”

It’s a reminder that you can take the dog out of the fight, but you can’t take the fight out of the dog. As long as you’re still in the game, and you are truly a learning company, one that continues to grow and evolve, you can change the world … your unique way.

Tweaking People in the Right Direction

Hugh is an observer and participant who inspires and prods people in the right direction …

Via Hugh MacLeod Connects the Dots:

“’Attaching art to business outcomes can articulate deep emotions and bring things to light fast,’ said MacLeod. To get there requires MacLeod immersing himself within a company, so he can look for what he calls ‘freaks of light’—epiphanies about a company that express the collected motivations of its people. ‘My cartoons make connections,’ said MacLeod. ‘I create work in an ambient way to tweak people in the right direction.’”

Via Hugh MacLeod Connects the Dots:

“He’s an observer and a participant, mingling temporarily within a culture to better understand it. He’s also a listener, taking your thoughts and combining them with his own to piece together the puzzle he is trying to solve about the human condition and business environment.”

Check out the Illustrated Guide to Life Inside Microsoft and some of the ideas just might surprise you, or, at least inspire and motivate you today – you smart person, you.

Categories: Blogs

AutoMapper 4.0 Released

Jimmy Bogard - Wed, 08/05/2015 - 18:46

Release notes here: https://github.com/AutoMapper/AutoMapper/releases/tag/v4.0.0

On NuGet of course.

This was a big release – I undertook the exciting challenge of supporting all the new platforms from VS 2015, and in the process, collapsed all of the projects/assemblies into exactly one assembly per target framework. It’s much easier to manage on my side with just the one project instead of many different ones.

I have to use compiler directives instead of feature discovery, but it’s a tradeoff I’m happy to make.

There’s a ton of small bug fixes in this release, quite a few enhancements and a few larger new features. Configuration performance went up quite a bit, and I’ve laid the groundwork to make in-memory mapping a lot faster in the future. LINQ projection has gotten to the point where you can do anything that the major query providers support.

Enjoy!


Categories: Blogs

Pitfall of Scrum: ScrumMaster as Contributor

Learn more about our Scrum and Agile training sessions on WorldMindware.com

The ScrumMaster is like a fire-fighter: it’s okay for them to be idle – just watching the team – waiting for an emergency obstacle. Taking on tasks tends to distract the ScrumMaster from the job of helping the team follow the rules of Scrum, from the job of vigorously removing obstacles, and from the job of protecting the team from interruptions. Let’s look at each of these aspects of the ScrumMaster role in turn:

The ScrumMaster Helps the Team Follow the Rules of Scrum

The ScrumMaster is a process facilitator. The Scrum process, while simple to describe, is not easy to do. As the Scrum Guide says:

Scrum is:

Lightweight

Simple to understand

Difficult to master

The ScrumMaster helps the Scrum Team and the organization to master the Scrum framework. Helping everyone understand Scrum and respect its rules is a first step. Some of the rules are particularly challenging. In some companies, being on time for meetings and ending them on time is hard. Scrum requires this. The ScrumMaster helps the team do this. In some companies, meeting deadlines, even short ones, is difficult. Scrum requires this every Sprint. The ScrumMaster helps the team do this. In some companies, giving time to improving things is hard. Scrum Teams do retrospectives. The ScrumMaster ensures that the team takes the time for this.

Of course, following the rules is hard for people. Even just the concept of “rules” is hard for some people. Everyone has the right to do whatever they want. Well, if you aren’t following the rules of Scrum you aren’t doing Scrum. So for some teams, just getting to the point of being willing to follow the rules of Scrum is a big step. The ScrumMaster needs to help with motivation.

The ScrumMaster is Vigorously Removing Obstacles

The Scrum Team is going to be working hard to meet a goal for the Sprint. As they work, they are going to work through many challenges and problems on their own. However, the team will start to encounter obstacles as well. These obstacles or impediments come from a few sources:

  1. Dependencies on other people or parts of the organization outside the Scrum Team.
  2. Skill gaps within the team.
  3. Burdensome bureaucracy imposed by the organization.
  4. Lack of resources such as tools, equipment, licenses, or even access to funds.

The ScrumMaster needs to work through these.

On a panel talk on Saturday one person said “the scrum master is an administrator, moving cards, updating the burn down. It is an easy job, I think my son could do it.” I then rebutted his remarks….

The ScrumMaster will tackle enterprise operations for their slow error prone deployment process, tackle Sarbox [Sarbanes-Oxley] compliance policy that has been way over-engineered to the point of slowing dev to a crawl, telling the PMO that 3 sets of reports is waste, exhorting the team to try to do unit tests in ABAP (SAP cobol), etc.

Robin Dymond, CST – (Scrum Training and Coaching Community Google Group, Sep. 23, 2009)

The ScrumMaster is Protecting the Team from Interruptions

Every organization seems to have more work than their staff have the capacity to deliver. Staff are often asked to task switch repeatedly over the course of a day or even in a single hour. Sometimes people are “allocated” to multiple projects simultaneously. This breaks the Scrum value of focus. The ScrumMaster needs to protect the team from interruptions or anything else that would break their focus.

But what should the Scrum Team members be focused on? Simply: the goal of a single Sprint. And a single Scrum Team is focused on a single product. The Product Owner should be the point of contact for any and all requests for the time and effort of a Scrum Team. The ScrumMaster needs to re-direct any interruptions to the Product Owner. The Product Owner decides if:

  • the interruption results in a new Product Backlog Item, OR
  • the interruption is irrelevant to the product and simply discarded, OR
  • the interruption is important enough to cancel the current Sprint.

There are no other options in Scrum for handling requests for work from the Scrum Team (or any member of the Scrum Team).

Contribution as Distraction for the ScrumMaster

Any time the ScrumMaster starts to contribute to the product development effort directly, the ScrumMaster is distracted from the other three duties. Although simple, following the rules of Scrum is not easy. Getting distracted from the duty of helping the team follow the rules of Scrum means that the team is likely to develop bad habits or regress to non-Scrum behaviour. Vigorously removing obstacles is usually a huge job all on its own. Most Scrum Teams have huge organizational obstacles that must be worked on. Some of these obstacles will take years of persistent effort to deal with. The ScrumMaster cannot become distracted by tactical details of product development. Protecting the team from interruptions means the ScrumMaster must have broad awareness, at all times, of what is happening with the team. If a team member is interrupted by a phone call, an email, or someone walking into the Scrum team room, the ScrumMaster needs to notice it immediately.

Whenever a ScrumMaster takes on a product development task, focus on the role is lost and a simple conflict of interest is created. If the team has “committed” to deliver certain Product Backlog Items at the end of a Sprint, then that feeling of commitment may lead a ScrumMaster to focus on the wrong things.

The time of a ScrumMaster is an investment in continuous improvement. Letting a ScrumMaster contribute to the work of the team dilutes that investment.

This article is a follow-up article to the 24 Common Scrum Pitfalls written back in 2011.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!

The post Pitfall of Scrum: ScrumMaster as Contributor appeared first on Agile Advice.

Categories: Blogs

focus

Derick Bailey - new ThoughtStream - Wed, 08/05/2015 - 12:00

The month of July was almost entirely unfocused for me. I spent more than a week on “vacation” with my family, and I spent another week sick and stuck in my bed. With more than two weeks of downtime, essentially, I feel lost. I feel like I’m not sure of what I’m doing … and I’m scared of that feeling.

focus

It’s more than a bit scary, honestly – the feeling of being unfocused and not knowing what I’m supposed to be working on. I panicked a bit, too, and on the Entreprogrammers podcast, I ended up talking with Josh and John for almost 3 hours because I was panicking about being unfocused.

An Existential Crisis

The kind of focus I’m talking about isn’t the “put twitter down, get back to work” kind. No, I’m talking about the big picture… the “What is my purpose? Why am I here? What is the real value that my business, my blog and my efforts bring to the community?”

It turns out that the 2+ weeks of down time were what I actually needed – even if they weren’t what I wanted. Having that much down time gave me a chance to step back and think for a bit. And the questions I came up with while I was thinking scared me because I was seriously asking “What am I doing? Why?”

I was having a mild existential crisis.

I wasn’t sure what I was doing and I almost deleted my twitter account and stopped working entirely. I didn’t and I’m glad I didn’t. But I was close. Really close.

I Am Unfocused, But Now I Can Fix That

I’ve known that I have been unfocused for a long time now. This is why I shut down SignalLeaf – because I was stretched in too many directions. I had no time to do anything I wanted, and was constantly rushing around in a panic trying to keep things afloat. SignalLeaf took a back seat when it should have been a top priority. WatchMeCode languished, with the site design and features I wanted to add left untouched. My blogging and writing have been almost non-existent for a while.

When I shut down SignalLeaf and had those 2 weeks of down time in July, I realized how completely unfocused I was with everything else.

And now, I have the time to fix that. Now, I don’t have three separate businesses that I’m trying to run. I only have my consulting and my screencasting now. And my consulting feeds my screencasting, to a large degree. So really, I have a much more focused business already.

But I still don’t have the kind of focus I want.

Finding Focus

In the next few weeks, I’m going to spend some more time thinking. The thought process is going to be aimed at creating new focus for WatchMeCode and DerickBailey.com. I am going to put a new emphasis on the direction that I am heading with JavaScript and all of the writing that I am doing. That new heading will be centered around architecture and messaging based patterns in JavaScript – both on the client and on the server.

These next few weeks will be used to determine how I want to approach this renewed focus. I’ll be setting goals for myself. I’ll be generating metrics that I want to capture. I’ll be looking at what I need to do in order to accomplish these things. I’ll be creating a singular focus for my work as an entrepreneur in the JavaScript space.

Resurrecting Architecture

Messaging and architecture aren’t new territories for me. I’ve got the RabbitMQ For Developers package which is all about messaging. But I’ve also got years of experience writing about architecture and messaging patterns for all kinds of applications, over at my old DerickBailey.LosTechies.com blog.

Architecture is what I did for many, many years – including JavaScript architecture. I brought messaging patterns to the world of Backbone.js when I built Marionette.js on top of it. I helped bring these same patterns to Windows applications and mobile devices when I built projects on those platforms, as well.

I want to bring these patterns of architecture, of scalable (not just large scale – scalable from small, all the way out) systems, and of messaging both in memory and across distributed systems, in to the world of JavaScript. This little language that could is finally doing it for more people than ever, and it’s about time the JavaScript community embraced the architecture it needs.

Join Me In My New Focus

I’m getting ready to refocus my blogging and screencasting, to center around JavaScript based architecture and messaging patterns, and I want you to come along with me. Clearly, you’re here now on my mailing list. You’ll get the blog posts as I restart this journey. You’ll be on the leading edge of what I am working on. But there’s so much more to be found in WatchMeCode already.

Take advantage of the 100+ screencasts and other videos on WatchMeCode.

This new direction will take a while to gain momentum. But I’m already moving. And I want you to join me in this new, better and more focused direction.

Categories: Blogs

Agile leads to technical debt?

Does Agile lead to technical debt?

The phrase "technical debt" was coined by Ward Cunningham in 1992.  Ward is known for a few significant things: wikis, CRC cards, and influencing Extreme Programming (XP).

XP was by far the dominant "lightweight methodology" when the Agile Manifesto was created.

So it seems rather odd to suggest that Agile leads to technical debt.  All the talk about technical debt came from Agile people.

Here's Ward talking about the history of "technical debt":

Here are a couple of articles by Martin Fowler on technical debt:
Ward and Martin are signatories of the Agile Manifesto, which makes sense because they participated in creating it.

It's really ironic that people today might believe that Agile leads to technical debt.

But I can understand the belief.

XP is no longer what people typically think of when they think "Agile".  There has been a shift to Scrum and Kanban and management issues in general.  I can understand that people exposed to Flaccid Scrum might believe that Agile equates to poor technical practice.

The more controversial, though unoriginal, claim would be that XP practices lead to technical debt.  In other words, claiming that the community that developed and embraced the concept of being mindful of technical debt advocates practices that lead to it.
Categories: Blogs

Agile is for extroverts?

Is Agile for extroverts?
And given most programmers are introverts, is Agile just stupid?

Given that I'm an introvert and most people I've worked with are introverted, I find this question weird every time it's asked.

As long as you are building things for other people, you cannot be effective unless you engage with those people to understand the problem to be solved.  As long as you do not have perfect focus and infinite perspective, you cannot be effective unless you learn to work with other people in a close, collaborative way.

I will acknowledge that engaging with people in this way might initially feel uncomfortable.

But I'll suggest that you should be more interested in being effective than in avoiding discomfort.  If you can't make that shift, then yes, Agile is not for you.
Categories: Blogs

Agile is not sustainable?

Does Agile produce a working environment that is unsustainable?

Extreme Programming (XP) originally had an explicit practice called "40 Hour Week", which eventually became "Sustainable Pace".

Part of this was because of an intent to make software development humane; part of this was because people were interested in what was actually effective and not what just looked effective.  Working excessive, unsustainable hours is not effective.

As Ward Cunningham said, in response to a request to get a team to work more hours: "We would if it would help."

There are other practices that address sustainability:
I will acknowledge that there are some things that may seem like they are designed to increase unsustainability:
  • The use of the word "sprint"
  • Daily stand-ups
  • The continuous flow approach of kanban
  • Continuous Delivery
All the principles and practices designed for increased pace and speed can easily lead to unsustainability if you hold a particular mistaken assumption.

The mistaken assumption is that Agile encourages one-way communication, where a Product Owner tells a development team what to do, rather than an n-way collaboration between everyone, which is what Agile actually encourages because that's just a better way to do software development.
Categories: Blogs

Guest Blog: The Value of Continuous Agile Retrospectives

Ben Linders - Tue, 08/04/2015 - 16:58
In this guest blog post on BenLinders.com David Horowitz, CEO and Co-Founder of Retrium, explores why you should do continuous retrospectives and how you can do them to establish continuous improvement. Continue reading →
Categories: Blogs

Who Should be Your Product Owner?

Johanna Rothman - Tue, 08/04/2015 - 14:01

In agile, we separate the Product Owner function from functional (development) management. The reason is that we want the people who can understand and evaluate the business value to articulate that value and to tell the people who understand the work when to implement what. The technical folks determine how to implement the what.

Separating the when/what from the how is a great separation. It gives the people who are considering the long-term and customer impact of a given feature or set of features a way to rank the work. Technical teams may not realize when to release a given feature/feature set.

In my recent post, Product Manager, Product Owner, or Business Analyst?, I discussed what these different roles might deliver. Now it’s time to consider who should do the product management/product ownership roles.

If you have someone called a product manager, that person defines the product, asks the product development team(s) for features, and talks to customers. Notice the last part, the talking to customers part. This person is often out of the office. The product manager is an outward-facing job, not an internally-focused job.

The product owner works with the team to define and refine features, to replan the backlogs, and to know when it is time to release. The product owner is an inward-facing function.

(Just for completeness, the business analyst is an inward-facing function. The BA might sit with people in the business to ask, “Exactly what did you mean when you asked for this functionality? What does that mean to you?” A product owner might ask that same question.)

What happens when your product manager is your product owner? The product development team doesn’t have enough time with the product owner. Maybe the team doesn’t understand the backlog, or the release criteria, or even something about a specific story.

Sometimes, functional managers become product owners. They have the domain expertise and the ability to create a backlog and to work with the product manager when that person is available. Is this a good idea?

If the manager is not the PO for his/her team, it’s okay. I do wonder how a manager can build relationships with the people on his/her team, manage problems, and remove the impediments the team needs removed, while also acting as a PO. Maybe the manager doesn’t need to manage so much and can be a PO. Maybe the product ownership job isn’t too difficult. I’m skeptical, but it could happen.

There is a real problem when a team’s manager is also the product owner. People are less likely to have a discussion and disagree with their managers, especially if the organization hasn’t moved to team compensation. In Weird Ideas That Work: How to Build a Creative Company, Sutton discusses the issue of how and when people feel comfortable challenging their managers. 

Many people do not feel comfortable challenging their managers. At all.

We want the PO and the team to be able to have that give-and-take about ranking, value, and when it makes sense to do what. The PO makes the decision and, with information from the team, can take all the value into account. The PO might hear, “We can implement this feature first, and then this other feature is much easier.” Or, “If we fix these defects now, we can make these features much easier.” You want those conversations. The PO might say, “No, I want the original order” and the team will do it. The conversations are critical.

If you are a manager considering being a PO for your team, reconsider. Your organization may have too many managers and not enough POs. That’s a problem to fix. Don’t make it difficult for your team to have honest discussions with you. Make it possible for people with the best interests of the product to have real discussions without being worried about their jobs.

(If you are struggling with the PO role, consider my Product Owner Training for Agencies. It starts next week.)

Categories: Blogs

Scrum Without Removing Impediments Isn’t Scrum

Notes from a Tool User - Mark Levison - Tue, 08/04/2015 - 13:00
Mechanical Scrum Versus True Scrum – What’s the Difference?

Recently I was talking to a friend about their company’s implementation of Scrum. They don’t see the point. Before Scrum was implemented, they often had to wait an hour or more to access a test machine. After several years of using Scrum, it’s still a problem. The same company has its test servers in another country and there are often network issues that cause the running tests to fail. These were problems before they started using Scrum, and nothing has been done to address them since.

Scrum only works when we use it to address the problems that exist in our environment, and then work to resolve them.

Consider the basic principle that a Scrum Team is required to produce high quality working software (or hardware) at the end of every Sprint. Shippable quality.

Anything that stops the Team from achieving Shippable Quality Software every Sprint is an impediment, and must be resolved.

Here are some of the common reasons that impediments don’t get resolved:

  • Definition of Done[1] doesn’t exist, isn’t honoured, or isn’t published in a truly visible place (usually the Team room wall).
  • Definition of Done isn’t frequently updated and improved until the Team is at least able to ship at the end of every sprint; preferably able to ship continuously.
  • There are no Component or Feature Teams[2]. Scrum doesn’t require Feature Teams explicitly, but it does require that the Team has a sufficient supply of skills that it can get a small vertical slice (not just one component) of software to truly done at the end of every Sprint. Component Teams, while legitimate in Scrum, increase the coordination effort required to get to done every Sprint.
  • Pressure exists to deliver more every sprint. Many Teams feel a constant pressure to deliver significantly more than they’re able to. The pressure is so intense, that they constantly take shortcuts to try to meet the demand. Scrum was created to give the doers more freedom in deciding how much work they could achieve and how they could achieve it. It only works when the Team takes a stand and makes clear their capacity. Real improvement is only possible when the Team has sufficient slack to pause, reflect, and improve.
  • Organizational impediments are not removed. For example, network issues between one office and another, coordination issues between Teams, or anything else that can’t be resolved at the Team level.
Remember, Scrum is a problem-finding tool, not a problem-solving tool.

Your organization has to solve the problems that Scrum highlights, for it to work effectively. Since Scrum doesn’t solve the problems for your organization, you’re not actually practicing Scrum if you don’t follow through and resolve the issues that it finds. Instead, you’re practicing Mechanical Scrum.

 

[1] Definition of Done – a checklist used by a Scrum Team to test if a Product Backlog Item is truly completed.
[2] Feature Teams – A feature team is “a long-lived, cross-functional, cross-component team that completes many end-to-end customer features—one by one.” Source: Larman and Vodde: http://www.featureteams.org
Categories: Blogs


Spark: pyspark/Hadoop – py4j.protocol.Py4JJavaError: An error occurred while calling o23.load.: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4

Mark Needham - Tue, 08/04/2015 - 08:35

I’ve been playing around with pyspark – Spark’s Python library – and I wanted to execute the following job which takes a file from my local HDFS and then counts how many times each FBI code appears using Spark SQL:

from pyspark import SparkContext
from pyspark.sql import SQLContext
 
sc = SparkContext("local", "Simple App")
sqlContext = SQLContext(sc)
 
file = "hdfs://localhost:9000/user/markneedham/Crimes_-_2001_to_present.csv"
 
sqlContext.load(source="com.databricks.spark.csv", header="true", path = file).registerTempTable("crimes")
rows = sqlContext.sql("select `FBI Code` AS fbiCode, COUNT(*) AS times FROM crimes GROUP BY `FBI Code` ORDER BY times DESC").collect()
 
for row in rows:
    print("{0} -> {1}".format(row.fbiCode, row.times))

I submitted the job and waited:

$ ./spark-1.3.0-bin-hadoop1/bin/spark-submit --driver-memory 5g --packages com.databricks:spark-csv_2.10:1.1.0 fbi_spark.py
...
Traceback (most recent call last):
  File "/Users/markneedham/projects/neo4j-spark-chicago/fbi_spark.py", line 11, in <module>
    sqlContext.load(source="com.databricks.spark.csv", header="true", path = file).registerTempTable("crimes")
  File "/Users/markneedham/projects/neo4j-spark-chicago/spark-1.3.0-bin-hadoop1/python/pyspark/sql/context.py", line 482, in load
    df = self._ssql_ctx.load(source, joptions)
  File "/Users/markneedham/projects/neo4j-spark-chicago/spark-1.3.0-bin-hadoop1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/Users/markneedham/projects/neo4j-spark-chicago/spark-1.3.0-bin-hadoop1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o23.load.
: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
	at org.apache.hadoop.ipc.Client.call(Client.java:1070)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
	at com.sun.proxy.$Proxy7.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
	at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:176)
	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
	at org.apache.spark.rdd.RDD.take(RDD.scala:1156)
	at org.apache.spark.rdd.RDD.first(RDD.scala:1189)
	at com.databricks.spark.csv.CsvRelation.firstLine$lzycompute(CsvRelation.scala:129)
	at com.databricks.spark.csv.CsvRelation.firstLine(CsvRelation.scala:127)
	at com.databricks.spark.csv.CsvRelation.inferSchema(CsvRelation.scala:109)
	at com.databricks.spark.csv.CsvRelation.<init>(CsvRelation.scala:62)
	at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:115)
	at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:40)
	at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:28)
	at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:290)
	at org.apache.spark.sql.SQLContext.load(SQLContext.scala:679)
	at org.apache.spark.sql.SQLContext.load(SQLContext.scala:667)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
	at py4j.Gateway.invoke(Gateway.java:259)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:207)
	at java.lang.Thread.run(Thread.java:745)

It looks like my Hadoop client and server are using different versions, which in fact they are! We can see from the name of the Spark folder that I’m using Hadoop 1.x there, and if we check the local Hadoop version we’ll notice it’s using the 2.x series:

$ hadoop version
Hadoop 2.6.0

In this case the easiest fix is to use a version of Spark that’s compiled against Hadoop 2.6, which as of now means Spark 1.4.1.

Let’s try and run our job again:

$ ./spark-1.4.1-bin-hadoop2.6/bin/spark-submit --driver-memory 5g --packages com.databricks:spark-csv_2.10:1.1.0 fbi_spark.py
 
06 -> 859197
08B -> 653575
14 -> 488212
18 -> 457782
26 -> 431316
05 -> 257310
07 -> 197404
08A -> 188964
03 -> 157706
11 -> 112675
04B -> 103961
04A -> 60344
16 -> 47279
15 -> 40361
24 -> 31809
10 -> 22467
17 -> 17555
02 -> 17008
20 -> 15190
19 -> 10878
22 -> 8847
09 -> 6358
01A -> 4830
13 -> 1561
12 -> 835
01B -> 16

And it’s working!

Categories: Blogs

Agile and Strategic Alignment

Leading Answers - Mike Griffiths - Tue, 08/04/2015 - 05:01
This month’s theme at ProjectManagement.com is “Strategy Alignment/IT Strategy.” This can seem at odds with agile teams who organically grow solutions towards evolving requirements. Where’s the strategy in that, and how do we promote empowered teams while preventing chaos? Most... Mike Griffiths
Categories: Blogs

Book Review: Difficult Conversations

thekua.com@work - Mon, 08/03/2015 - 05:13

Conflict, negotiation and difficult conversations are hard, but there are plenty of good books to help. I often recommend Crucial Confrontations, Getting to Yes and Getting Past No. Someone recommended Difficult Conversations, a book that I recently finished reading.

Difficult Conversations

Where the other books I read tended to take a more mechanistic view of steering the conversation, I really appreciated the slightly different take with this book, which I felt was more humanistic because it acknowledged the emotional side of difficult conversations. The authors suggest that when we have a difficult conversation, we experience three simultaneous conversations:

  1. The “What Happened” Conversation
  2. The Feelings Conversation
  3. The Identity Conversation
The “What Happened” Conversation

We often assume we know what happened, because we know what we know (Our Story). The authors (rightly) point out that our story may be completely different from the other person’s (Their Story). A good practical tip is to focus on building the Third Story as a way of creating a shared awareness and appreciation of other data that may make a difference to the conversation.

The Feelings Conversation

As much as we like to think we are logical, we are highly emotional and biased people. It’s what makes us human. We manifest this by saying things based on how we are feeling. Sometimes we don’t even know this is happening. The book helps us understand and gives us strategies for uncovering the feelings that we may be experiencing during the conversation. They also suggest building empathy with the other person by walking through the Feelings Conversation the other person will be having as well.

The Identity Conversation

I think this was the first time I had thought about how, when we struggle to communicate or agree on something, we may be doing so because we have difficulty accepting something we may not like, or something that threatens our identity. This is what the authors call out as the Identity Conversation and it is a natural part of successfully navigating a difficult conversation.

Conclusion

I found Difficult Conversations a really enjoyable read that added a few new perspectives to my toolkit. I appreciate their practical advice, such as stepping through each of the three conversations from both your and the other person’s perspective and avoiding speaking in different modes. I like the fact that they address the emotional side of difficult conversations and give concrete ways of understanding and coping with them, instead of ignoring them or pushing them aside.

Categories: Blogs