

A Bit of Agile in Waterfall Project Approach

TV Agile - Mon, 12/01/2014 - 19:19
Short stories and lessons learned from a waterfall-oriented organization that wants to become more flexible, while language barriers and time-zone differences add extra challenges for timely delivery – from iterations to multi-year product road-maps, from individual responsibility to multi-project teams. Video producer: http://agileprague.com/
Categories: Blogs

Product Backlog Refinement

Learn more about our Scrum and Agile training sessions on WorldMindware.com

The ultimate purpose of Product Backlog refinement is to ensure an ongoing conversation that increases transparency of the Product Backlog, and therefore of the Product itself – to orient everyone on the team towards breaking out of their waterfall silos and focusing on delivering business value, period.

On mature teams, much of the refinement work happens in ad hoc conversations: team members sit around thinking together about how to build something great, simply because they are motivated by that, and it becomes part of their mode of operation.

The objective of the refinement work of any given Sprint (that often needs to be repeated over and over like a mantra with new, immature teams) is to ensure that the items at the top of the Backlog are transparent enough that the Development Team considers them ready to pull and get “Done” in the next Sprint.  This is where the concept of the Definition of “Ready” (DoR) comes from – the Scrum Team defines the DoR and spends up to 10% of its capacity refining enough items at the top of the Backlog so that it can provide estimates (if required) and have a reasonable degree of confidence that it can deliver the items in the next Sprint.

Refinement is NOT solutioning. I think this is the big trap that many teams fall into, because of a false assumption that technical solutions need to be hashed out before estimates can be made (part of the carried-over lack of trust and communication between the business and IT). I would almost rather throw out estimates in cases where this is not improving. The Planning Game exercise, when facilitated well, lends itself more to increasing transparency than to solutioning.

The fact that teams are telling us that they need to solution before they can estimate is also an indication of weak Agile Engineering practices such as refactoring, test-driven development and continuous integration (XP).  The best refinement sessions are those in which the team is able to focus on the “what” – the business benefit results that the Product Owner really wants – rather than the “how” (solution).  Strong teams emerge in an environment in which they are trusted by the business and management to find the right solution as a team.  They don’t need to have it all figured out before giving an estimate because they are not afraid to give a bad estimate and fail.  Also, if the team is struggling to give estimates, this is often a sign that the Product Backlog Items are too big.  Most likely the team also needs to expand the Definition of “Done” to include testing against acceptance criteria within the Sprint so that they can estimate based on that criteria.

The “how” (solution) should be mapped out by the Development Team at a high level in the second part of Sprint Planning (partly why the time box is bigger than teams often think they need), with more detailed architecture, requirements and design work done as part of the Sprint Backlog.

But this level of maturity is very hard to do and it will take a while to get there, perhaps even years.

It also depends on your interpretation of “detail”, the word used in the Scrum Guide to describe what the team does in Product Backlog refinement. To me, it means understanding in more detail what the Product Owner really wants and needs. What does it mean to you?

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!

Why you want to give up coding

thekua.com@work - Mon, 12/01/2014 - 14:18
A background story

A friend of mine – let’s call them Jo (not their real name) – worked as a Tech Lead for most of their career. A few years ago, Jo moved into a management role that involved very little coding. They were no longer working full-time as a developer, and they stopped playing a Tech Lead role. Jo now leads an organisation with six or seven large development groups.

In the definition of a Tech Lead, I suggested a Tech Lead should write code a minimum of 30% of their time. Jo managed to find some time for writing code, but it was inconsistent – about an hour a week. In a 40-hour week, that is less than 3%. Jo missed writing code. Jo was no longer a developer, and Jo was no longer a Tech Lead.

What do you control? What do you influence?

Every role comes with a set of responsibilities, and the authority to fulfil those responsibilities. This authority gives you a certain amount of control.

For example, a developer has control over the code and tests that they design and write. Others may have influence over the code. Examples of influencing factors include architectural or product and/or platform constraints, team code standards and code reviews. Ultimately the developer has control over their own code.

Every company has a certain organisational design, and developers (and other employees) work within this structure. The organisational design affects how effective software delivery is. In rigidly hierarchical structures where, say, developers and testers sit in completely separate departments, collaboration is difficult. An ineffective organisational design is a sore point for many developers because it makes delivering software much harder.

A developer has zero control over organisational design. They might be able to influence it, but it is ultimately controlled by managers. Some developers may try to change this, but most complain. I have heard this from developers in the past:

Whose brilliant idea was it to set up a development [team] like this?

Another example:

I can’t get anything done because I rely on those people to get something done and they sit on another floor.

Developers can complain, which is an ineffective style of influencing, although there are many more effective ways. In the book Influence: The Psychology of Persuasion, the author, Robert Cialdini, outlines six key influencing strategies: Reciprocity, Commitment (and Consistency), Social Proof, Liking (a variant of the Halo Effect), Authority, and Scarcity.

Developers only influence their working environment; they do not control it. (An exception, of course, is when a company is small enough that the developer also takes on general organisational management responsibilities, e.g. in a startup.)

Trading control for influence


Jo, who moved from being a developer to a Tech Lead, and again from a Tech Lead to a general manager, shared an interesting insight with me.

You know those things I used to complain about as a developer? Well, now I have the ability to change them.

Many programmers see the “non-technical” path as a path with nothing to offer. Yet when programmers step into a leadership role, they inherit both responsibilities and the authority to control more of their work environment. A developer works within these constraints. A Tech Lead has more authority to change those constraints, and a manager has even more authority to control and change them.

How does this impact developers who become Tech Leads?

When developers trade control for influence, their sphere of influence grows. Instead of developing a single feature, they can guide the entire technical solution. The Tech Lead’s influence also bridges the technical and the business sides: a Tech Lead has much more influence over how technology can be used to solve business problems, whereas developers are often engaged too late to have that influence.

I would suggest Tech Leads never give up coding entirely. Instead, trade some of the time you would spend crafting code for a wider sphere of influence. That wider sphere helps not just you, but also your team, write better code in a better work environment.

If you liked this article, you will be interested in “Talking with Tech Leads,” a book that shares real-life experiences from over 35 Tech Leads around the world, now available on Leanpub. The featured image in this post is taken from Flickr under the Creative Commons licence.


What's the best way to reduce the weight of a car?

We need to reduce the weight of the car we're designing.

So what we'll do is shave 10% off every component.  10% off the wheels, 10% off the engine, 10% off the frame...

OR maybe we'll remove all the passenger seats, the air conditioning unit, the entertainment system, investigate a ceramic engine, carbon fibre frame...

We need to reduce the operating cost of our department.

So what we'll do is ask every team to reduce their budget by 10%.

OR...

Don’t Give Partial Credit

Leading Agile - Mike Cottmeyer - Mon, 12/01/2014 - 09:05

What do you do with stories that don’t finish before the end of the sprint? Do we get partial credit?

I’m asked that a lot. Everyone wants to know whether to split the story and what to do with the points. Don’t give partial credit for unfinished stories or make untestable splits.

Don’t Bother Splitting Unfinished, Untestable Stories

Move unfinished, untested stories to the next sprint, without splitting. What benefit would come from splitting?

Sometimes people tell me that in the future they will need to know what work was done in this sprint, so that’s why they split stories. I’ve never seen the need to do that. If that question does arise, your agile tool’s history will show what you need to know.

It’s too easy to make these lousy splits. Every user story must be a proper user story. Don’t get sloppy. Splitting a story into an unfinished and an untestable portion is sloppy. It’s a bad habit that will begin to show up as poor stories in your product backlog. You have to be disciplined about being disciplined.

Don’t Give Partial Credit

Others tell me that they want the velocity to look right for this sprint, to take into account the work they did on the unfinished story. They want partial credit. Bad idea.

Once you’ve begun development, once you’ve done some design and dug into the detail, it’s difficult to correctly estimate a piece of a story relative to the rest of the stories in your product backlog – those you haven’t started working on yet. For a longer explanation of this, see the post Don’t Estimate In Sprint Planning. It’s especially hard if you are trying to estimate some poor split of an unfinished story, a piece that doesn’t meet your definition of done. Just don’t do it. Just move the whole story to the next sprint.

I do, however, recommend adjusting the estimate on the story downwards (never up) if the estimate of the remaining work is smaller than the original estimate. I care a great deal about being predictable, about conservative planning, and about not overstating my velocity. The problem with giving all the points in the next sprint (n+1) is that it makes the recent average velocity (usually over 3 sprints) be too high a month and a half later when this sprint (n) drops out of the average. A month and a half later no one will remember that we carried over all the points for some story into that next sprint. No one will realize that the velocity they are using to evaluate their release plan is abnormally high.
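To make the arithmetic concrete, here is a small sketch (with hypothetical point values of my own, not from the post) of how carrying all the points into sprint n+1 inflates a 3-sprint rolling average once sprint n drops out of the window:

```ruby
# Hypothetical velocities for a team that reliably completes 20 points
# per sprint. In sprint 3, an 8-point story doesn't finish (20 - 8 = 12);
# it completes in sprint 4, where all 8 points are counted (20 + 8 = 28).
velocities = [20, 20, 12, 28, 20, 20]

# Recent average velocity over the trailing three sprints.
def recent_average(velocities)
  velocities.last(3).sum / 3.0
end

# While sprints 3 and 4 are both in the window, the carry-over cancels out:
recent_average(velocities.first(5))  # (12 + 28 + 20) / 3 = 20.0

# Once sprint 3 drops out of the window, the inflated 28 remains:
recent_average(velocities)           # (28 + 20 + 20) / 3 = ~22.7
```

Adjusting the carried story’s estimate down to the remaining work, as recommended above, keeps that trailing average honest.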

In Fact, Don’t Give Credit at All

Others tell me they want to get credit for the work done on the story.

I teach against the notion of “partial credit”. I teach against the notion of getting credit in general. There is no credit. It isn’t about getting credit. That’s the wrong way of thinking about velocity. Dangerous even.

Velocity is a tool to help us with release planning. If the team feels they need to get credit, there is dysfunctional behavior in the organization. Perhaps teams are being challenged to increase their velocity, or are reprimanded if their velocity dips, or are being compared against another team’s velocity. If the team feels they need to get credit, they will game the numbers and velocity will not be useful for its intended purpose.

Velocity is what it is. It’s not about credit.

A Better Plan: Finish Sprints Cleanly

In training, I make a big deal about the problem of unfinished stories. Whatever you do, however you handle them, unfinished stories are difficult to deal with and they mess with your velocity. There is no good solution other than finishing sprints cleanly.

Better than knowing how to handle unfinished stories is not having unfinished stories. Don’t start work you can’t finish in the sprint. Before starting any story, first see if the team agrees that it, and everything else already started, can be finished cleanly.

Stop starting and start finishing

That was coined by David Anderson, right? It means to focus on finishing stuff that is already started before starting new work. Scrum teams can learn a lot from David. Study his books on Agile Management and Kanban.

Each day, before starting work on another user story, the team should consider whether it can finish all other in-progress work and this additional story before the end of the sprint.

In the last half of a sprint, the team should start asking whether there is anything they need to pull out so that they can swarm and finish the other stuff cleanly. If it looks like multiple stories aren’t going to make it, sacrifice one or two stories, stop working on them now, split them appropriately (see below) or throw them out of the sprint, so that you can finish all the other stories cleanly.

Split Splittable Stories

There is a scenario in which I would split an unfinishable story – but you should do it before the last day of the sprint, both halves must meet the INVEST criteria, and the done half must fully meet the definition of done.

Sometimes this happens when the team has a story generally working and tested but is having trouble with some particular error scenario or some advanced usability issue. I’ll split it into a basic and advanced story, or a happy path and an error handling story. But I will always split such a story so that both parts meet the INVEST criteria. They must both be true User Stories. And the part that is done has to meet the definition of done and be accepted by the product owner.

In that case, I might split the points if the team can decide how to allocate the points. (But never such that the total is greater than the original.)

Conclusion

So there you have it. Any time you split a story, for whatever reason you split it, make sure each half is a proper user story, meeting the INVEST criteria. Don’t cause velocity inflation and risky release planning by increasing the total points on underestimated stories. Finish sprints cleanly. And just forget about trying to “get credit” for unfinished work.

The post Don’t Give Partial Credit appeared first on LeadingAgile.


Another Approach to the Diamond Kata

George Dinwiddie’s blog - Mon, 12/01/2014 - 06:02

I saw that Alistair Cockburn had written a post about Seb Rose’s post on the Diamond Kata. I only read the beginning of both of those, because I recognized the problem Seb described with the “Gorilla” approach upon reaching the ‘C’ case:

“The code is now screaming for us to refactor it, but to keep all the tests passing most people try to solve the entire problem at once. That’s hard, because we’ll need to cope with multiple lines, varying indentation, and repeated characters with a varying number of spaces between them.”

I’ve run into such situations before, and it’s always been a clue for me to back up and work in smaller steps. Seb describes the ‘B’ case as “easy enough to get this to pass by hardcoding the result.” Alistair describes the strategy for the ‘B’ case as “shuffle around a bit.” I’m not sure what “shuffling around a bit” means, and I don’t think it would be particularly easy to get both the ‘A’ and ‘B’ cases working with constants without heading down a silly “if (letter == 'A') … elsif (letter == 'B') …” implementation. I was curious how I would approach it, and decided to try. (Ron Jeffries also wrote a post on the topic.) I didn’t read any of these three solutions before implementing my own, just so I could see what I would do.

If you don’t want to read the details, you can skip to the conclusion.

Step 1: Making sure rspec is working

Like Ron, I often write a dummy test first to make sure things are hooked up right. This was especially true since I haven’t programmed in Ruby for awhile, so long that I haven’t worked with Ruby 2 or Rspec 3. It seemed like a good time to get familiar with these.

require 'rspec'

describe "setup" do
  it "can call rspec" do
    expect(2).to eql(2)
  end
end

Of course, I originally had “expect(1).to eql(2)” to make the test fail. Once I got the syntax right and installation correct, I had a failing test and then changed it to make it pass.

Step 2: Trivial representation of degenerate diamond

Now I start in earnest, taking care of the trivial case.

describe Diamond do
  describe '.create(A)' do
    subject { Diamond.create('A') }

    it "has a trivial representation" do
      expect(subject.representation).to eql "A\n"
    end
  end
end

is accomplished with

class Diamond
  def self.create(max_letter)
    Diamond.new(max_letter)
  end

  def initialize(max_letter)
  end

  def representation
    "A\n"
  end
end

There are several design choices in this. I chose a factory method because it seemed more readable in the test. It delegated to the constructor so I’d have an instance for expressing expectations. And, of course, as Seb and Alistair and Ron all expect, I fake the return with a constant. Easy-peasy!

Step 3: Output all the necessary lines

The next step is obviously to implement the ‘B’ case. Hmmm… Seb suggested faking it with a constant also, but that immediately leads me down the path of an “if (letter == 'A') … elsif (letter == 'B') …” implementation. I don’t mind faking a return, but I don’t want my program to look like What the Tortoise Said to Achilles. I find I have to insert logic before I reach the “solve the whole problem at ‘C’” point predicted by Seb and Alistair. If I write a test for the representation, I need to solve the whole problem for the ‘B’ case, and that’s too big a step for me. It’s got calculations and formatting all at once, and that’s too much for me to hold in my brain at once.

I decided to start with outputting the right number of lines.

  describe '.create(B)' do
    subject { Diamond.create('B') }

    it "has three lines in the representation" do
      expect(subject.representation.lines.count).to eql 3
    end
  end

I need a line for each letter from ‘A’ up to the maximum letter, and then back down to ‘A’ without a duplicate line for the maximum letter. I ended up with this:

  def initialize(max_letter)
    @letters= ('A' .. max_letter).to_a
  end

  def representation
    output= ""
    @letters.each { |letter| 
      output << letter+"\n"
    }
    @letters.reverse[1..-1].each { |letter| 
      output << letter+"\n"
    }
    output
  end

Saving the array of letters rather than the max_letter seemed an easy way to count, and I would need the value of the current letter, anyway. In retrospect, that was a bit of speculation. The current test doesn’t check the letters in the output.

I initially wanted to walk up the array of letters and back down. In the C language that would have been simple, but it wasn’t convenient in Ruby. Reversing the array, minus the max_letter, let me conveniently use the ‘each’ idiom. I avoided numerical calculations and I think this will work all the way up to the ‘Z’ case. I check it for the ‘C’ case.

  describe '.create(C)' do
    subject { Diamond.create('C') }

    it "has five lines in the representation" do
      expect(subject.representation.lines.count).to eql 5
    end
  end

Everything works fine.

Taking stock of where we are

Let’s take a look now and see what we have. I haven’t described the non-working changes I’ve made in each of these steps. I’d be embarrassed at all the silly things I typed that didn’t compile. (I did learn that you can’t reverse the subscripts of an array to get a slice in the other direction.) I also haven’t described how the code looked before making simplifying refactorings.
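As a side note, that parenthetical lesson is easy to demonstrate (my own quick check, not from the original post):

```ruby
letters = ['A', 'B', 'C']

# Reversing the subscripts does not give a slice in the other direction;
# a backwards range yields an empty slice:
letters[2..0]           # => []

# Reversing first, then slicing off the duplicated maximum, works:
letters.reverse[1..-1]  # => ["B", "A"]
```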

As I look at this code, I’m more aware that I’m outputting the correct letter in each row, even though I don’t have a test for this. No matter, that test will come. I’m more unhappy with the duplication between the ‘describe’ and ‘subject’ lines in the tests. There’s surely a way to avoid this duplication, but I didn’t find it in a 10-minute search of the web. I decided to let it be for now. It’s only a kata, and by publishing something “wrong,” people will surely tell me how to do it right. This is the first time I’ve used the explicit “subject” idiom and it’s still unfamiliar to me.

Step 4: handle indentation of early lines

Formatting the entire line still seemed like a big step to me. Perhaps it would have been less daunting had I done a lot of up-front thinking about the problem as Alistair did. Instead, I just toddled along at my own pace. If I figured out how many spaces went at the beginning of each line, that should help me figure out how many spaces go in the middle and end. Ignoring those later problems, I added a new expectation for the ‘B’ case.

  describe '.create(B)' do
    subject { Diamond.create('B') }

    it "has three lines in the representation" do
      expect(subject.representation.lines.count).to eql 3
    end

    it "indents the first line" do
      expect(subject.representation.lines[0]).to start_with "_A"
    end
  end

I almost started calculating the number of spaces required, but I noticed that, just as each row is dedicated to a letter, so is each column. Like iterating through the rows, I could iterate through the columns.

  def representation
    output= ""
    @letters.each { |letter| 
      @letters.reverse.each { |position|
        character= (position == letter) ? letter : '_'
        output << character
      }
      output << "\n"
    }
    @letters.reverse[1..-1].each { |letter| 
      output << letter+"\n"
    }
    output
  end

Again, this code looks like it will work all the way to the ‘Z’ case, so I add a couple checks for the ‘C’ case.

    it "indents the first line" do
      expect(subject.representation.lines[0]).to start_with "__A"
    end

    it "indents the second line" do
      expect(subject.representation.lines[1]).to start_with "_B_"
    end

The code seems a little messy. I’ve got a doubly nested loop with a calculation in the middle. I decided it would read more clearly if I extracted the calculation.

  def either_letter_or_blank (position, letter)
    (position == letter) ? letter : '_'
  end

  def representation
    output= ""
    @letters.each { |letter| 
      @letters.reverse.each { |position|
        output << either_letter_or_blank(position, letter)
      }
      output << "\n"
    }
    @letters.reverse[1..-1].each { |letter| 
      output << letter+"\n"
    }
    output
  end

Note that only the first half of the diamond is being formatted. I’ll get to the latter half, but it’ll probably be simpler if I format the whole line, first. Otherwise I’ll have two places to fix that formatting. I proceed to …

Step 5: handle filling out early lines

Let’s add spaces to the ends of the lines. For the ‘B’ case, that means…

    it "fills out the first line" do
      expect(subject.representation.lines[0]).to end_with "A_\n"
    end

Walking the reversed array worked so well for going down the later rows; let’s do the same for the later columns.

  def representation
    output= ""
    @letters.each { |letter| 
      @letters.reverse.each { |position|
        output << either_letter_or_blank(position, letter)
      }
      @letters[1..-1].each { |position| 
        output << either_letter_or_blank(position, letter)
      }
      output << "\n"
    }
    @letters.reverse[1..-1].each { |letter| 
      output << letter+"\n"
    }
    output
  end

That’s pretty ugly, isn’t it? Having two inner loops within the first of two outer loops is nuts, especially since it would have to be done again in the second outer loop. Let’s extract another method.

  def line_for_letter (letter)
    line= ""
    @letters.reverse.each { |position|
      line << either_letter_or_blank(position, letter)
    }
    @letters[1..-1].each { |position| 
      line << either_letter_or_blank(position, letter)
    }
    line << "\n"
  end

  def representation
    output= ""
    @letters.each { |letter| 
      output << line_for_letter(letter)
    }
    @letters.reverse[1..-1].each { |letter| 
      output << letter+"\n"
    }
    output
  end

We’re almost done. The descending rows are now trivial. I take a big step and specify the entire ‘B’ case output.

    it "outputs the correct diamond" do
      expected= "_A_\n"+
                "B_B\n"+
                "_A_\n"
      expect(subject.representation).to eql expected
    end

And make it pass with a single line change.

  def representation
    output= ""
    @letters.each { |letter| 
      output << line_for_letter(letter)
    }
    @letters.reverse[1..-1].each { |letter| 
      output << line_for_letter(letter)
    }
    output
  end

I think I’m done. Let’s check the ‘C’ case.

    it "outputs the correct diamond" do
      expected= "__A__\n"+
                "_B_B_\n"+
                "C___C\n"+
                "_B_B_\n"+
                "__A__\n"
      expect(subject.representation).to eql expected
    end

Yep, it works as expected. Were this a production app, I’d do some work to protect against bad input. I’d also hook it up to the command line, as Seb initially described. Oh, and remove that original “setup” spec. The full code is below, and in GitHub.

The complete spec

require 'rspec'
require_relative './diamond'

describe "setup" do
  it "can call rspec" do
    expect(2).to eql(2)
  end
end

describe Diamond do
  describe '.create(A)' do
    subject { Diamond.create('A') }

    it "has a trivial representation" do
      expect(subject.representation).to eql "A\n"
    end
  end
  
  describe '.create(B)' do
    subject { Diamond.create('B') }

    it "has three lines in the representation" do
      expect(subject.representation.lines.count).to eql 3
    end

    it "indents the first line" do
      expect(subject.representation.lines[0]).to start_with "_A"
    end

    it "fills out the first line" do
      expect(subject.representation.lines[0]).to end_with "A_\n"
    end

    it "outputs the correct diamond" do
      expected= "_A_\n"+
                "B_B\n"+
                "_A_\n"
      expect(subject.representation).to eql expected
    end
  end
  
  describe '.create(C)' do
    subject { Diamond.create('C') }

    it "has five lines in the representation" do
      expect(subject.representation.lines.count).to eql 5
    end

    it "indents the first line" do
      expect(subject.representation.lines[0]).to start_with "__A"
    end

    it "indents the second line" do
      expect(subject.representation.lines[1]).to start_with "_B_"
    end

    it "outputs the correct diamond" do
      expected= "__A__\n"+
                "_B_B_\n"+
                "C___C\n"+
                "_B_B_\n"+
                "__A__\n"
      expect(subject.representation).to eql expected
    end
  end
end

The complete code

class Diamond
  def self.create(max_letter)
    Diamond.new(max_letter)
  end

  def initialize(max_letter)
    @letters= ('A' .. max_letter).to_a
  end

  def either_letter_or_blank (position, letter)
    (position == letter) ? letter : '_'
  end

  def line_for_letter (letter)
    line= ""
    @letters.reverse.each { |position|
      line << either_letter_or_blank(position, letter)
    }
    @letters[1..-1].each { |position| 
      line << either_letter_or_blank(position, letter)
    }
    line << "\n"
  end

  def representation
    output= ""
    @letters.each { |letter| 
      output << line_for_letter(letter)
    }
    @letters.reverse[1..-1].each { |letter| 
      output << line_for_letter(letter)
    }
    output
  end
end

TL;DR

There are some interesting things I notice by comparing my solution with Seb’s and Alistair’s and Ron’s approaches.

Alistair starts with a primitive at the inside of the problem, making rows and then padding them to size. I don’t know whether thinking deeply about the problem leads to starting with a primitive, or if starting with a primitive requires you to think deeply before you start coding. I do think there’s a connection between the two.

I don’t think that connection is unbreakable. I’ve started with a low-level primitive when I was trying to see what I could do with some example data, or with a library that was new to me. Once I got a clue what I could do, then I found it easier to switch gears and work outside in, story-test first, and work my way back down to that primitive. I found it interesting that sometimes I would change the primitive subtly when I did that.

Seb started with the concept of what letters should appear, and in what order, and then made modifications to get them into the right format. I think that’s what led him to modify and reuse his tests. They were specifying things that were only temporarily true.

I find it interesting how Seb approached the output like a sentence or paragraph. That never occurred to me. I viewed it as a two-dimensional shape, which would never lead me to the series of tests that he used. Seb’s approach is unique among the four in that it doesn’t pad the right side of the rows.

Ron started with building rows and calculating how many spaces they should have. This led him down a path of lots of calculations. Along the way, he made the simplifying discovery that the four quadrants are symmetrical, saving some calculations.

From this, I conclude that there are certainly many ways to approach a given problem. The way you conceive of the problem and approach the solution has a great effect on the result. If you think you need to think things out before you start coding, then you probably will, and that may affect the solution you achieve. Sometimes it’s hard to let go of our ideas and be open to the opportunities presented by the code.

Certainly Ron is right that it’s helpful to think about things all the way through the implementation.

[Dec 2, 2014 — fixed some mangled formatting of the code.]


MustBe: Authorization Plumbing For NodeJS / Express Apps

Derick Bailey - new ThoughtStream - Mon, 12/01/2014 - 00:34

I’ve looked at a number of authorization and authentication frameworks for NodeJS in the last year. While there are a number of good authentication libraries around, most of them bill themselves as authorization as well – which is very wrong. There just aren’t many good authorization frameworks – I found maybe one well-rounded framework. Unfortunately for me, I didn’t agree with that framework’s opinions, so I’ve spent the last year building my own, for my apps and my needs. Having been using it in production for quite some time now, I found myself needing to extend it for additional scenarios, and at this point I’m happy to say that it is ready for the general public to use.


Announcing MustBe

My authorization system, MustBe, is up on Github and available via NPM for you to use in your applications. But there’s something you need to know, before you dig in. 

MustBe does not bill itself as a complete framework for all of your authorization needs. Rather, it is the plumbing that you need to integrate authorization into your NodeJS / Express applications. In other words, it will provide the core features that you need for configuring and authorizing access to your Express application’s routes. It will not provide data access, user models or other application-specific logic for you. It provides a way for you to integrate your app’s logic and code with NodeJS / Express, to create a complete authorization solution.

I built MustBe knowing that I will be (and am) using it across multiple projects, with varying data access and user authentication strategies. Therefore, MustBe requires some configuration to work with your data access / authentication strategy and with the various activities that you wish to secure in your system.

Activities Based Authorization

One of the more important points in my decision to build MustBe was the need for activity-based authorization. The idea behind this is to not have your code check for specific roles in order to authorize access to a feature. That creates problems in the long run, as it hard-codes your application to a specific set of roles.

Instead, activity based authorization allows you to specify that access to a feature or function within your code requires authorization for a given activity. For example, updating a user may require authorization for the “user.update” activity. Creating a new Widget for your store to sell might require authorization for the “widget.create” activity. 

Thinking in terms of activities allows your system to remain flexible, as you can decide which roles have access to which activities in your system, in any manner that suits you. You can use data access to load the roles and access rules. You can hard code the relationship between a user and an activity. You can even grant access to everything, in an unlimited manner, by providing an override and specifying “admin” users as always being allowed to do everything.

The manner in which you configure whether or not someone is allowed to do a given activity is up to you – and this is where MustBe steps out of the way, so that you can provide your own logic and code. MustBe will take the code that you write to check for authorization, and apply it to your Express routes and middleware, handling both the success and failure of the authorization request.
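To make the idea concrete, here is a minimal, language-agnostic sketch of activity-based authorization in Python. The `ROLE_ACTIVITIES` table and `can` function are illustrative names only, not MustBe's actual API:

```python
# Hypothetical role -> activity table; MustBe's real configuration API differs,
# but the decoupling it buys is the same.
ROLE_ACTIVITIES = {
    "editor": {"user.update", "widget.create"},
    "admin": {"*"},  # an override role that is allowed to do everything
}

def can(user_roles, activity):
    """True if any of the user's roles grants the requested activity."""
    for role in user_roles:
        allowed = ROLE_ACTIVITIES.get(role, set())
        if "*" in allowed or activity in allowed:
            return True
    return False

print(can(["editor"], "user.update"))    # True
print(can(["viewer"], "widget.create"))  # False
print(can(["admin"], "anything.else"))   # True
```

The call sites only ever name an activity; which roles grant that activity can change in one place without touching them.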

Authorizing Users And Other Things

One of the more recent additions to MustBe is the ability to authorize things that are not users. Most applications depend on a logged-in user being the thing that is authorized, but not all of them, and not all the time. There are scenarios where you need something other than a user to be authorized.

In my case, I have Accounts inside of my SignalLead podcast hosting service. These accounts must be authorized to serve podcasts. Being authorized entails a number of things, including the account being “active”, having a podcast to serve, etc. 

When I set out to implement this originally, I hacked in some code and features to make it work but it was ugly. I’ve recently rebuilt MustBe through a series of refactorings and restructuring, allowing me to add the notion of custom Identities to MustBe. Using custom Identities, an application can now authorize anything that needs to be authorized – and can authorize as many things as are needed, from within a single application. 

The default identity for authorization is still a user – but that is only the default. If you need to authorize something other than a user, you only need to provide a few lines of code to define the thing that needs authorization, and then configure what it is allowed to do through the normal activities configuration.
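A rough sketch of that idea, again with hypothetical names rather than MustBe's actual API, is an activity check keyed by identity type, so that an account can be the subject of authorization just as easily as a user:

```python
# Hypothetical registry of (identity type, activity) -> predicate; the names
# here are illustrative and not MustBe's actual API.
checks = {}

def allow(identity_type, activity, predicate):
    checks[(identity_type, activity)] = predicate

def is_authorized(identity_type, subject, activity):
    predicate = checks.get((identity_type, activity))
    return bool(predicate and predicate(subject))

# An "account" identity is authorized to serve podcasts only while active.
allow("account", "podcast.serve", lambda acct: acct.get("active", False))

print(is_authorized("account", {"active": True}, "podcast.serve"))   # True
print(is_authorized("account", {"active": False}, "podcast.serve"))  # False
print(is_authorized("user", {"active": True}, "podcast.serve"))      # False
```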

Securing Routes, And Security As Middleware

Another aspect of MustBe that makes it powerful is the ability to use its “routeHelpers” either as a security mechanism for a single route handler, or as middleware for an entire set of routes or the application as a whole.

While MustBe can be applied to a single route quite easily, it is not fun to copy & paste the same authorization check into dozens of routes. Using MustBe as middleware for your Express application, you can often reduce the configuration of authorization checks down to a single location. I’ve personally used it to secure entire sites, single routes, and sub-route tree structures quite easily.

Check Out MustBe, And Authorize Your Users Easily

I’ve built this system from within my own sphere of needs, meaning it may still be missing some features or configuration needed to be truly useful to others. But the beauty of open source is the ability to see a need, modify the system and send in a pull request.

Check out MustBe on Github, read the documentation (linked from the ReadMe on the project), install it via NPM and let me know what you think!

Categories: Blogs

Spark: Write to CSV file with header using saveAsFile

Mark Needham - Sun, 11/30/2014 - 10:21

In my last blog post I showed how to write to a single CSV file using Spark and Hadoop, and the next thing I wanted to do was add a header row to the resulting file.

Hadoop’s FileUtil#copyMerge function does take a String parameter but it adds this text to the end of each partition file which isn’t quite what we want.

However, if we copy that function into our own FileUtil class we can restructure it to do what we want:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;
import java.io.IOException;
 
public class MyFileUtil {
    public static boolean copyMergeWithHeader(FileSystem srcFS, Path srcDir, FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, String header) throws IOException {
        dstFile = checkDest(srcDir.getName(), dstFS, dstFile, false);
        if(!srcFS.getFileStatus(srcDir).isDir()) {
            return false;
        } else {
            FSDataOutputStream out = dstFS.create(dstFile);
            if(header != null) {
                out.write((header + "\n").getBytes("UTF-8"));
            }
 
            try {
                FileStatus[] contents = srcFS.listStatus(srcDir);
 
                for(int i = 0; i < contents.length; ++i) {
                    if(!contents[i].isDir()) {
                        FSDataInputStream in = srcFS.open(contents[i].getPath());
 
                        try {
                            IOUtils.copyBytes(in, out, conf, false);
 
                        } finally {
                            in.close();
                        }
                    }
                }
            } finally {
                out.close();
            }
 
            return deleteSource?srcFS.delete(srcDir, true):true;
        }
    }
 
    private static Path checkDest(String srcName, FileSystem dstFS, Path dst, boolean overwrite) throws IOException {
        if(dstFS.exists(dst)) {
            FileStatus sdst = dstFS.getFileStatus(dst);
            if(sdst.isDir()) {
                if(null == srcName) {
                    throw new IOException("Target " + dst + " is a directory");
                }
 
                return checkDest((String)null, dstFS, new Path(dst, srcName), overwrite);
            }
 
            if(!overwrite) {
                throw new IOException("Target " + dst + " already exists");
            }
        }
        return dst;
    }
}

We can then update our merge function to call this instead:

def merge(srcPath: String, dstPath: String, header:String): Unit =  {
  val hadoopConfig = new Configuration()
  val hdfs = FileSystem.get(hadoopConfig)
  MyFileUtil.copyMergeWithHeader(hdfs, new Path(srcPath), hdfs, new Path(dstPath), false, hadoopConfig, header)
}

We call merge from our code like this:

merge(file, destinationFile, "type,count")
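For illustration, here is a minimal local-filesystem analogue of `copyMergeWithHeader` in Python. The file names are made up for the demo, but the shape is the same: write the header once, then append each part file in order:

```python
import os
import tempfile

def merge_with_header(src_dir, dst_file, header):
    """Concatenate the part-* files in src_dir into dst_file, header first."""
    with open(dst_file, "w") as out:
        out.write(header + "\n")
        for name in sorted(os.listdir(src_dir)):
            if not name.startswith("part-"):
                continue  # skip _SUCCESS, .crc files, and the output itself
            with open(os.path.join(src_dir, name)) as part:
                out.write(part.read())

# demo with two fake part files
src = tempfile.mkdtemp()
with open(os.path.join(src, "part-00000"), "w") as f:
    f.write("THEFT,859197\n")
with open(os.path.join(src, "part-00001"), "w") as f:
    f.write("BATTERY,757530\n")

dst = os.path.join(src, "merged.csv")
merge_with_header(src, dst, "type,count")
print(open(dst).read())  # type,count / THEFT,859197 / BATTERY,757530
```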

I wasn’t sure how to import my Java based class into the Spark shell so I compiled the code into a JAR and submitted it as a job instead:

$ sbt package
[info] Loading global plugins from /Users/markneedham/.sbt/0.13/plugins
[info] Loading project definition from /Users/markneedham/projects/spark-play/playground/project
[info] Set current project to playground (in build file:/Users/markneedham/projects/spark-play/playground/)
[info] Compiling 3 Scala sources to /Users/markneedham/projects/spark-play/playground/target/scala-2.10/classes...
[info] Packaging /Users/markneedham/projects/spark-play/playground/target/scala-2.10/playground_2.10-1.0.jar ...
[info] Done packaging.
[success] Total time: 8 s, completed 30-Nov-2014 08:12:26
 
$ time ./bin/spark-submit --class "WriteToCsvWithHeader" --master local[4] /path/to/playground/target/scala-2.10/playground_2.10-1.0.jar
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.propertie
...
14/11/30 08:16:15 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
14/11/30 08:16:15 INFO SparkContext: Job finished: saveAsTextFile at WriteToCsvWithHeader.scala:49, took 0.589036 s
 
real	0m13.061s
user	0m38.977s
sys	0m3.393s

And if we look at our destination file:

$ cat /tmp/singlePrimaryTypes.csv
type,count
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
RITUALISM,12
NON-CRIMINAL,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

Happy days!

The code is available as a gist if you want to see all the details.

Categories: Blogs

Spark: Write to CSV file

Mark Needham - Sun, 11/30/2014 - 09:40

A couple of weeks ago I wrote how I’d been using Spark to explore a City of Chicago Crime data set and having worked out how many of each crime had been committed I wanted to write that to a CSV file.

Spark provides a saveAsTextFile function which allows us to save RDD’s so I refactored my code into the following format to allow me to use that:

import au.com.bytecode.opencsv.CSVParser
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
 
def dropHeader(data: RDD[String]): RDD[String] = {
  data.mapPartitionsWithIndex((idx, lines) => {
    if (idx == 0) {
      lines.drop(1)
    }
    lines
  })
}
 
// https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2
val crimeFile = "/Users/markneedham/Downloads/Crimes_-_2001_to_present.csv"
 
val crimeData = sc.textFile(crimeFile).cache()
val withoutHeader: RDD[String] = dropHeader(crimeData)
 
val file = "/tmp/primaryTypes.csv"
FileUtil.fullyDelete(new File(file))
 
val partitions: RDD[(String, Int)] = withoutHeader.mapPartitions(lines => {
  val parser = new CSVParser(',')
  lines.map(line => {
    val columns = parser.parseLine(line)
    (columns(5), 1)
  })
})
 
val counts = partitions.
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }
 
counts.saveAsTextFile(file)
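
The dropHeader function above works because the CSV header only lives in partition 0, so only that partition needs a line dropped. A tiny Python sketch of the same per-partition logic, with partitions modelled as lists of lines, looks like this:

```python
def drop_header(partitions):
    """Drop the first line of partition 0 only, mirroring mapPartitionsWithIndex."""
    result = []
    for idx, lines in enumerate(partitions):
        it = iter(lines)
        if idx == 0:
            next(it, None)  # the header row only exists in the first partition
        result.append(list(it))
    return result

data = [["type,count", "THEFT,859197"], ["BATTERY,757530"]]
print(drop_header(data))  # [['THEFT,859197'], ['BATTERY,757530']]
```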

If we run that code from the Spark shell we end up with a folder called /tmp/primaryTypes.csv containing multiple part files:

$ ls -lah /tmp/primaryTypes.csv/
total 496
drwxr-xr-x  66 markneedham  wheel   2.2K 30 Nov 07:17 .
drwxrwxrwt  80 root         wheel   2.7K 30 Nov 07:16 ..
-rw-r--r--   1 markneedham  wheel     8B 30 Nov 07:16 ._SUCCESS.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00000.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00001.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00002.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00003.crc
...
-rwxrwxrwx   1 markneedham  wheel     0B 30 Nov 07:16 _SUCCESS
-rwxrwxrwx   1 markneedham  wheel    28B 30 Nov 07:16 part-00000
-rwxrwxrwx   1 markneedham  wheel    17B 30 Nov 07:16 part-00001
-rwxrwxrwx   1 markneedham  wheel    23B 30 Nov 07:16 part-00002
-rwxrwxrwx   1 markneedham  wheel    16B 30 Nov 07:16 part-00003
...

If we look at some of those part files we can see that it’s written the crime types and counts as expected:

$ cat /tmp/primaryTypes.csv/part-00000
THEFT,859197
BATTERY,757530
 
$ cat /tmp/primaryTypes.csv/part-00003
BURGLARY,257310

This is fine if we’re going to pass those CSV files into another Hadoop-based job, but I actually want a single CSV file, so it’s not quite what I want.

One way to achieve this is to force everything to be calculated on one partition which will mean we only get one part file generated:

val counts = partitions.repartition(1).
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }
 
 
counts.saveAsTextFile(file)

part-00000 now looks like this:

$ cat !$
cat /tmp/primaryTypes.csv/part-00000
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
NON-CRIMINAL,12
RITUALISM,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

This works but it’s quite a bit slower than when we were doing the aggregation across partitions so it’s not ideal.

Instead, what we can do is make use of one of Hadoop’s merge functions which squashes part files together into a single file.

First we import Hadoop into our SBT file:

libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.5.2"

Now let’s bring our merge function into the Spark shell:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs._
 
def merge(srcPath: String, dstPath: String): Unit =  {
  val hadoopConfig = new Configuration()
  val hdfs = FileSystem.get(hadoopConfig)
  FileUtil.copyMerge(hdfs, new Path(srcPath), hdfs, new Path(dstPath), false, hadoopConfig, null)
}

And now let’s make use of it:

val file = "/tmp/primaryTypes.csv"
FileUtil.fullyDelete(new File(file))
 
val destinationFile= "/tmp/singlePrimaryTypes.csv"
FileUtil.fullyDelete(new File(destinationFile))
 
val counts = partitions.
reduceByKey {case (x,y) => x + y}.
sortBy {case (key, value) => -value}.
map { case (key, value) => Array(key, value).mkString(",") }
 
counts.saveAsTextFile(file)
 
merge(file, destinationFile)

And now we’ve got the best of both worlds:

$ cat /tmp/singlePrimaryTypes.csv
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
RITUALISM,12
NON-CRIMINAL,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

The full code is available as a gist if you want to play around with it.

Categories: Blogs

AutoMapper 3.3 released

Jimmy Bogard - Sat, 11/29/2014 - 18:40

View the release notes:

AutoMapper 3.3 Release Notes

And download it from NuGet. Some highlights in the release include:

  • Open generic support
  • Explicit LINQ expansion
  • Custom constructors for LINQ projection
  • Custom type converter support for LINQ projection
  • Parameterized LINQ queries
  • Configurable member visibility
  • Word/character replacement in member matching

In this release, I added documentation for every new feature (linked in the release notes), and pertinent improvements.

This will likely be the last 3.x release, as for the next release I’ll be focusing on refactoring for custom convention support, plus supporting the new .NET core runtime (and therefore support on Mac/Linux in addition to the 6 existing runtimes I support).

Happy mapping!


Categories: Blogs

Focus. Slides from my keynote at BrewingAgile, Gothenburg

Henrik Kniberg's blog - Sat, 11/29/2014 - 11:31

Here are the slides from my keynote “Focus” at BrewingAgile Gothenburg. It was about how to achieve more by working less.

Feel free to reuse :)

Categories: Blogs

Announcement: PMI Chapter Talk – The Agile Enterprise

Learn more about our Scrum and Agile training sessions on WorldMindware.com

On Tuesday Dec. 2, Mishkin Berteig will be speaking about The Agile Enterprise and the five different approaches to implementing Agile at the enterprise level.  The talk will also include some details about two frameworks used at the enterprise level: SAFe (Scaled Agile Framework) and RAP (Real Agility Program).

This talk is hosted by the South Western Ontario chapter of the PMI.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
Categories: Blogs

Who exactly is … Damla Koc?

Scrum 4 You - Fri, 11/28/2014 - 08:24

How do you describe your day-to-day work at Boris Gloger Consulting to your parents and friends?
I'm a consultant for agile product development. Using Scrum, I help teams deliver their projects in a structured way, on time and with high quality. My family and friends find it remarkable that I rarely work in an office and sometimes have no fixed workplace at all. As a consultant I always have my notebook with me and can work from anywhere, whether on the train, at the airport or at a side table next to my team.

Which job did you absolutely want as a child?
I wanted to help people when they weren't doing well, but without being a doctor. My teacher at the time suggested I try psychology. In the end I achieved the vision I had at the age of 8: I became a work and organizational psychologist.

How do you see your career so far? Mostly chance, or mostly careful planning?
Both. I planned my education and pursued it accordingly. I wanted to become a work and organizational psychologist, and I chose my subjects and completed my internships entirely in that field. I also wrote my thesis in work and organizational psychology.

Chance: I got into politics by chance. I had always wanted to change the world and became politically active in my district. Through a recommendation I got the chance to be on the district council list for Vienna's 15th district and to be elected by its residents. In 2005 I was elected as Vienna's youngest district councillor.

Chance and planning: Boris Gloger Consulting.
I wanted to do consulting in the agile space and learned from a colleague that Boris Gloger Consulting had an open position. I had deliberately chosen agile consulting as my field, and by chance I learned that Boris Gloger Consulting could give me the opportunity to do it.

Damla_Koc

Is there a particular ritual in your work week that you couldn't do without?
My morning to-dos. I like to work in a structured way, and to keep an overview of my to-dos I write down, or mentally run through, my appointments and to-dos first thing in the morning. I can hardly do without those 5-10 minutes.

Of all the projects you have delivered for Boris Gloger Consulting, which is your all-time favourite?
On one project I had a team that was initially very sceptical about Scrum and said so repeatedly during the intro training. We started developing the product with Scrum. The first sprint was a complete success, and the team achieved far more in its first sprint than it had planned. Shortly before the review, the whole team was nervous with anticipation about presenting its results to management and other guests. It is always wonderful to see teams proudly present their work in reviews.

Pressing customer requests, a flood of email, marathon meetings, or a different location every day as a consultant: what is your secret recipe for keeping a cool head in a hectic working day?
I prioritize my tasks and to-dos, so I always have an overview and, above all, a clear head. I try to stay focused and to finish one thing before moving on to the next.

What makes working at Boris Gloger Consulting special for you?
For me it is two things:
Our projects, which are always different, give me the chance to work in different domains and with different teams. I enjoy the variety in my projects and our successes.
Our team, in which we stay in touch despite the distance and the different locations. We are a colourful team with different focuses and strengths, there is never a shortage of ideas or support, and it is fun to work together.

Is there a quirk of yours that regularly drives your colleagues up the wall?
I sometimes write blog posts and texts without a single comma. Luckily I have colleagues who review my texts and add what feels like a thousand commas. I am now trying to use more commas, but shedding old habits is sometimes harder than expected.

What do you do in your free time to unwind?
When I am sitting on a plane and know I will shortly be landing in Vienna, I already start to unwind from my project stress. For me, coming home is unwinding. I love spending my free time with family and friends. I am Viennese and simply love Vienna; for me there is nothing better than strolling through the city and enjoying a coffee.

For me, Scrum is…
an agile method that enables teams to develop products in a self-organized, happy and successful way.

Categories: Blogs

Gratitude, for Gratitude

Evolving Excellence - Thu, 11/27/2014 - 20:02

This Thanksgiving I am thankful for learning the power of being thankful.  More than ever I am convinced it is the most powerful personal and professional leadership habit.

For years I have had an increasingly refined and meaningful daily routine.  Each morning I begin the day with the following:

  1. Twenty minutes of meditation in classic Zen style using the counting of breaths to slow the mind and become truly aware of the present.  This is remarkably difficult to do - it took months of practice to get to even five minutes, a reflection of how voluminous the flow of ideas and thoughts really is.  Meditation is often confused with prayer, but it's very different although also very complementary.  It is an intentional slowing of the flow of thoughts in order to understand that flow of thoughts, to become mindful and aware.
  2. Five minutes of giving thanks and prayer, always trying to find one new person or thing to be thankful for.
  3. Five minutes of silent planning, identifying the three key tasks I want to complete today, in line with my personal and professional hoshin.  With practice, five minutes is more than enough time. I then write those down.  Once again, writing by hand into a notebook creates ownership and understanding - unlike typing into an electronic planner.

Only afterwards do I read The Wall Street Journal, have coffee, and check email.  At some point in the day there's a crossfit class, beach run, or other exercise.  Then in the evenings I have a complementary routine:

  1. Review the three key tasks I wrote down to see if they were completed.  It's amazing how much can get done if just three meaningful tasks are truly accomplished each and every day.
  2. A few minutes of hansei reflection on why or why not those tasks were accomplished and, most importantly, what I will change in order to do a better job at accomplishing them tomorrow.
  3. A few minutes of thanks and gratitude.  Lately this is done out under the stars in my new ofuro soaking tub, with a glass of rhone blend.  There is nothing quite as humbling as looking up at millions of stars, especially with a minor buzz.

The periods of reflection on gratitude at the beginning and end of each day create calm bookends to what can be chaos.  As problem solvers we are naturally predisposed to focus on the negative, taking for granted the positive - to the extent that we often become oblivious and unaware of just how much positive there is in our lives.  Creating an intentional focus on gratitude realigns that perspective back to reality. Then expressing that gratitude in daily life, by recognizing the waste of complaining, by complimenting and helping others, or just by smiling, reinforces the power of being thankful.

Intentionally discovering gratitude, every day, has changed my perspective on life more than any other personal or professional leadership habit.  I've discovered I have a lot to be thankful for.

I am thankful for parents and family that continue to instill in me the ability to think independently and trust my instincts, act courageously and take appropriate risks, have a desire to see the world, and explore the strong spiritual foundation that they have surrounded me with.

I am thankful that this desire to explore has let me visit over fifty countries, going and seeing, to better understand.  This helps create reality where most just have perceptions, unfortunately generally incorrect, created by sound bites and the Facebook culture.

I am thankful that the strong spiritual foundation I was raised on has grown even stronger as I explored its nooks and crannies, morphed into forms I wouldn't have expected, and has become very real. I feel sad for those who have not felt the hand of God, very visibly and directly in my case, as that unmistakable reality creates incredible comfort and peace.

I am thankful for my wife, who accepts me for the sometimes strange creature I am, trusts me to make good decisions for our family, and is my enthusiastic partner in exploring the world.

I am thankful for the lessons learned from difficulty, in particular the struggles over years with a family member's medical condition that has helped me become much more understanding, compassionate, loving, and kind.

I am thankful for special friends that have been there for me during those times of difficulty, helping to guide and support me in many ways.  They ask for nothing in return, although I will spend the rest of my life trying to find ways to return the favor - to them and to others.

I am thankful for the opportunity to live where I do, in the peacefulness of a small town on the coast, being able to look at the sun setting over the ocean each evening.  The beauty of nature reflects God.

I am thankful for the ability to think abstractly, to wonder about what I don't know, and to embrace possibility.  As just one example I am fascinated by quantum entanglement theory and the potential ramifications on communication, the connections between life in the universe, and the soul itself.  Is this the link between science and God?

I am thankful for the wisdom of colleagues I have met over the years, in the lean world and beyond, who have taught me so much which has enabled my success.  Those colleagues, including readers of this blog, continually challenge me and help me grow.

I am thankful for my Gemba Academy business partners who align with my desire to teach, give back, and create a great company for our team members, instead of simply focusing on growth and profit.  Interestingly, more growth and profit seems to come by teaching, giving back, and creating a great company.  Funny how that happens...

I am thankful for our Gemba Academy team members, who are the foundation for our success, and are truly a pleasure to work with each day.  Every day I am energized by their creativity, talent, and drive.

But more than anything, I'm thankful, for being thankful.

Categories: Blogs

NeuroAgile Quick Links #7

Notes from a Tool User - Mark Levison - Thu, 11/27/2014 - 19:41

Cyberloafing at Work Makes You More Productive (PsyBlog) – web surfing (in moderation) can boost your performance at work

How guessing helps you learn, even if you guess wrong (Christian Jarrett)

Productivity for the Depressed (JBrains) – guidelines to deal with burnout and depression

A review of Susan Greenfield’s “Mind Change” (Vaughan Bell)

Secrets of the Creative Brain (Nancy Andreasen) – where does genius come from, and does IQ or mental illness have anything to do with it

The Neurochemistry of Positive Conversations (Judith E. Glaser and Richard D. Glaser) – why negative comments and conversations stick with us so much longer than positive ones

To boost brainpower, ignore “smart drugs” and focus on experiences (SharpBrains)

How to make stress your friend – TED talk by psychologist Kelly McGonigal: sketchnote by Clare Willcocks

Top 10 recent scientific studies on the value of mindfulness in education (SharpBrains)

Categories: Blogs

Docker/Neo4j: Port forwarding on Mac OS X not working

Mark Needham - Thu, 11/27/2014 - 14:28

Prompted by Ognjen Bubalo’s excellent blog post I thought it was about time I tried running Neo4j on a docker container on my Mac Book Pro to make it easier to play around with different data sets.

I got the container up and running by following Ognjen’s instructions and had the following ports forwarded to my host machine:

$ docker ps
CONTAINER ID        IMAGE                 COMMAND                CREATED             STATUS              PORTS                                              NAMES
c62f8601e557        tpires/neo4j:latest   "/bin/bash -c /launc   About an hour ago   Up About an hour    0.0.0.0:49153->1337/tcp, 0.0.0.0:49154->7474/tcp   neo4j

This should allow me to access Neo4j on port 49154 but when I tried to access that host:port pair I got a connection refused message:

$ curl -v http://localhost:49154
* Adding handle: conn: 0x7ff369803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7ff369803a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 49154 (#0)
*   Trying ::1...
*   Trying 127.0.0.1...
*   Trying fe80::1...
* Failed connect to localhost:49154; Connection refused
* Closing connection 0
curl: (7) Failed connect to localhost:49154; Connection refused

My first thought was that maybe Neo4j hadn’t started up correctly inside the container, so I checked the logs:

$ docker logs --tail=10 c62f8601e557
10:59:12.994 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@2edfbe28{/webadmin,jar:file:/usr/share/neo4j/system/lib/neo4j-server-2.1.5-static-web.jar!/webadmin-html,AVAILABLE}
10:59:13.449 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@192efb4e{/db/manage,null,AVAILABLE}
10:59:13.699 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@7e94c035{/db/data,null,AVAILABLE}
10:59:13.714 [main] INFO  o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet
10:59:13.715 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@3e84ae71{/browser,jar:file:/usr/share/neo4j/system/lib/neo4j-browser-2.1.5.jar!/browser,AVAILABLE}
10:59:13.807 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@4b6690b1{/,null,AVAILABLE}
10:59:13.819 [main] INFO  o.e.jetty.server.ServerConnector - Started ServerConnector@495350f0{HTTP/1.1}{c62f8601e557:7474}
10:59:13.900 [main] INFO  o.e.jetty.server.ServerConnector - Started ServerConnector@23ad0c5a{SSL-HTTP/1.1}{c62f8601e557:7473}
2014-11-27 10:59:13.901+0000 INFO  [API] Server started on: http://c62f8601e557:7474/
2014-11-27 10:59:13.902+0000 INFO  [API] Remote interface ready and available at [http://c62f8601e557:7474/]

Nope! It’s up and running perfectly fine, which suggested the problem was with port forwarding.

I eventually found my way to Chris Jones’ ‘How to use Docker on OS X: The Missing Guide’ which explained the problem:

The Problem: Docker forwards ports from the container to the host, which is boot2docker, not OS X.

The Solution: Use the VM’s IP address.

So to access Neo4j on my machine I need to use the VM’s IP address rather than localhost. We can get the VM’s IP address like so:

$ boot2docker ip
 
The VM's Host only interface IP address is: 192.168.59.103

Let’s strip out that surrounding text though:

$ boot2docker ip 2> /dev/null
192.168.59.103
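If you’d rather not rely on the banner always going to stderr, you can pull the address out of the full output with grep instead. This is just a sketch; `extract_vm_ip` is a hypothetical helper, not part of boot2docker:

```shell
# Hypothetical helper: extract the dotted-quad IP from the full
# `boot2docker ip` output, regardless of which stream the banner
# text is written to.
extract_vm_ip() {
  # -o prints only the matching part; tail keeps the last match
  echo "$1" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | tail -n 1
}

# Example, using the banner text shown above:
extract_vm_ip "The VM's Host only interface IP address is: 192.168.59.103"
```

In practice you can interpolate the cleaned-up output straight into other commands, e.g. `curl http://$(boot2docker ip 2> /dev/null):49154`.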

Now if we cURL using that IP instead:

$ curl -v http://192.168.59.103:49154
* About to connect() to 192.168.59.103 port 49154 (#0)
*   Trying 192.168.59.103...
* Adding handle: conn: 0x7fd794003a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fd794003a00) send_pipe: 1, recv_pipe: 0
* Connected to 192.168.59.103 (192.168.59.103) port 49154 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 192.168.59.103:49154
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Access-Control-Allow-Origin: *
< Content-Length: 112
* Server Jetty(9.0.5.v20130815) is not blacklisted
< Server: Jetty(9.0.5.v20130815)
<
{
  "management" : "http://192.168.59.103:49154/db/manage/",
  "data" : "http://192.168.59.103:49154/db/data/"
}
* Connection #0 to host 192.168.59.103 left intact

Happy days!

Chris has solutions to lots of other common problems people come across when using Docker with Mac OS X, so it’s worth having a flick through his post.

Categories: Blogs

Creating Agile Learning Games

TV Agile - Thu, 11/27/2014 - 12:27
One of the things Agile Learning Labs is known for is our use of simulations and learning games. Learning games are effective because they engage across all four of the key learning styles: kinetic/tactile, spatial/visual, auditory, and logical. Because they operate at so many different levels, learning games are highly effective at conveying complex concepts like […]
Categories: Blogs

One of us is crazy – I’m just not sure whether it’s you or me

Scrum 4 You - Thu, 11/27/2014 - 08:59

Dear readers, I hope you are sitting comfortably, your seat back is upright, and your seatbelt is fastened. In this post we fly through the turbulence of the emotions. Or at least we touch on it briefly; a blog post unfortunately isn’t long enough for much more.

In my work with people over the past years, and in my private relationships, the statement in the title has crossed my mind countless times. I am relieved that this extended form has replaced the blunt phrase “He’s crazy.” It tells me that the years of reflection have not passed me by without leaving a trace. Because the question of who is crazy – that is, who is completely misjudging the situation at hand – is not so simple, and can often only be answered with “it depends.”

The crux lies in the differing needs, intentions, and views of individual people. These arise from patterns imprinted on us in our past. Psychologists have a well-kept secret about this, which I will reveal to you today and which may spare you a lengthy psychology degree: every one of us, even as an adult, is just a small wounded child. There are old patterns that still operate today and that can have a devastating effect on our (working) relationships. A child whose parents came home too late in the evenings may have developed intense fears of abandonment, and as an adult may be deeply hurt when a colleague they value fails to show up for a particular work appointment. The emotions carried over from childhood will then strain the working relationship between these people. So much for the bad news.


Now for the good news: every one of us has the ability to change these patterns. The only prerequisite is that we admit these old patterns to ourselves and accept them. With acceptance comes the solution.

But how do you get at these patterns? It is actually quite simple, and therefore incredibly hard and slow: reflect, and be honest with yourself while doing it. There are countless methods. Meditate, go on a retreat, or go into supervision (if the word psychotherapy frightens you too much). And if that all sounds too strenuous, let me motivate you with the pot of gold waiting for you at the end of the rainbow if you take this path: you gain freedom. The real and true freedom to decide what is good for you. You free yourself from having to do things because of an inner, unsatisfied need for recognition, wealth, power, or relationships. You gain a quality of life that could never be yours any other way, and you get the chance to look at your life with a clear eye.

To close the circle: the next time you are about to say “He’s crazy,” you may be able to pause and consider which of your own needs is not being met at that moment, and which need is being triggered in the other person. With that understanding of the situation, you can also find solutions for your relationships that would otherwise never be within your reach.

With that, I hope you have survived the flight well, and I release you through the cabin door back into the open. Which path you take from here is up to you, but perhaps in the future you will question WHY you want to take exactly that path. With that, you gain a little more freedom for yourself.

Categories: Blogs

the struggle to slow down and stop kicking butt

Derick Bailey - new ThoughtStream - Wed, 11/26/2014 - 12:00

It’s hard not to get sucked in to the constant race of business and the world, never ending and never slowing down. So, take a moment and look at a drawing my 7-year-old son did a while back:

darth-vader

He called this Darth Vader – you know, the big evil bad guy from Star Wars?

I have no idea what the spiky things are, or why Darth Vader is smiling… but I love this drawing. It hangs above the entryway to my bedroom with other drawings that my kids have done. And it’s the little things like this drawing, the way my son demands (and screams if he doesn’t get it) that I tuck him in to bed at night, or the way my daughter wanders in to my room at 1:30am crying “i want to lay down with you!!!” because she had a bad dream… these are the things that are really important, and are a constant reminder of the good life that I live – that is, when I stop to remember them.

A Kick In The Butt, To Stop Kicking Butt

A few months ago, Jarrod Drysdale wrote a piece on “Living the life” – asking the question, “If you were living the good life, would you even notice?”

It was this email / post that got me really thinking about my life and where I am today. It helps to have someone else kick you in the butt like this, sometimes, and I’m glad I received this email from Jarrod. I ended up emailing him back and forth for a while, about this subject. It was good to talk with him and have someone help me to see the good things that I have going, and the privileges that I enjoy.

It’s easy to complain, to think the world owes you something, and to become a bitter and cynical person (which generally describes my outlook on life, anyways…) but when you slow down a bit, and stop trying to kick something’s butt 24 hours a day because you think that’s what the world demands, it becomes easier to see how good your life is.

Struggles and Privilege

There’s a strong chance that you have a very good life compared to most of the world. I can say this with confidence because you are reading this. You have access to the internet in some manner, and likely have some life related to technology and/or software development.

Yes, we all have struggles. Some of us more than others. Many of you more than me, no doubt. Yes, our problems and our struggles are real, but our lives are simple and privileged by comparison to most. I may have problems keeping my bank account positive, at times, but I have a bank account and a (somewhat) successful business or three. I may have a special needs son, but he is happy and loves to spend time with me. I may not have as many customers as I want for WatchMeCode, but I am incredibly privileged to be able to sell educational material to begin with.

Sometimes it’s hard to recognize that we are living a good life – a privileged life – in the middle of what we consider to be difficult struggles. But the standards by which we live mean that we have a better life than most. I know I do… and sometimes, I even remember it.

- Derick

Categories: Blogs