Blogs

What is BDD?

lizkeogh.com - Elizabeth Keogh - Fri, 03/27/2015 - 12:24

At #CukeUp today, there’s going to be a panel on defining BDD, again.

BDD is hard to define, for good reason.

First, because to do so would be to say “This is BDD” and “This is not BDD”. When you’ve got a mini-methodology that’s derived from a dozen other methods and philosophies, how do you draw the boundary? When you’re looking at which practices work well with BDD (Three Amigos, Continuous Integration and Deployment, Self-Organising Teams, for instance), how do you stop the practices which fit most contexts from being adopted as part of the whole?

I’m doing BDD with teams which aren’t software, by talking through examples of how their work might affect the world. Does that mean it’s not really BDD any more, because it can’t be automated? I’m talking to people over chat windows and sending them scenarios so they can be checked, because we’re never in the same room. Is that BDD? I’m writing scenarios for my own code, on my own, talking to a rubber duck. Is that it? I’m still using scenarios in conversation to explore and define requirements, with all the patterns that I’ve learnt from the BDD world. I’m still trying to write software that matters. It still feels like BDD to me. I can’t even say that BDD’s about “writing software that matters” any more, because I’ve been using scenarios to explore life for a while now.

I expect in a few years the body of knowledge we call “BDD” will also include adoption patterns, non-software contexts, and a whole bunch of other things that we’re looking at but which haven’t really been explored in depth. BDD is also the community; it’s the people who are learning about this and reporting their learning and asking questions, and the common questions and puzzles are also part of BDD, and they keep changing as our understanding grows and the knowledge becomes easier to access and other methods like Scrum and Kanban are adopted and enable BDD to thrive in different places.

Rather than thinking of BDD as some set of well-defined practices, I think of it as an anchor term. If you look up anything around BDD, you’re likely to find conversation, collaboration, scenarios and examples at its core, together with suggestions for how to automate them. If you look further, you’ll find Three Amigos and Outside-In and the Given / When / Then syntax and Cucumber and Selenium and JBehave and Capybara and SpecFlow and a host of other tools. Further still we have Cynefin and Domain Driven Design and NLP, which come with their own bodies of knowledge and are anchor terms for those, and part of their teaching overlaps part of what I teach, as part of BDD, and that’s OK.

That’s why, when I’m asked to define BDD, I say something like, “Using examples in conversation to illustrate behaviour.” It’s where all this started, for me. That’s the anchor. It’s where everything else comes from, but it doesn’t define the boundaries. There are no boundaries. The knowledge, and the understanding, and especially the community that we call “BDD” will keep on growing.

One day it will be big enough that there will be new names for bits of it, and maybe those new names will be considered part of BDD, and maybe they won’t. And when that happens, that should be OK, too.

NB: I reckon the only reason that other methods are defined more precisely is so they could be taught consistently at scale, especially where certification is involved. Give me excellence, diversity and evolution over consistency any day. I’m pretty sure I can sell them more easily… and so can everyone else.


Categories: Blogs

Neo4j: Generating real time recommendations with Cypher

Mark Needham - Fri, 03/27/2015 - 08:59

One of the most common uses of Neo4j is for building real time recommendation engines, and a common theme is that they make use of lots of different bits of data to come up with an interesting recommendation.

For example, in this video Amanda shows how dating websites build real time recommendation engines by starting with social connections and then introducing passions, location and a few other things.

GraphAware has a neat framework that helps you build your own recommendation engine using Java, and I was curious what a Cypher version would look like.

This is the sample graph:

CREATE
    (m:Person:Male {name:'Michal', age:30}),
    (d:Person:Female {name:'Daniela', age:20}),
    (v:Person:Male {name:'Vince', age:40}),
    (a:Person:Male {name:'Adam', age:30}),
    (l:Person:Female {name:'Luanne', age:25}),
    (c:Person:Male {name:'Christophe', age:60}),
 
    (lon:City {name:'London'}),
    (mum:City {name:'Mumbai'}),
 
    (m)-[:FRIEND_OF]->(d),
    (m)-[:FRIEND_OF]->(l),
    (m)-[:FRIEND_OF]->(a),
    (m)-[:FRIEND_OF]->(v),
    (d)-[:FRIEND_OF]->(v),
    (c)-[:FRIEND_OF]->(v),
    (d)-[:LIVES_IN]->(lon),
    (v)-[:LIVES_IN]->(lon),
    (m)-[:LIVES_IN]->(lon),
    (l)-[:LIVES_IN]->(mum);

We want to recommend some potential friends to ‘Adam’, so the first layer of our query is to find his friends of friends, as there are bound to be some potential friends amongst them:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
RETURN me, potentialFriend, COUNT(*) AS friendsInCommon
 
==> +--------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | friendsInCommon |
==> +--------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 1               |
==> +--------------------------------------------------------------------------------------+
==> 3 rows

This query gives us back a list of potential friends and how many friends we have in common.

Now that we’ve got some potential friends, let’s start building a ranking for each of them. One indicator which could weigh in favour of a potential friend is if they live in the same location as us, so let’s add that to our query:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
RETURN  me,
        potentialFriend,
        SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation
 
==> +-----------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation |
==> +-----------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            |
==> +-----------------------------------------------------------------------------------+
==> 3 rows

Next we’ll check whether Adam’s potential friends have the same gender as him by comparing the labels each node has. We’ve got ‘Male’ and ‘Female’ labels which indicate gender.

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
RETURN  me,
        potentialFriend,
        SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
        LABELS(me) = LABELS(potentialFriend) AS gender
 
==> +--------------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation | gender |
==> +--------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            | true   |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            | false  |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            | false  |
==> +--------------------------------------------------------------------------------------------+
==> 3 rows

Next up let’s calculate the age difference between Adam and his potential friends:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
       abs(me.age - potentialFriend.age) AS ageDifference,
       LABELS(me) = LABELS(potentialFriend) AS gender,
       friendsInCommon
 
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation | ageDifference | gender | friendsInCommon |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            | 10.0          | true   | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            | 10.0          | false  | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            | 5.0           | false  | 1               |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> 3 rows

Now let’s do some filtering to get rid of people that Adam is already friends with – there wouldn’t be much point in recommending those people!

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
WITH me,
     potentialFriend,
     SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
     abs(me.age - potentialFriend.age) AS ageDifference,
     LABELS(me) = LABELS(potentialFriend) AS gender,
     friendsInCommon
 
WHERE NOT (me)-[:FRIEND_OF]-(potentialFriend)
 
RETURN me,
       potentialFriend,
       SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
       abs(me.age - potentialFriend.age) AS ageDifference,
       LABELS(me) = LABELS(potentialFriend) AS gender,
       friendsInCommon
 
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | me                             | potentialFriend                   | sameLocation | ageDifference | gender | friendsInCommon |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1007]{name:"Adam",age:30} | Node[1006]{name:"Vince",age:40}   | 0            | 10.0          | true   | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1005]{name:"Daniela",age:20} | 0            | 10.0          | false  | 1               |
==> | Node[1007]{name:"Adam",age:30} | Node[1008]{name:"Luanne",age:25}  | 0            | 5.0           | false  | 1               |
==> +------------------------------------------------------------------------------------------------------------------------------+
==> 3 rows

In this case we haven’t actually filtered anyone out, but for some of the other people we would see a reduction in the number of potential friends.

Our final step is to come up with a score for each of the features that we’ve identified as being important for making a friend suggestion.

We’ll assign a score of 10 if the people live in the same location or have the same gender as Adam, and 0 if not. For the ageDifference and friendsInCommon we’ll apply a log curve so that those values don’t have a disproportionate effect on our final score. We’ll use the formula defined in the ParetoScoreTransformer to do this:

    public <OUT> float transform(OUT item, float score) {
        if (score < minimumThreshold) {
            return 0;
        }
 
        double alpha = Math.log((double) 5) / eightyPercentLevel;
        double exp = Math.exp(-alpha * score);
        return new Double(maxScore * (1 - exp)).floatValue();
    }
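
As a sanity check on the numbers to expect from the Cypher version, here is a small Python sketch of the same curve, plugging in the maxScore and eightyPercentLevel values the final query below uses:

import math

def pareto_transform(score, max_score, eighty_percent_level):
    # max_score * (1 - e^(-alpha * score)), with alpha chosen so that
    # a score of eighty_percent_level yields 80% of max_score
    alpha = math.log(5) / eighty_percent_level
    return max_score * (1 - math.exp(-alpha * score))

print(pareto_transform(1, 100, 10))   # 1 friend in common   -> ~14.866
print(pareto_transform(10, 10, 20))   # age difference of 10 -> ~5.528 (negated in the query)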

And now for our completed recommendation query:

MATCH (me:Person {name: "Adam"})
MATCH (me)-[:FRIEND_OF]-()-[:FRIEND_OF]-(potentialFriend)
 
WITH me, potentialFriend, COUNT(*) AS friendsInCommon
 
WITH me,
     potentialFriend,
     SIZE((potentialFriend)-[:LIVES_IN]->()<-[:LIVES_IN]-(me)) AS sameLocation,
     abs(me.age - potentialFriend.age) AS ageDifference,
     LABELS(me) = LABELS(potentialFriend) AS gender,
     friendsInCommon
 
WHERE NOT (me)-[:FRIEND_OF]-(potentialFriend)
 
WITH potentialFriend,
       // 100 -> maxScore, 10 -> eightyPercentLevel, friendsInCommon -> score (from the formula above)
       100 * (1 - exp((-1.0 * (log(5.0) / 10)) * friendsInCommon)) AS friendsInCommon,
       sameLocation * 10 AS sameLocation,
       -1 * (10 * (1 - exp((-1.0 * (log(5.0) / 20)) * ageDifference))) AS ageDifference,
       CASE WHEN gender THEN 10 ELSE 0 END as sameGender
 
RETURN potentialFriend,
      {friendsInCommon: friendsInCommon,
       sameLocation: sameLocation,
       ageDifference:ageDifference,
       sameGender: sameGender} AS parts,
     friendsInCommon + sameLocation + ageDifference + sameGender AS score
ORDER BY score DESC
 
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | potentialFriend                   | parts                                                                                                           | score             |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
==> | Node[1006]{name:"Vince",age:40}   | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -5.52786404500042, sameGender -> 10} | 19.33814370292112 |
==> | Node[1008]{name:"Luanne",age:25}  | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -3.312596950235779, sameGender -> 0} | 11.55341079768576 |
==> | Node[1005]{name:"Daniela",age:20} | {friendsInCommon -> 14.86600774792154, sameLocation -> 0, ageDifference -> -5.52786404500042, sameGender -> 0}  | 9.33814370292112  |
==> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

The final query isn’t too bad – the only really complex bit is the log curve calculation. This is where user defined functions will come into their own in the future.

The nice thing about this approach is that we don’t have to step outside of Cypher, so if you’re not comfortable with Java you can still do real time recommendations! On the other hand, the different parts of the recommendation engine all get mixed up, so it’s not as easy to see the whole pipeline as it is with the GraphAware framework.

The next step is to apply this to the Twitter graph and come up with follower recommendations on there.

Categories: Blogs

Getting to Done

Scrum Log Jeff Sutherland - Thu, 03/26/2015 - 15:54

One of the key principles in the Agile Manifesto is to have working software at the end of every sprint. Yet only 20% of teams that call themselves ‘agile’ actually do this. That is a lot of bad agile. It doesn’t have to be that way. Getting to Done isn’t impossible; it just requires discipline and focus.

Scrum co-creator Jeff Sutherland shares proven and often quite simple ways to move teams from mediocre to great. This online course will teach you how to quickly identify the reasons why teams aren’t getting to Done, and give you tools to eliminate them one by one.

Scrum Inc.’s online courses are eligible for Scrum Alliance SEUs and PMI PDUs. See FAQ for details.


Getting to Done Slides

Teams that Finish Early Accelerate Faster: A Pattern Language for High Performing Scrum Teams

  J. Sutherland, N. Harrison, and  J. Riddle,  IEEE HICSS 47th Hawaii International Conference on System Sciences, Big Island, Hawaii, 2014.

Scrum and CMMI - Going from Good to Great: Are You Ready Ready to Be Done Done

C. Jakobsen and J. Sutherland, in Agile 2009, Chicago, 2009.

The post Getting to Done appeared first on Scrum Inc.

Categories: Blogs

Operationalizing Strategy with a Systems Perspective

Leading Agile - Mike Cottmeyer - Thu, 03/26/2015 - 14:58

While there are many books and much research on organizational development, this systems view, combined with some validated learning over time, is a powerful way to look at organizational challenges as a coach/consultant.

[diagram: OD model showing how the six areas defined below interrelate]

Let’s take a closer look to define these areas then apply some validated learning from my own experience.

Business Outcomes – the outcomes desired from the business strategy selected

Org Structure – the structure of power and authority to facilitate decision making

Incentive Systems – rewards for individual and group performance

Work Systems – how people get work done in the organization

Collaboration Systems – systems to overcome the friction to collaboration introduced by the org structure

People Systems – hiring, firing, development, HR systems – both tactical and strategic

Validated Learning (observations and experiences over time)
  • Business outcomes are the starting point for thinking about the other dimensions; and interestingly, in my experience even some top leaders can struggle to articulate these, so it may require some elicitation and dialogue. I like to use the pithy term “operationalize strategy” when discussing this topic.
  • Incentive systems usually mirror org structure fairly closely.
  • The org structure will help determine both work systems and collaboration systems; however, collaboration systems have a stronger relationship because they must overcome the friction introduced by the structure itself.
  • Incentive systems and people systems strongly impact everything else except strategy.
  • People tend to focus first on org structure and work systems because they are the most visible, tangible, and even “fun” to work with.
  • Each organization design decision made will impact the other dimensions so as the design is created, the entire system must be reevaluated.
  • Organizations are typically good at people systems when it comes to tactical training and development, but more powerful levers are hiring, firing, and strategic training needs.
  • The most common constraint on change involves incentive systems.

What observations and experiences do you have using a systems perspective to view organizational challenges? Has the use of a systems perspective helped overcome these challenges? Leave your comment below so that we can get the conversation started.

The post Operationalizing Strategy with a Systems Perspective appeared first on LeadingAgile.

Categories: Blogs

Why Managers Ask for Estimates and What They Need to Know

Johanna Rothman - Thu, 03/26/2015 - 13:26

In many of my transitioning to agile clients, the managers want to know when the project will be done. Or, they want to know how much the project will cost. (I have a new book about this, Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule.)

Managers ask for estimates because they want to know something about their ability to recognize revenue this year. How many projects can they release? What is the projected effect on revenue, on customer acquisition and retention, and on service revenue (training, support, all that kind of service)? We pay managers big bucks so they can project out for “a while” and plan for the business.

You need to know this in your life, too. If you are an employee, you know approximately how much money you will make in a year. You might make more if you get a bonus. You might make less if you get laid off. But, you have an idea, which allows you to budget for mortgages, vacations, and kids’ braces.

Remember, in waterfall, there was no benefit until the end of the project. You couldn’t realize any benefit from a project until it was complete: not revenue, not capitalization, not any effect on what customers saw. Nothing.

When you use agile, you have options if you can release early. Remember the potential for release frequency?

If you can do continuous deployment or even release something more often, you can realize the benefits of the project before the end.

If you are agile, you don’t need to estimate a lot to tell them when they can first receive value from your work. You can capitalize software early. Your customers can see the benefits early. You might be able to acquire more customers early.

Agile changes the benefits equation for projects.

Agile is about the ability to change. We see this at the team level clearly. When the team finishes a feature, the team is ready for the next feature. It doesn’t matter if you work in flow or timeboxes, you can change the features either for the next feature (flow) or at the next timebox. You can change what the team does.

Agile is most successful when teams finish features every day (or more often). The faster you finish a feature, the faster the team gets feedback on it, and the more flexibility the product owner has to update or change the backlog for the team (either for flow or for the next timebox). The teams do have to complete their work on a feature in a reasonable amount of time. If your cycle time gets too high, no one sees the flow of features. If you don’t get to done at the end of the iteration, you don’t get the benefit of agile. Teams need to learn how to get to done quickly on small features, so they can demo and get feedback on their progress.

What does this fast delivery/fast feedback cycle do for senior managers?

It allows senior managers to change their questions. Instead of “When will this be done?” or “How much will it cost?” senior managers can ask, “When will I see the first bit of value? Can we turn that value into revenue? When can we capitalize the work?”

Those questions change the way teams and senior management work together.

When teams do agile/lean, and they have a constant flow of features, managers don’t need “assurances” or “commitments to estimates” from the teams. Instead, the team estimates a much smaller chunk of work: the time to first delivery of value.

You might not know precisely when you can deliver that first value. But as soon as the team starts working together, if they understand agile/lean, they can create a reasonable estimate. They can update that estimate if necessary.

What else can teams do?

  • Work to a target. If the teams and the product owners know that management has a desired release date, they can work to it. Teams can track their feature flow through their systems, understanding their cycle time. They can use timeboxes for focus. They can measure how close to done they are with a product backlog burnup chart.
  • Demo as you proceed. Always demo to the product owners. Depending on the pressure for revenue/whatever, ask the senior managers to participate in the demo. That way, they can see the product’s progress as you add more features.
  • Keep the backlog item size small. It doesn’t matter how much is in the backlog if the size of every item is small. The smaller the backlog items, the easier it is for teams to estimate. It’s also easier for teams to maintain a flow of features into the ever-evolving system. Who knows? You might be done earlier than you expect.

With agile, you don’t have to set the strategy for a year, fund the projects, and expect that the projects will complete within that year. A year is a long time in almost every market. Managers might want the ability to change their strategy, and still get to a first “delivery of value” date.

Our metrics need to change. Cost to release and time to release are inadequate on their own, because we can change the backlog at any time.

Instead, consider these metrics:

  • Time to release value: How long will it take us to release something for revenue? (The smaller the number, the better.)
  • Frequency of release: How often can we release? (The higher the frequency, the better.)
  • Run rate (What the team costs per unit time)
  • When you capitalize software. I will admit too much ignorance here to provide you guidance.

I have other measurement suggestions for programs in Organizing An Agile Program, Part 5: Measurements That Might Mean Something to a Program.

It’s not about #noestimates. It’s about which estimates your managers need. Managers have a fiduciary responsibility to the organization. You have the responsibility to release often, at least internally. The more often you release, the fewer time/cost estimates your managers need, and the more your managers can take advantage of capitalizing software and what the software can do for the organization and the customers.

Your managers need estimates. And, they need to change the estimates they request. It’s all about your organization’s transition to agile.

Categories: Blogs

Python: matplotlib hangs and shows nothing (Mac OS X)

Mark Needham - Thu, 03/26/2015 - 02:02

I’ve been playing around with some of the matplotlib demos recently and discovered that simply copying one of the examples didn’t actually work for me.

I was following the bar chart example and had the following code:

import numpy as np
import matplotlib.pyplot as plt
 
N = 5
ind = np.arange(N)
fig, ax = plt.subplots()
menMeans = (20, 35, 30, 35, 27)
menStd =   (2, 3, 4, 1, 2)
width = 0.35       # the width of the bars
rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)
 
plt.show()

When I execute this script from the command line it just hangs and I don’t see anything at all.

Via a combination of different blog posts (which all suggested different things!) I ended up with the following variation of imports which seems to do the job:

import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
 
N = 5
ind = np.arange(N)
fig, ax = plt.subplots()
menMeans = (20, 35, 30, 35, 27)
menStd =   (2, 3, 4, 1, 2)
width = 0.35       # the width of the bars
rects1 = ax.bar(ind, menMeans, width, color='r', yerr=menStd)
 
plt.show()

If I run this script, a Python window pops up containing the following chart, which is what I expected to happen in the first place!

[screenshot: the rendered bar chart]

The thing to notice is that we’ve had to change the backend in order to use matplotlib from the shell:

With the TkAgg backend, which uses the Tkinter user interface toolkit, you can use matplotlib from an arbitrary non-gui python shell.

Current state: Wishing for ggplot!

Categories: Blogs

JSHint: Confusing Use of ! (not)

Derick Bailey - new ThoughtStream - Wed, 03/25/2015 - 21:15

I ran into this error a moment ago, produced by JSHint:

[screenshot: the JSHint warning]

The error says: Confusing use of ‘!’

That was certainly a new one to me… but after a moment of thought, it made sense. 

The Potential Confusion

Here’s the original code that I wrote, which produced this problem:
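
(A minimal reconstruction of that snippet from the description below; the body of the if is illustrative:)

if (!id in ticket) {
  // ... give the ticket an id ...
}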

In this code, the potential confusion is not from the JavaScript runtime. Rather, it is potentially confusing from the developer’s perspective.

Does this code say “(not id) in ticket”, or is it saying “not (id in ticket)”?

If you know the order of precedence in JavaScript, this code says … well… I don’t know the order of precedence between the ! and “in” operators. I’m assuming it would evaluate to “not (id in ticket)” but I honestly don’t know. (It turns out ! binds more tightly than in, so it actually evaluates to “(not id) in ticket”, which is exactly why the code is confusing.)

A Less Confusing Version

Let’s clean this up so it’s less confusing.
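
(Again a sketch, this time with the explicit grouping the next paragraph describes:)

if (!(id in ticket)) {
  // ... give the ticket an id ...
}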

In this version of the code, I’m explicitly grouping the “in” statement before running the “!” (not) statement. The code is significantly more clear, already. I also avoided the JSHint error, which is what my ultimate goal was in this case.

If I want to take this one more step, however, I can make the code even more clear.
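
(A sketch of that version, using the hasId name discussed below:)

var hasId = (id in ticket);

if (!hasId) {
  // ... give the ticket an id ...
}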

In this version, I explicitly make the “in” check and assign it to a variable. Now my if statement is even easier to understand because I have given a name to the operation being performed. 

Naming A Concept

Reading “!hasId” is easier for our mind to parse and understand as “not has id”, than “not id in ticket”. We’ve given a name to the concept that is expressed in the code.

This form of abstraction (a named variable representing an operation result) is the basis by which we understand everything around us. You don’t look at a bottle of water and think about hydrogen and oxygen atoms, molecular bonds, etc. You think “bottle of water”.

Named abstractions are an important part of the way we think, and should be reflected in our code.

Of course this can go too far. Sometimes there isn’t a proper name for a line of code, and many times that line of code is already wrapped in a named function, which covers the concept.

Taking this idea too far is just as bad as not taking it far enough. The key is to find the balance where names are meaningful and representative of what’s going on, not just names for the sake of naming things.

P.S. If you’re interested in seeing how I set up and use JSHint, check out my WatchMeCode episode on using JSHint from Grunt.

Categories: Blogs

Agile Results Refresher

J.D. Meier's Blog - Wed, 03/25/2015 - 17:19

We live in amazing times. 

The world is full of opportunity at your fingertips.

You can inspire your mind and the art of the possible with TED talks.

You can learn anything with all of the Open Courseware from MIT or Wharton, or Coursera, or you can build your skills with The Great Courses or Udemy.

You can read about anything and fill your kindle with more books than you can read in this lifetime.

You can invest in yourself.  You can develop your intellectual horsepower, your emotional intelligence, your personal effectiveness, your communication skills, your relationship skills, and your financial intelligence.

You can develop your career, expand your experience, build your network, and grow your skills and abilities.  You can take on big hairy audacious goals.  You can learn your limits, build your strengths, and reduce your liabilities.

You can develop your body and your physical intelligence, with 4-minute workouts, P90X3 routines, Fitbits, Microsoft Band, etc.

You can expand your network and connect with people around the world, all four corners of the globe, from all walks of life, for all sorts of reasons.

You can explore the world, either virtually through Google Earth, or take real-world epic adventures.

You can fund your next big idea and bring it to the world with Kickstarter.

You can explore new hobbies and develop your talents, your art, your music, you name it.

But where in the world will you get time?

And how will you manage your competing priorities?

And how will you find and keep your motivation?

How will you wake up strong, with a spring in your step, where all the things you want to achieve in this lifetime, pull you forward, and help you rise above the noise of every day living?

That's not how I planned on starting this post, but it's a reminder of how the world is full of possibility, and how amazing your life can be when you come alive and you begin the journey to become all that you're capable of.

How I planned to start the post was this.  It's Spring.  It's time for a refresher in the art of Agile Results to help you bring out your best.

Agile Results is a simple system for meaningful results.  It combines proven practices for productivity, time management, and personal effectiveness to help you achieve more in less time, and enjoy the process.

It's a way to spend your best time and your best energy to get your best results.

Agile Results is a way to slow down to speed up, find more fulfillment, and put your ambition into practice.

Agile Results is a way to realize your potential, and to unleash your greatest potential.  Potential is a muscle that gets better through habits.

The way to get started with Agile Results is simple.  

  1. Daily Wins.  Each day, ask yourself what are Three Wins you want for today?   Maybe it's win a raving fan, maybe it's finish that thing that's been hanging over you, or maybe it's have a great lunch.  You can ask yourself this right here, right now -- what are Three Wins you want to accomplish today?
  2. Monday Vision.  On Mondays, ask yourself what are Three Wins you want for this week?  Imagine it’s Friday and you are looking back on the week: what are the Three Wins that you want under your belt?  Use these Three Wins for the Week to inspire you, all week long, and pull you forward.  Each week is a fresh start.
  3. Friday Reflection.  On Friday, ask yourself, what are three things going well, and what are three things to improve?  Use these insights to get better each week.  Each week is a chance to get better at prioritizing, focusing, and creating clarity around what you value, and what others value, and what's worth spending more time on.

For bonus, and to really find a path of fulfillment, there are three more habits you can add ...

  1. Three Wins for the Month.  At the start of each month, ask yourself, what are Three Wins you want for the month?  If you know your Three Wins for the Month, you can use these to make your months more meaningful.  In fact, a great practice is to pick a theme for the month, whatever you want your month to be about, and use that to make your month matter.  And, when you combine that with your Three Wins, not only will you enjoy the month more, but at the end of the month, you'll feel a better sense of accomplishment when you can talk about your Three Wins that you achieved, whether they are your private or public victories.  And, if the month gets off track, use your Three Wins to help you get back on track.  And, each month is a fresh start.
  2. Three Wins for the Year.  At the start of the year, ask yourself what are Three Wins you want to achieve for the year?  This could be anything from get to your fighting weight to take an epic adventure to write your first book.
  3. Ask better questions.   You can do this anytime, anywhere.  Thinking is just asking and answering questions.  If you want better answers, ask better questions.  You can exponentially improve the quality of your life by asking better questions.

A simple way that I remember this is I remember to think in Three Wins:

Think in terms of Three Wins for the Day, Three Wins for the Week, Three Wins for the Month, and Three Wins for the Year.

Those are the core habits of Agile Results in a nutshell. 

You can start with that and be well on your way to getting better results in work and life.

If you want to really master Agile Results, you can read the book, Getting Results the Agile Way: A Personal Results System for Work and Life.

It's been a best seller in time management, and it’s helped many people around the world create a better version of themselves.

If you want to take Agile Results to the next level, start a study group and share ways that you use Agile Results with each other around how you create work-life balance, create better energy, learn more faster, and unleash your potential, while enjoying the journey, and getting more from work and life.

Share your stories and results with me, with your friends, with your family, anyone, and everyone – help everybody live a little better.

Categories: Blogs

A Little Risk Goes A Long Way

Leading Agile - Mike Cottmeyer - Wed, 03/25/2015 - 14:51

Risk is a big topic, but for beginning agile teams (or any team) it can be calculated in a minimalist way as dependencies. To me, dependencies represent a binary probability: either they are resolved or they are not.

When paired with value, risk becomes super important to look at because it can tell you how much of that value is in danger. Specifically, we can use it to make informed decisions about investment.

Regarding value

For now, I will take a look at value through the lens of Net Present Value (NPV) because the program I am currently coaching is already using it. I am not advocating for NPV, just meeting the program where they currently stand. Boiling it way down… Net Present Value is used in capital budgeting to represent the profitability of an investment over time. In agile, this might be an investment in an epic or a feature or a capability, etc. You also might use story points as a super simple starting place until you figure out what value means in your organization. Net Present Value and a few other valuation methods can be seen here.
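
As a rough illustration, here is a minimal Python sketch of the textbook NPV calculation (the rate and cash flows are made-up numbers, not figures from the program above):

def npv(rate, cashflows):
    # cashflows[t] is the cash flow in year t; year 0 is the up-front investment
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# invest 100 now, receive 60 in each of the next two years, discounted at 10%
print(npv(0.10, [-100, 60, 60]))  # ~4.13, so the investment is (just) profitable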

Risk-adjusted Net Present Value (rNPV) has been used in pharmaceuticals to determine whether a capital investment is worth pursuing. The basic reason for using this method is that new medications are expensive… expensive expensive. Like $2.5 billion expensive to get one approved for the market. There is a high probability that drug trials won’t work out and the drug won’t make it to market on time, or at all. For that reason, risk of failure is assessed and applied to the valuation. That can inform investment decisions in large capital projects.

How About Doing That In The Small?

With proper breakdown of strategy, I encourage organizations to do some form of analysis for the next three months at a tactical level, to make sure they understand the level of risk and can determine the likelihood that they will realize a return on their investment. The risk exposure is smaller than NPV is usually concerned with, and we may be willing to take on a larger risk profile, but the problem is the same. If you have a relatively significant investment, you want a reasonable shot at realizing your return.

Looping in Agile

Take a look at programs and portfolios. Sure, there are plenty of risks that can prevent projects or features from succeeding, but to keep it simple, I am focusing on the binary dependency. It is done or not done. Therefore, the probability of the risk being removed is 50%. Continuing the math for a bit, each dependency on the outcome we target doubles the number of possible outcomes (2^n outcomes for n dependencies)… or in English:

If we have two dependencies, we can have four possible outcomes.
For Example:

  • Resolved | Resolved
  • Resolved | Failed
  • Failed | Resolved
  • Failed | Failed

That makes the probability 25% that we will have an outcome of Resolved | Resolved.

If we have three dependencies, we have 8 possible outcomes and thus a 12.5% probability, and so forth and so on. So now forget about NPV. We don’t care. We need a way to make good investment decisions for things that will actually come true. We also need this method to be lightweight.

Say 100 points are in a release and we have 2 dependencies that need to happen in order for us to release the features. We have a 25% probability that we will release 100 points, so the point value you can count on is 25 points, assuming all points are blocked by those dependencies. Granted, that’s not a value assessment, it’s a point assessment. But that’s an OK place to start.
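
That arithmetic is easy to sanity-check; a tiny Python sketch under the same assumptions (each dependency independently resolves with 50% probability, and every point is blocked by every dependency):

def deliverable_points(total_points, num_dependencies, p_resolved=0.5):
    # every dependency must resolve for the points to ship: p^n
    return total_points * p_resolved ** num_dependencies

print(deliverable_points(100, 2))  # 25.0 points we can count on
print(deliverable_points(100, 3))  # 12.5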

So Now What?

With this new information, there is a definitive incentive to remove risk early, really early in the release. As a matter of fact, I want relatively new teams that are just starting out to plan a release that has an 80% certainty of delivering its value. That means that before the release, we need to remove most of the dependencies. I have been toying with capturing this as a risk burndown. Here’s a quick wireframe of it.

[wireframe: risk burndown chart]

In this figure, we are burning down the percent of the release that is impacted by risk. Like I said, I like relatively new programs to hit 80% of their value delivery. We can use the percent impacted to drive risk points down below 20% before the start. It’s a pretty simple, good way to look at the probability that programs and portfolios are making responsible commitments, and to have the math to back it up. Speaking of math… this stuff can get a ton more complex. A great goal is to keep it simple so we can all relate. I know this is a little off math-wise, but it’s enough to inform and get people started toward developing reliable release plans and roadmaps that mean something.

The post A Little Risk Goes A Long Way appeared first on LeadingAgile.

Categories: Blogs

i’m terrified right now… and I hope I stay that way

Derick Bailey - new ThoughtStream - Wed, 03/25/2015 - 12:00

In less than one week of writing this, I’ll be doing my first keynote talk at a conference – the 2015 SpaceCityJS conference.

Sitting here, typing this, I’m thinking about the talk that I’m going to do… and my stomach is turning. My heart is racing. I want to crawl under a desk and hide. I want to call the conference organizer and tell him I’m sick.

The spot-light on me… the entire conference looking at me, expecting me to say something meaningful. I’m terrified. I don’t want to do it. Don’t make me do it.

The Terrified, Stomach Turning Feeling

I’ve done more than a few conference sessions, musical performances in front of large audiences, acted in plays and other public things up on stage. I’m not new to this.

And I seriously hope I never lose this feeling of being terrified by what I’m about to do.

A Strong Correlation

In all the times I’ve been up in front of people, doing some sort of performance, I have found one particular correlation to hold up almost perfectly:

the higher my stress level, the better I do

There are multiple factors at play, here – it’s not one simple, single thing. But time and time again, I find this to be true – for both the benefit and detriment of my talks / performances.

A Miserable Failure

I know what failure looks like, standing in front of people. I screwed up so badly in front of an audience once, that half the people left the room after I had whizzed through my material in less than half the time I was allotted.

I planned the talk for the wrong audience and didn’t realize it until I was standing in front of everyone. I was unable to recover. It was so bad, the conference organizer asked me if I was ok afterward… and he wasn’t even in the session to see me fail. He heard about it from multiple other people.

An Outstanding Success

I recently found myself wanting to hide in a corner and cry just before a talk. I was incredibly nervous about doing a session with zero code, and no demonstrations. I had never done that before.

It turned out to be one of my best talks, ever.

The emotion, the passion, the conveyance of exactly what I was intending through the stories of my experience – it all came together near perfectly. There were more than a few people in the audience who told me that it was the best talk of the conference.

Stretching Myself

There are multiple reasons that my stress level is a good indicator of success – not the least of which is the fact that I am stretching myself to do new things. It’s part of why I keep changing up how I do my presentations, and which talk I give.

I don’t like to give the same talk over and over and over again. I prefer to practice one to death, become incredibly nervous about it, deliver it like a preacher at revival and record that one session so that I never have to give that talk again.

Doing things this way means I never have the comfort of giving the same talk twice, and that’s what I want.

I Want This Gut-Wrenching Feeling

Not because I enjoy the nervousness and sick feelings – that part is truly terrifying and awful. But when I get this way, I know it’s because I am stretching myself to do new things.

I want this horrible, sick feeling because it means I care about what I’m doing. It means I understand that I can really screw this up, and I really don’t want to. It means I care enough to make sure I have every detail right, because I am not confident enough to breeze through it (unlike the over-confidence that turned into a total fail with the wrong audience).

This gut-wrenching sick feeling that I have right now, means I am growing in some new way and doing something new and potentially amazing … and potentially terrifying and horrible, too.

Not Just Speaking Engagements

A little over a year ago, I convinced my client to let me build a custom batch process scheduling and execution system. We had evaluated a lot of off-the-shelf solutions, and none of them did everything we needed. I recommended an architecture that I was unsure of, but with the confidence that I could learn it in time and create a solution that worked well. It was another gut-wrenching moment when I suggested that solution and they accepted.

Last week (from the time of writing this), that system went live and is working well. The go-live was another nausea-inducing scenario for me.

There are so many scenarios that I put myself into, where I feel completely and totally out of my element. But I continue to do so because I believe I can step into those roles and into the knowledge and experience that I need for that situation.

No Reward Without Risk

I am giving my first keynote at a conference in less than a week. It’s terrifying, but it’s going to be worth it.

If I can pull this off, I can work my way into bigger and more prestigious conferences. I can turn the exposure into a larger audience for my mailing list. I can grow my own personal audience with other developers who think I have something valuable to say and to offer.

Yup. I can totally screw this up, and deliver a horrendously awkward keynote. I could put people to sleep or make them all incredibly uncomfortable. I may even have the wrong audience in mind, again. It scares me. A lot.

But it’s going to be worth it if I can pull it off.

But Not Too Much Risk

In spite of the risk involved, and in spite of my stomach tying itself in knots right now, this isn’t a huge leap for me.  I’ve done dozens of talks and more performances of other types than I can count.

This is not a giant leaping attempt to move in to something completely unknown to me. This is another step in the direction that I want to head.

Each step comes with another round of nausea and terror. But it’s only one step. Even if I fall off that step, I’ll probably land on the previous one or two steps where I have my current foundation. From there, I can figure out what I did wrong and where to go next (if I want to go somewhere different).

Never Lose That Terrified Sickness

If you do lose that desire to hide under your desk, you might be sitting in your comfort zone a little too much. You might not be valuing your own abilities or contributions. You might not be taking enough risk to move yourself forward.

Of course there are times when you need the comfort of familiarity. It’s a safe place to be, to recharge yourself and prepare for what lies ahead.  But don’t let yourself lose focus while you rest in your all-too-familiar surroundings.

The comfort you need today should be replaced with fear, uncertainty and doubt that you face head-on, tomorrow.

– Derick

Categories: Blogs

Topic Modelling: Working out the optimal number of topics

Mark Needham - Wed, 03/25/2015 - 00:33

In my continued exploration of topic modelling, I came across The Programming Historian blog and a post showing how to derive topics from a corpus using the Java library MALLET.

The instructions on the blog make it very easy to get up and running, but as with other libraries I’ve used, you have to specify how many topics the corpus consists of. I’m never sure what value to select, but the authors make the following suggestion:

How do you know the number of topics to search for? Is there a natural number of topics? What we have found is that one has to run the train-topics with varying numbers of topics to see how the composition file breaks down. If we end up with the majority of our original texts all in a very limited number of topics, then we take that as a signal that we need to increase the number of topics; the settings were too coarse.

There are computational ways of searching for this, including using MALLETs hlda command, but for the reader of this tutorial, it is probably just quicker to cycle through a number of iterations (but for more see Griffiths, T. L., & Steyvers, M. (2004). Finding scientific topics. Proceedings of the National Academy of Science, 101, 5228-5235).

Since I haven’t yet had the time to dive into the paper or explore how to use the appropriate option in MALLET, I thought I’d do some variations on the stop words and number of topics and see how that panned out.

As I understand it, the idea is to try and get a uniform spread of topics across documents, i.e. we don’t want all the documents to have the same topic, otherwise any topic similarity calculations we run won’t be that interesting.

I tried running MALLET with 10, 15, 20 and 30 topics and also varied the stop words used. I had one version which just stripped out the main characters and the word ‘narrator’, and another where I stripped out the top 20% of words by occurrence and any words that appeared less than 10 times.

The reason for doing this was that it should identify interesting phrases across episodes better than TF/IDF can while not just selecting the most popular words across the whole corpus.

I used MALLET from the command line and ran it in two parts.

  1. Generate the model
  2. Work out the allocation of topics and documents based on hyperparameters

I wrote a script to help me out:

#!/bin/sh
 
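# train_model: import the transcripts into a MALLET model
# $1 = extra stop words file, $2 = output model file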
train_model() {
  ./mallet-2.0.7/bin/mallet import-dir \
    --input mallet-2.0.7/sample-data/himym \
    --output ${2} \
    --keep-sequence \
    --remove-stopwords \
    --extra-stopwords ${1}
}
 
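# extract_topics: train a topic model from an imported model
# $1 = number of topics, $2 = input model file, $3 = label used in the output file names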
extract_topics() {
  ./mallet-2.0.7/bin/mallet train-topics \
    --input ${2} --num-topics ${1} \
    --optimize-interval 20 \
    --output-state himym-topic-state.gz \
    --output-topic-keys output/himym_${1}_${3}_keys.txt \
    --output-doc-topics output/himym_${1}_${3}_composition.txt
}
 
train_model "stop_words.txt" "output/himym.mallet"
train_model "main-words-stop.txt" "output/himym.main.words.stop.mallet"
 
extract_topics 10 "output/himym.mallet" "all.stop.words"
extract_topics 15 "output/himym.mallet" "all.stop.words"
extract_topics 20 "output/himym.mallet" "all.stop.words"
extract_topics 30 "output/himym.mallet" "all.stop.words"
 
extract_topics 10 "output/himym.main.words.stop.mallet" "main.stop.words"
extract_topics 15 "output/himym.main.words.stop.mallet" "main.stop.words"
extract_topics 20 "output/himym.main.words.stop.mallet" "main.stop.words"
extract_topics 30 "output/himym.main.words.stop.mallet" "main.stop.words"

As you can see, this script first generates a bunch of models from text files in ‘mallet-2.0.7/sample-data/himym’ – there is one file per episode of HIMYM. We then use that model to generate differently sized topic models.

The output is two files; one containing a list of topics and another describing what percentage of the words in each document come from each topic.

$ cat output/himym_10_all.stop.words_keys.txt
 
0	0.08929	back brad natalie loretta monkey show call classroom mitch put brunch betty give shelly tyler interview cigarette mc laren
1	0.05256	zoey jerry arthur back randy arcadian gael simon blauman blitz call boats becky appartment amy gary made steve boat
2	0.06338	back claudia trudy doug int abby call carl stuart voix rachel stacy jenkins cindy vo katie waitress holly front
3	0.06792	tony wendy royce back jersey jed waitress bluntly lucy made subtitle film curt mosley put laura baggage officer bell
4	0.21609	back give patrice put find show made bilson nick call sam shannon appartment fire robots top basketball wrestlers jinx
5	0.07385	blah bob back thanksgiving ericksen maggie judy pj valentine amanda made call mickey marcus give put dishes juice int
6	0.04638	druthers karen back jen punchy jeanette lewis show jim give pr dah made cougar call jessica sparkles find glitter
7	0.05751	nora mike pete scooter back magazine tiffany cootes garrison kevin halloween henrietta pumpkin slutty made call bottles gruber give
8	0.07321	ranjit back sandy mary burger call find mall moby heather give goat truck made put duck found stangel penelope
9	0.31692	back give call made find put move found quinn part ten original side ellen chicago italy locket mine show
$ head -n 10 output/himym_10_all.stop.words_composition.txt
#doc name topic proportion ...
0	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/1.txt	0	0.70961794636687	9	0.1294699168584466	8	0.07950442338871108	2	0.07192178481473664	4	0.008360809510263838	5	2.7862560133367015E-4	3	2.562409242784946E-4	7	2.1697378721335337E-4	1	1.982849604752168E-4	6	1.749937876710496E-4
1	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/10.txt	2	0.9811551470820473	9	0.016716882136209997	4	6.794128563082893E-4	0	2.807350575301132E-4	5	2.3219634098530471E-4	8	2.3018997315244256E-4	3	2.1354177341696056E-4	7	1.8081798384467614E-4	1	1.6524340216541808E-4	6	1.4583339433951297E-4
2	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/100.txt	2	0.724061485807234	4	0.13624729774423758	0	0.13546964196228636	9	0.0019436342339785994	5	4.5291919356563914E-4	8	4.490055982996677E-4	3	4.1653183421485213E-4	7	3.5270123154213927E-4	1	3.2232165301666123E-4	6	2.8446074162457316E-4
3	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/101.txt	2	0.7815231689893246	0	0.14798271520316794	9	0.023582384458063092	8	0.022251052243582908	1	0.022138209217973336	4	0.0011804626661380394	5	4.0343527385745457E-4	3	3.7102343418895774E-4	7	3.1416667687862693E-4	6	2.533818368250992E-
4	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/102.txt	6	0.6448245189567259	4	0.18612146979166502	3	0.16624873439661025	9	0.0012233726722317548	0	3.4467218590717303E-4	5	2.850788252495599E-4	8	2.8261550915084904E-4	2	2.446611421432842E-4	7	2.2199909869250053E-4	1	2.028774216237081E-
5	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/103.txt	8	0.7531586740033047	5	0.17839539108961253	0	0.06512376460651902	9	0.001282794040111701	4	8.746645156304241E-4	3	2.749100345664577E-4	2	2.5654476523149865E-4	7	2.327819863700214E-4	1	2.1273153572848481E-4	6	1.8774342292520802E-4
6	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/104.txt	7	0.9489502365148181	8	0.030091466847852504	4	0.017936457663121977	9	0.0013482824985091328	0	3.7986419553884905E-4	5	3.141861834124008E-4	3	2.889445824352445E-4	2	2.6964174000656E-4	1	2.2359178288566958E-4	6	1.9732799141958482E-4
7	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/105.txt	8	0.7339694064061175	7	0.1237041841318045	9	0.11889696041555338	0	0.02005288536233353	4	0.0014026751618923005	5	4.793786828705149E-4	3	4.408655780020889E-4	2	4.1141370625324785E-4	1	3.411516484151411E-4	6	3.0107890675777946E-4
8	file:/Users/markneedham/projects/mallet/mallet-2.0.7/sample-data/himym/106.txt	5	0.37064909999661005	9	0.3613559917055785	0	0.14857567731040344	6	0.09545466082502917	4	0.022300625744661403	8	3.8725629469313333E-4	3	3.592484711785775E-4	2	3.3524900189121E-4	7	3.041961449432886E-4	1	2.779945050112539E-4

The output is a bit tricky to understand on its own, so I did a bit of post processing using pandas and then ran the results through matplotlib to see the distribution of documents across topics for different numbers of topics and different stop word lists. You can see the script here.
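The real script is the one linked above; purely as illustration, a minimal sketch of that post processing might look something like this, assuming the composition file format shown in the head output (doc id, file name, then topic/proportion pairs sorted by proportion):

import pandas as pd
import matplotlib.pyplot as plt

rows = []
with open("output/himym_10_all.stop.words_composition.txt") as f:
    for line in f:
        if line.startswith("#"):
            continue  # skip the header line
        parts = line.strip().split("\t")
        doc_id, name, pairs = parts[0], parts[1], parts[2:]
        # pairs are sorted by proportion, so the first pair is the dominant topic
        rows.append({"doc": name,
                     "topic": int(pairs[0]),
                     "proportion": float(pairs[1])})

df = pd.DataFrame(rows)

# count how many documents have each topic as their dominant topic
df["topic"].value_counts().sort_index().plot(kind="bar")
plt.xlabel("topic")
plt.ylabel("documents where topic is dominant")
plt.show()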

I ended up with the following chart:

[Chart: distribution of documents across topics for each number of topics and stop word configuration]

On the left hand side we're using more stop words and on the right just the main ones. For most of the variations there are one or two topics which most documents belong to, but interestingly the most uniform distribution seems to occur when we have few topics.

These are the main words for the most popular topics on the left hand side:

15 topics

8       0.50732 back give call made put find found part move show side ten mine top abby front fire full fianc

20 topics

12      0.61545 back give call made put find show found part move side mine top front ten full cry fire fianc

30 topics

22      0.713   back call give made put find show part found side move front ten full top mine fire cry bottom

All contain more or less the same words, which at first glance seem like quite generic words, so I'm surprised they weren't excluded.

On the right hand side we haven’t removed many words so we’d expect common words in the English language to dominate. Let’s see if they do:

10 topics

1       3.79451 don yeah ll hey ve back time guys good gonna love god night wait uh thing guy great make

15 topics

5       2.81543 good time love ll great man guy ve night make girl day back wait god life yeah years thing
 
10      1.52295 don yeah hey gonna uh guys didn back ve ll um kids give wow doesn thing totally god fine

20 topics

1       3.06732 good time love wait great man make day back ve god life years thought big give apartment people work
 
13      1.68795 don yeah hey gonna ll uh guys night didn back ve girl um kids wow guy kind thing baby

30 topics

14      1.42509 don yeah hey gonna uh guys didn back ve um thing ll kids wow time doesn totally kind wasn
 
24      2.19053 guy love man girl wait god ll back great yeah day call night people guys years home room phone
 
29      1.84685 good make ve ll stop time made nice put feel love friends big long talk baby thought things happy

Again we have similar words across each run and, as expected, they are all quite generic.

My takeaway from this exploration is that I should vary the stop word percentages as well and see whether that leads to an improved distribution.

Taking out very common words, as we do for the left hand side charts, seems to make sense, although I need to work out why there's a single outlier in each group.

The authors suggest that having the majority of our texts in a small number of topics means we need to create more of them, so I will investigate that too.

The code is all on GitHub along with the transcripts, so give it a try and let me know what you think.

Categories: Blogs

Basecamp 5

Leading Agile - Mike Cottmeyer - Wed, 03/25/2015 - 00:02

The post Basecamp 5 appeared first on LeadingAgile.

Categories: Blogs

Basecamp 4

Leading Agile - Mike Cottmeyer - Wed, 03/25/2015 - 00:02

The post Basecamp 4 appeared first on LeadingAgile.

Categories: Blogs

Basecamp 3

Leading Agile - Mike Cottmeyer - Wed, 03/25/2015 - 00:02

The post Basecamp 3 appeared first on LeadingAgile.

Categories: Blogs

Basecamp 2

Leading Agile - Mike Cottmeyer - Wed, 03/25/2015 - 00:02

The post Basecamp 2 appeared first on LeadingAgile.

Categories: Blogs

Basecamp 1

Leading Agile - Mike Cottmeyer - Wed, 03/25/2015 - 00:00

The post Basecamp 1 appeared first on LeadingAgile.

Categories: Blogs

How to Capture Scrum Master Feedback

Illustrated Agile - Len Lagestee - Tue, 03/24/2015 - 23:30

As a part of the Scrum Master Performance Review series of posts, an emphasis was placed on obtaining feedback for the Scrum Masters reporting to you. Step 5 focused on receiving feedback from the product owner while step 6 focused on gathering feedback from members of the team.

In the past, I have used a set of questions to help foster the conversation. I would typically ask all of the product owners and a sample group of team members a subset or variation of the questions listed below.

Before reviewing these questions, I would like to reiterate a couple of key points:

Make asking for feedback a part of normal conversation. While it may be necessary in your organization to capture feedback in a formal setting, find ways to leverage informal venues and settings. This is often a more powerful approach than a questionnaire or survey. Take a product owner or team member out for a coffee to gain their perspective on team dynamics and use a few of the questions below as a conversation starter.

See for yourself. Take the time to observe a Scrum Master in action firsthand. This will help you understand the true purpose and impact of the role and begin to empathize with the Scrum Master's current situation. Here are a few posts to help you know what to look for: 3 Things to Observe in a Sprint Review, A Manager Guide to Attend Agile Team Events.

Build triads. Ask the individual providing feedback to speak directly to the Scrum Master, especially when good things are happening. Developing this habit of openly sharing with each other is a powerful sign of the strength of the culture. The encouragement could be just what your Scrum Master needs as well.

Check yourself. If you are a Scrum Master, use a few of these questions to periodically self-assess your own growth and impact in the role. Perhaps you can use these questions to start a conversation with your manager about areas for improvement and coaching.

Now, on to the questions. Like the Scrum Master Interview Questions and the Scrum Master Performance Review posts, each section aligns to the Role of the Scrum Master diagram.

SHAPE TEAM EXPERIENCES
Do your sprint ceremonies feel effective and worthwhile or is the team just “going through the motions?”

Is every voice on the team heard? Does everyone participate and engage?

If your team has remote members, are they fully integrated and connected to the team?

Do the roles on your team work together or are your sprints “mini-waterfalls”?

Does your team generally have a celebratory feel to it?

On a scale of 1-10, how would you describe the relationships on your team?
1 – Dysfunctional, broken  5 – Normal, friendly  10 – Very close, strong bonds

RADIATE INFORMATION
Are your task walls well-organized, effective, and emitting a sense of constant movement and activity?

Is your team actively updating and sharing progress indicators (i.e. burn-down, burn-up, cumulative flow charts)?

On a scale of 1-10, rate the effectiveness of your task wall and progress indicators.
1 – Non-existent  5 – Good information but not being used  10 – Well-organized and very useful

FOSTER TEAM HEALTH
Does your team have a sense of self-healing? Is it able to resolve its own internal team issues?

If so, share an example of how the team recently self-healed.

Does your team pull together to resolve issues or challenging tasks?

On a scale of 1-10, rate the effectiveness of the team retrospectives.
1 – Not happening  5 – Somewhat effective (driving internal team change)  10 – Very effective (driving organizational change)

MAINTAIN FLOW
How frequently does your team have stories carry over to the next sprint? Is this a problem?

Does your team need to consistently perform “heroic” efforts to live into sprint forecasts/commitments?

How frequently does your team deploy or deliver product to actual users/customers?

How well does the team thrive when the product backlog changes or priorities shift?

On a scale of 1-10, how satisfied are you with the ability of your Scrum Master to coach and facilitate your team into a state of flow?
1 – Not at all satisfied  5 – Somewhat satisfied  10 – Extremely satisfied

CHANGE YOUR COMMUNITY
Are retrospective items emerging from the team helping to drive broader organization improvement? If so, how?

How well is the Scrum Master working with other teams to coordinate dependencies and reduce friction?

On a scale of 1-10, how much influence does your Scrum Master have in driving organizational improvement and positive culture change?
1 – Negligible  5 – Average  10 – Powerhouse

REMOVE IMPEDIMENTS
Are there lingering impediments or dysfunctions keeping your team from being in flow? If so, what are they?

Is your Scrum Master providing the necessary energy level to effectively remove team and organizational impediments?

On a scale of 1-10, rate your Scrum Master's effectiveness at relentlessly removing impediments.
1 – Not effective  5 – Somewhat effective  10 – Very effective

OVERALL
Are you proud of how your Scrum Master has been able to help coach and facilitate your team to greater agility?

On a scale of 1-10, rate your overall team productivity.
1 – Very low  5 – Average  10 – Extremely productive

On a scale of 1-10, rate your overall happiness with your product and team.
1 – Miserable  5 – OK  10 – Ecstatic

Feel free to add your suggestions below!

Becoming a Catalyst - Scrum Master Edition

The post How to Capture Scrum Master Feedback appeared first on Illustrated Agile.

Categories: Blogs

Playmakers Assemble!

Portia Tung - Selfish Programming - Tue, 03/24/2015 - 23:06
Play at Work The Meaning of Life

Did you know that in Chinese, the word “business” is made up of two characters that translate to “meaning” and “life”?

Given that we spend more than 70% of our waking hours doing work or work-related activities, don’t you wish we could all make more of our meaning of life?

Come join me in an interactive presentation on Play to help take you and your team from good to great:

Learn more about how to transform work through play complete with my talk and presentation at the Playmaking site!

Thanks to everyone at Playcamp London for their enthusiasm for play and learning. Play, like Change, begins with ourselves.

Categories: Blogs

Coding Kids, it is about Democracy

Dear Junior
These days it seems like a lot of people think it is important to teach kids to code. I would like to chip in my idea about why this is important. To me, it boils down to a question about democracy, but let us not start there.
Code as a Profession
The first argument is about the job market. There is no doubt that today we are already short of good system developers, and all predictions and forecasts say that this is going to get worse. So, going for a career in system development seems like a good hedge. This is the argument seen from the perspective of the individual kid. On a side note, I'd like to add that I personally think system development is an interesting mix of creativity, intellectual rigour, and cooperation; so apart from being a job that will probably be in demand, it is also a job that is quite fun, when done right.
This argument also has a macro-economic side: companies need system developers to be able to continue their development. With a lack of skilled developers, companies' innovation pace might come to a halt, which would harm society at large.
So, learning to code is obviously a good way to take the first steps towards a job in systems development.
A historical parallel is that there was a time when reading and writing were only relevant for a small number of professions: handling written text was in those days mainly done by priests and medieval administrators. Written text was irrelevant to the rest of society.
This has definitely been true for code. Code was essential for the coding professions - a limited number of engineers and researchers - but not outside those circles. Code was irrelevant to the rest of society.
However, I think that has changed. Code is no longer important only for those who work as programmers. Today the ability to code is, to some degree, fruitful in other professions.
Code in more Professions
Thus, the second argument is that more jobs will involve code. If we look outside the profession of system developers, there are already places where coding helps people do their jobs better. Imagine Susan in HR, who is working on gender equality issues and is curious about how salary levels compare across different departments and different ages with respect to gender.
Susan has all the data at hand, perhaps in a few spreadsheets, perhaps in some simple database. Now, she could ask the analysis department to do some statistics which she could analyse to see if there is something to her hypothesis. That would probably take a day or two before they come back. But if Susan herself can put together some SQL queries or some scripts, she can get the result the same afternoon. The difference in how well she can do her job - analysing whether there are salary differences that should not be there - is significant.
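Purely to make the example concrete, the kind of query Susan might put together could look like the sketch below; the database file, table, and column names are all invented for illustration.

import sqlite3

# hypothetical HR database with a salaries(department, gender, salary) table
conn = sqlite3.connect("hr.db")

query = """
    SELECT department, gender,
           AVG(salary) AS avg_salary,
           COUNT(*)    AS headcount
    FROM salaries
    GROUP BY department, gender
    ORDER BY department, gender
"""

# print average salary per department and gender, side by side
for row in conn.execute(query):
    print(row)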
Coding is not just a profession in and of itself. It is also already a helpful tool in other professions, and I think it will soon be essential.
A historical parallel could be the beginning of the industrial revolution. At this point written text stopped being relevant only to the text professions of priests and administrators. Suddenly factory workers were supposed to be able to read written instructions or work orders. To start with only the foremen needed this, but soon enough literacy was expected from all workers.
I think this is where code and society is today.
Code in Society
Looking forward, I think it will not stop here. I think understanding of code will become even more important.
Let me start with the historical parallel. After the industrial revolution, society evolved together with the usage of text. Text was woven into the fabric of society. People who could read were able to follow the news in the spreading newspapers. People who could write were able to participate, writing letters to the editors. In parallel, civil associations arose, such as workers' unions and temperance movements. These were all possible because their members were able to read and write, to document their meetings in protocols, to create pamphlets to spread their ideas, and so on. The rise of modern democracy was fuelled by people knowing text.
People who could not read or write were soon standing at the side watching society evolve, more or less unable to participate.
I personally had a revealing experience. On a journey in China in the mid-90s I found myself standing at a crossroads in the middle of Beijing. Everywhere I looked I could see text, massive amounts of text. Every shop I could see had signs promoting its wares, there were banners and posters, there were road signs, there were advertisements for newspapers. Text everywhere, a massive amount of information, a world that everyone around me navigated through without effort. And I did not understand a single thing.
Here and there I could see a sign I could recognise - the sign for "north", the sign for "water" and so on. But I was feeling totally alienated. Watching the people around me naturally navigating this ocean of information, it felt like being surrounded by magic.
It was not hard to relate to what it must have felt like to be illiterate while society quickly evolved, suddenly just assuming that people in general would understand text and could participate.
I think this is where code is going. 
A World of Automated Processes
Already today we are surrounded by automated processes - powered by code. Most of them are pretty trivial, for example paying for parking with your credit card by checking in and checking out. Some of them are a little more complicated, e.g. when you order goods online and can track them until they arrive at your post office, coupled with status notifications to your email or phone. Some are pretty complicated. In Sweden the taxation rules for partnership incorporated companies are somewhat complicated - there are several sets of rules to choose from and they give different results. However, when declaring taxes online on the tax authority's web site, the system helps you choose the set of rules that is most beneficial to you. Even though I know it is just code, it feels a little bit like magic.
We are probably only at the beginning of this evolution. Within a few decades, automated processes will be in many more places, they will be much more complex, and they will probably be interlinked. They will be part of the fabric that constitutes society.
Those people who know code will be able to understand, to participate, to protest, to support. The people who do not understand code will be standing at the side, watching society evolve, more or less unable to participate.
I realise this might seem like jumping to conclusions. But think back. When industrialisation was new and reading was mostly used by foremen to read work orders and instructions - did it seem probable that reading and writing would transform society, making those skills essential to participate in and affect society at large?
I think that understanding code will be the literacy of tomorrow.
Democracy
Teaching kids to code is not just about opening the door to the programming professions; it is not just about giving them a better chance to do their non-programming jobs well; it is about giving them the foundations they need to participate in society; it is about democracy.
That seems pretty important to me.
Yours
  Dan

Categories: Blogs

Clean Tests: Isolation with Fakes

Jimmy Bogard - Tue, 03/24/2015 - 17:58

Other posts in this series:

So far in this series, I’ve walked through different modes of isolation – from internal state using child containers and external state with database resets and Respawn. In my tests, I try to avoid fakes/mocks as much as possible. If I can control the state, isolating it, then I’ll leave the real implementations in my tests.

There are some edge cases in which there are dependencies that I can’t control – web services, message queues and so on. For these difficult to isolate dependencies, fakes are acceptable. We’re using AutoFixture to supply our mocks, and child containers to isolate any modifications. It should be fairly straightforward then to forward mocks in our container.

As far as mocking frameworks go, I try to pick the mocking framework with the simplest interface and the least amount of features. More features is more headache, as mocking frameworks go. For me, that would be FakeItEasy.

First, let’s look at a simple scenario of creating a mock and modifying our container.

Manual injection

We’ve got our libraries added, now we just need to add a way to create a fake and inject it into our child container. Since we’ve built an explicit fixture object, this is the perfect place to put our code:

public T Fake<T>()
{
    var fake = A.Fake<T>();

    Container.EjectAllInstancesOf<T>();
    Container.Inject(typeof(T), fake);

    return fake;
}

We create the fake using FakeItEasy, then inject the instance into our child container. Because we might have some existing instances configured, I use “EjectAllInstancesOf” to purge any configured instances. Once we’ve injected our fake, we can now both configure the fake and use our container to build out an instance of a root component. The code we’re trying to test is:

public class InvoiceApprover : IInvoiceApprover
{
    private readonly IApprovalService _approvalService;

    public InvoiceApprover(IApprovalService approvalService)
    {
        _approvalService = approvalService;
    }

    public void Approve(Invoice invoice)
    {
        var canBeApproved = _approvalService.CheckApproval(invoice.Id);

        if (canBeApproved)
        {
            invoice.Approve();
        }
    }
}

In our situation, the approval service is some web service that we can’t control and we’d like to stub that out. Our test now becomes:

public class InvoiceApprovalTests
{
    private readonly Invoice _invoice;

    public InvoiceApprovalTests(Invoice invoice,
        SlowTestFixture fixture)
    {
        _invoice = invoice;

        var mockService = fixture.Fake<IApprovalService>();
        A.CallTo(() => mockService.CheckApproval(invoice.Id)).Returns(true);

        var invoiceApprover = fixture.Container.GetInstance<IInvoiceApprover>();

        invoiceApprover.Approve(invoice);
        fixture.Save(invoice);
    }

    public void ShouldMarkInvoiceApproved()
    {
        _invoice.IsApproved.ShouldBe(true);
    }

    public void ShouldMarkInvoiceLocked()
    {
        _invoice.IsLocked.ShouldBe(true);
    }
}

Instead of using FakeItEasy directly, we go through our fixture instead. Once our fixture creates the fake, we can use the fixture’s child container directly to build out our root component. We configured the child container to use our fake instead of the real web service – but this is encapsulated in our test. We just grab a fake and start going.

The manual injection works fine, but we can also instruct AutoFixture to handle this a little more intelligently.

Automatic injection

We’re trying to get out of creating the fake and root component ourselves – that’s what AutoFixture is supposed to take care of, creating our fixtures. We can instead create an attribute that AutoFixture can key into:

[AttributeUsage(AttributeTargets.Parameter)]
public sealed class FakeAttribute : Attribute { }

Instead of building out the fixture items ourselves, we go back to AutoFixture supplying them, but now with our new Fake attribute:

public InvoiceApprovalTests(Invoice invoice, 
    [Fake] IApprovalService mockService,
    IInvoiceApprover invoiceApprover,
    SlowTestFixture fixture)
{
    _invoice = invoice;

    A.CallTo(() => mockService.CheckApproval(invoice.Id)).Returns(true);

    invoiceApprover.Approve(invoice);
    fixture.Save(invoice);
}

In order to build out our fake instances, we need to create a specimen builder for AutoFixture:

public class FakeBuilder : ISpecimenBuilder
{
    private readonly IContainer _container;

    public FakeBuilder(IContainer container)
    {
        _container = container;
    }

    public object Create(object request, ISpecimenContext context)
    {
        // We only handle test method parameters
        var paramInfo = request as ParameterInfo;

        if (paramInfo == null)
            return new NoSpecimen(request);

        // And only those parameters decorated with our [Fake] attribute
        var attr = paramInfo.GetCustomAttribute<FakeAttribute>();

        if (attr == null)
            return new NoSpecimen(request);

        // Call A.Fake<T>() via reflection, since the type is only known at runtime
        var method = typeof(A)
            .GetMethod("Fake", Type.EmptyTypes)
            .MakeGenericMethod(paramInfo.ParameterType);

        var fake = method.Invoke(null, null);

        // Register the fake in the child container so resolved components use it
        _container.Configure(cfg => cfg.For(paramInfo.ParameterType).Use(fake));

        return fake;
    }
}

It’s the same code as inside our context object’s “Fake” method, made a tiny bit more verbose since we’re dealing with type metadata. Finally, we need to register our specimen builder with AutoFixture:

public class SlowTestsCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        var contextFixture = new SlowTestFixture();

        fixture.Register(() => contextFixture);

        fixture.Customizations.Add(new FakeBuilder(contextFixture.Container));
        fixture.Customizations.Add(new ContainerBuilder(contextFixture.Container));
    }
}

We now have two options when building out fakes – manually through our context object, or automatically through AutoFixture. Either way, our fakes are completely isolated from other tests but we still build out our root components we’re testing through our container. Building out through the container forces our test to match what we’d do in production as much as possible. This cuts down on false positives/negatives.

That’s it for this series on clean tests – we looked at isolating internal and external state, using Fixie to build out how we want to structure tests, and AutoFixture to supply our inputs. At one point, I wasn’t too interested in structuring and refactoring test code. But having been on projects with lots of tests, I’ve found that tests retain their value when we put thought into their design, favor composition over inheritance, and try to keep them as tightly focused as possible (just like production code).


Categories: Blogs