
Dipping Toes in Other Development Communities

My dad, even with a serious head injury that ended his working life, regularly attended a computer club where he could chat with a community of fellow geeks who accepted him head injury and all. I was always glad he had somewhere he could be accepted, but I never realized I’d end up attending and helping to run programming user groups in my own career.

I first went to a Java Users Group in Sacramento, SacJUG. I had been getting deeper into Java and figured it was time to see what the local community was like. I found a home with some fellow geeks and attended regularly for the next ten years or so. I eventually hired multiple developers I met at SacJUG, and I appreciated being able to geek out on the language and share war stories. A few years later I got deeply into Ruby in my spare time with the advent of Rails and eventually helped set up the Sacramento Ruby Meetup. It would be a few more years until I got paid to do Ruby, but I met some great developers along the way and I still attend regularly. Only a year or so later I helped found the Sacramento Groovy User’s Group, which continues today as essentially a JVM languages group.

All this experience with user groups has led me to experiment with visiting other user groups from time to time. A few months ago I showed up for an Angular group and met a lot of front-end specific developers I don’t mix with regularly.

If you haven’t tried attending a user group, I encourage you to give it a shot. It only costs you an hour or two and the benefits are worth it. Things like:

  • Seeing the size of, say, the Node.js community in your town and getting a sense of how a new language or toolset is catching on.
  • Getting exposure to something new with a group of programmers.
  • Meeting fellow developers who are trying to come up to speed or stay on top of technologies.
  • Finding a great new candidate for your shop. The developers who regularly attend user groups tend to be more motivated and engaged employees and you can get a pretty good sense of their skills just from chatting.
  • Getting out of the house, since many of us are introverts. With the shared context, it’s much easier than, say, the annual holiday party.
  • Practicing speaking in a low-stress atmosphere, if you work up the nerve.
  • A sense of whether a particular language/community is on the rise or fall.
Categories: Blogs

Scrum Mythbusters

TV Agile - Mon, 07/13/2015 - 18:52
As Scrum popularity continues to grow, so do associated myths and misunderstandings. This session will debunk many of these common misinterpretations of the world’s most popular Agile Framework. Some of the myths that will be debunked during this session include: * Scrum can’t work for production support * Scrum can’t work for teams practicing continuous […]
Categories: Blogs

Vision to User Stories – What is the Best Flow?

Notes from a Tool User - Mark Levison - Mon, 07/13/2015 - 16:16

In a recent Product Owner Course I was asked to provide a picture of the flow from Vision to User Stories, with all the steps in between.

I think the attendee was hoping for something like:

User Stories Flow

There are a couple of challenges.  Scrum, being a framework, doesn’t tell the Product Owner or the Dev Team (aka Doers) how to do their work. As a result, none of these particular tools are required and some excellent ones (such as Personas and Impact Mapping) are missing from this simple picture.

The deeper problem is that a picture like this implies a strict linear flow. But there is no standard flow or right order. Each tool is independent of the others.

A picture, were we to draw it, would be more like an interconnected web, with each tool leading to every other and, just as important, gaining feedback from the tools that come after it.

Some examples to help:

In the first Sprint, the Team might decide to go from a Vision straight to a few User Stories with Acceptance Criteria. In the same sprint, they build out and deploy two of these stories, finally testing them with their target market (aka Lean Startup and Lean UX). Once they get some initial feedback, they hold a Product Backlog Refinement meeting and start populating their initial Story Map backbone. They use the early version of their Story Map to explore the flow from the key persona’s viewpoint. Next, they create the first few User Stories for each item in the Story Map to make sure they understand it well. By now the Team have created thirty Stories and the Product Owner wants to make sure she is prioritizing the stories that will have the greatest effect for the key personas. She turns to Impact Mapping (effectively Mind Maps for User Stories that explore the Why, Who, How and finally What – only exploring the What after we understand the context).

All the while the Team is building software and validating it with the market as they go.

In a separate but parallel universe…

Our Team might start by building a vision using a collaborative game like Product Box with their stakeholders (including actual customers). They move from this to designing Personas that represent their core constituencies and then create a Story Map backbone. They take a couple of extra days to create the initial User Stories and initial acceptance criteria. Then, like the first example, they start building their application and validating in the marketplace. As they gain feedback, they update their existing Vision, Personas, and Story Map(s) to reflect their new understanding of the world.

Aside from my personal preference for the first Team’s approach, which wastes the minimum extra time before testing whether their ideas meet actual customer needs, there is no correct version. All approaches are valid.

Instead, what matters is:

  • Continuous Collaboration between the Product Owner, Dev Team, and Stakeholders
  • Frequently updating their artifacts (Vision, Personas, Story Maps) as they discover meaningful change. Since we don’t want to reinvent waterfall, this implies that all artifacts are kept small and light.
  • We don’t assume that our Personas, Story Maps, etc. are true versions of reality and we’re always trying to test them with working software validated in the marketplace.

So instead of trying to visualize the “flow” to write User Stories, which suggests something linear without feedback, maybe it would be more helpful to imagine it as a mesh or interconnected web.

Categories: Blogs

Link: Meeting Check-Ins

Learn more about our Scrum and Agile training sessions on WorldMindware.com

A very nice article called Why I Always Start a Meeting with a Check-In. From the article by Ted Lord, senior partner at The Giving Practice:

The greatest benefit of working in a group is our diversity of viewpoints and approaches; groups hobble themselves when they don’t continually give attention to creating a container of trust and shared identity that invites truth-telling, hard questions, and the outlier ideas that can lead to innovation.

One antidote to over-designed collaboration is the check-in.


The post Link: Meeting Check-Ins appeared first on Agile Advice.

Categories: Blogs

Agile is fundamentally just...

Is Agile fundamentally just X (where X can be any one trait)?

For example is Agile fundamentally just...

  • ..."violent transparency"?
  • ...getting smart people together and getting out of their way?
  • ...removing the gap between developers and customers?
  • [insert your version here]?
I've seen too many failed Agile teams and situations to be that naive.
Sociotechnical systems are complex.  For a complex system, there is no "fundamentally just" one particular thing.

I have argued that Agile can be generated from just a handful of key concepts (rather than 12).  For example:


I can't say for sure that I'm right about any of these attempts, but I'm quite confident in saying that I'm sure there is no one concept that generates all of Agile AND that there is no one trait that fundamentally represents all of Agile.
Categories: Blogs

R: I write more in the last week of the month, or do I?

Mark Needham - Sun, 07/12/2015 - 11:53

I’ve been writing on this blog for almost 7 years and have always believed that I write more frequently towards the end of a month. Now that I’ve got all the data I thought it’d be interesting to test that belief.

I started with a data frame containing each post and its publication date and added an extra column which works out how many weeks from the end of the month that post was written:

> df %>% sample_n(5)
                                                               title                date
946  Python: Equivalent to flatMap for flattening an array of arrays 2015-03-23 00:45:00
175                                         Ruby: Hash default value 2010-10-16 14:02:37
375               Java/Scala: Runtime.exec hanging/in 'pipe_w' state 2011-11-20 20:20:08
1319                            Coding Dojo #18: Groovy Bowling Game 2009-06-26 08:15:23
381                   Continuous Delivery: Removing manual scenarios 2011-12-05 23:13:34
 
library(dplyr)
library(lubridate)
 
calculate_start_of_week = function(week, year) {
  date <- ymd(paste(year, 1, 1, sep="-"))
  week(date) = week
  return(date)
}
 
tidy_df  = df %>% 
  mutate(year = year(date), 
         week = week(date),
         week_in_month = ceiling(day(date) / 7),
         max_week = max(week_in_month), 
         weeks_from_end = max_week - week_in_month,
         start_of_week = calculate_start_of_week(week, year))
 
> tidy_df %>% select(date, weeks_from_end, start_of_week) %>% sample_n(5)
 
                    date weeks_from_end start_of_week
1023 2008-08-08 21:16:02              3    2008-08-05
800  2014-01-31 06:51:06              0    2014-01-29
859  2014-08-14 10:24:52              3    2014-08-13
107  2010-07-10 22:49:52              3    2010-07-09
386  2011-12-20 23:57:51              2    2011-12-17

Next I want to get a count of how many posts were published in a given week. The following code does that transformation for us:

weeks_from_end_counts =  tidy_df %>%
  group_by(start_of_week, weeks_from_end) %>%
  summarise(count = n())
 
> weeks_from_end_counts
Source: local data frame [540 x 4]
Groups: start_of_week, weeks_from_end
 
   start_of_week weeks_from_end year count
1     2006-08-27              0 2006     1
2     2006-08-27              4 2006     3
3     2006-09-03              4 2006     1
4     2008-02-05              3 2008     2
5     2008-02-12              3 2008     2
6     2008-07-15              2 2008     1
7     2008-07-22              1 2008     1
8     2008-08-05              3 2008     8
9     2008-08-12              2 2008     5
10    2008-08-12              3 2008     9
..           ...            ...  ...   ...

We group by both ‘start_of_week’ and ‘weeks_from_end’ because we could have posts published in the same week but in different months and we want to capture that difference. Now we can run a correlation on the data frame to see if there’s any relationship between ‘count’ and ‘weeks_from_end’:

> cor(weeks_from_end_counts %>% ungroup() %>% select(weeks_from_end, count))
               weeks_from_end       count
weeks_from_end     1.00000000 -0.08253569
count             -0.08253569  1.00000000

This suggests there’s a slight negative correlation between the two variables, i.e. ‘count’ decreases as ‘weeks_from_end’ increases. Let’s plug the data frame into a linear model to see how good ‘weeks_from_end’ is as a predictor of ‘count’:

> fit = lm(count ~ weeks_from_end, weeks_from_end_counts)
 
> summary(fit)
 
Call:
lm(formula = count ~ weeks_from_end, data = weeks_from_end_counts)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-2.0000 -1.5758 -0.5758  1.1060  8.0000 
 
Coefficients:
               Estimate Std. Error t value Pr(>|t|)    
(Intercept)     3.00000    0.13764  21.795   <2e-16 ***
weeks_from_end -0.10605    0.05521  -1.921   0.0553 .  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.698 on 538 degrees of freedom
Multiple R-squared:  0.006812,	Adjusted R-squared:  0.004966 
F-statistic:  3.69 on 1 and 538 DF,  p-value: 0.05527

We see a similar result here. Each additional week from the end of the month is worth about -0.1 posts, with a p-value of 0.0553, so it’s on the borderline of being significant.

We also have a very low R-squared value, which suggests that ‘weeks_from_end’ isn’t explaining much of the variation in the data. That makes sense given that we didn’t see much of a correlation.

If we charged on and wanted to predict the number of posts likely to be published in a given week we could run the predict function like this:

> predict(fit, data.frame(weeks_from_end=c(1,2,3,4,5)))
       1        2        3        4        5 
2.893952 2.787905 2.681859 2.575812 2.469766

Obviously it’s a bit flawed since we could plug in any numeric value we want, even ones that don’t make any sense, and it’d still come back with a prediction:

> predict(fit, data.frame(weeks_from_end=c(30 ,-10)))
        1         2 
-0.181394  4.060462

I think we’d probably protect against that with a function wrapping our call to predict that doesn’t allow ‘weeks_from_end’ to be greater than 5 or less than 0.
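 
A minimal sketch of what that wrapper might look like (the ‘safe_predict’ name and the 0 to 5 bounds are my own, assumed from the maximum number of week-blocks in a month):
 
safe_predict = function(fit, weeks_from_end) {
  # refuse values that can't occur as 'weeks from the end of a month'
  if (any(weeks_from_end < 0 | weeks_from_end > 5)) {
    stop("weeks_from_end must be between 0 and 5")
  }
  predict(fit, data.frame(weeks_from_end = weeks_from_end))
}
 
> safe_predict(fit, c(1,2,3,4,5))
       1        2        3        4        5 
2.893952 2.787905 2.681859 2.575812 2.469766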

So far it looks like my belief is incorrect! I’m a bit dubious about my calculation of ‘weeks_from_end’ though – it’s not completely capturing what I want since in some months the last week only contains a couple of days.

Next I’m going to explore whether it makes any difference if I calculate that value by counting the number of days back from the last day of the month rather than using week number.
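 
A sketch of how that alternative might look, using lubridate’s ‘days_in_month’ (just a starting assumption, not the finished analysis):
 
days_from_end_of_month = function(date) {
  # days between the post date and the last day of its month
  as.integer(days_in_month(date) - day(date))
}
 
> days_from_end_of_month(ymd("2014-02-27"))
[1] 1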

Categories: Blogs

R: Filling in missing dates with 0s

Mark Needham - Sun, 07/12/2015 - 10:30

I wanted to plot a chart showing the number of blog posts published by month and started with the following code which makes use of zoo’s ‘as.yearmon’ function to add the appropriate column and grouping:

> library(zoo)
> library(dplyr)
> library(ggplot2)
> df %>% sample_n(5)
                                                  title                date
888        R: Converting a named vector to a data frame 2014-10-31 23:47:26
144  Rails: Populating a dropdown list using 'form_for' 2010-08-31 01:22:14
615                    Onboarding: Sketch the landscape 2013-02-15 07:36:06
28                        Javascript: The 'new' keyword 2010-03-06 15:16:02
1290                Coding Dojo #16: Reading SUnit code 2009-05-28 23:23:19
 
> posts_by_date  = df %>% mutate(year_mon = as.Date(as.yearmon(date))) %>% count(year_mon)
> posts_by_date %>% head(5)
 
    year_mon  n
1 2006-08-01  1
2 2006-09-01  4
3 2008-02-01  4
4 2008-07-01  2
5 2008-08-01 38

I then plugged the new data frame into ggplot to get the chart:

> ggplot(aes(x = year_mon, y = n), data = posts_by_date) + geom_line()

(chart: posts per month)

The problem with this chart is that it shows 4 posts per month for all the dates between September 2006 and February 2008, even though I didn’t write anything in that period! It does the same thing between February 2008 and July 2008 too.

We can fix that by filling in the gaps with 0s.

First we’ll create a vector containing every month in the data range contained by our data frame:

> all_dates = seq(as.Date(as.yearmon(min(df$date))), as.Date(as.yearmon(max(df$date))), by="month")
 
> all_dates
  [1] "2006-08-01" "2006-09-01" "2006-10-01" "2006-11-01" "2006-12-01" "2007-01-01" "2007-02-01" "2007-03-01"
  [9] "2007-04-01" "2007-05-01" "2007-06-01" "2007-07-01" "2007-08-01" "2007-09-01" "2007-10-01" "2007-11-01"
 [17] "2007-12-01" "2008-01-01" "2008-02-01" "2008-03-01" "2008-04-01" "2008-05-01" "2008-06-01" "2008-07-01"
 [25] "2008-08-01" "2008-09-01" "2008-10-01" "2008-11-01" "2008-12-01" "2009-01-01" "2009-02-01" "2009-03-01"
 [33] "2009-04-01" "2009-05-01" "2009-06-01" "2009-07-01" "2009-08-01" "2009-09-01" "2009-10-01" "2009-11-01"
 [41] "2009-12-01" "2010-01-01" "2010-02-01" "2010-03-01" "2010-04-01" "2010-05-01" "2010-06-01" "2010-07-01"
 [49] "2010-08-01" "2010-09-01" "2010-10-01" "2010-11-01" "2010-12-01" "2011-01-01" "2011-02-01" "2011-03-01"
 [57] "2011-04-01" "2011-05-01" "2011-06-01" "2011-07-01" "2011-08-01" "2011-09-01" "2011-10-01" "2011-11-01"
 [65] "2011-12-01" "2012-01-01" "2012-02-01" "2012-03-01" "2012-04-01" "2012-05-01" "2012-06-01" "2012-07-01"
 [73] "2012-08-01" "2012-09-01" "2012-10-01" "2012-11-01" "2012-12-01" "2013-01-01" "2013-02-01" "2013-03-01"
 [81] "2013-04-01" "2013-05-01" "2013-06-01" "2013-07-01" "2013-08-01" "2013-09-01" "2013-10-01" "2013-11-01"
 [89] "2013-12-01" "2014-01-01" "2014-02-01" "2014-03-01" "2014-04-01" "2014-05-01" "2014-06-01" "2014-07-01"
 [97] "2014-08-01" "2014-09-01" "2014-10-01" "2014-11-01" "2014-12-01" "2015-01-01" "2015-02-01" "2015-03-01"
[105] "2015-04-01" "2015-05-01" "2015-06-01" "2015-07-01"

Now we need to create a data frame containing those dates and merge it with the original:

posts_by_date_clean = merge(data.frame(date = all_dates),
                            posts_by_date,
                            by.x='date',
                            by.y='year_mon',
                            all.x=T,
                            all.y=T)
 
> posts_by_date_clean %>% head()
        date  n
1 2006-08-01  1
2 2006-09-01  4
3 2006-10-01 NA
4 2006-11-01 NA
5 2006-12-01 NA
6 2007-01-01 NA

We’ve still got some ‘NA’ values in there which won’t plot so well. Let’s set those to 0 and then try and plot our chart again:

> posts_by_date_clean$n[is.na(posts_by_date_clean$n)] = 0
> ggplot(aes(x = date, y = n), data = posts_by_date_clean) + geom_line()
(chart: posts per month, with missing months filled in as 0)

Much better!

Categories: Blogs

Agile was invented by consultants to make money?

Was Agile invented by consultants to make money?

This question has two parts:

  1. Was Agile invented by consultants?
  2. Was it invented to make money?

If you look at the authors of the Agile Manifesto... it sure does look like a lot of consultants.

Let's think about this.

Was there a problem that Agile was solving?  If no, then it’s more likely it was invented just to make money.  If yes, then it’s more likely that solving that problem is the way to make money.

The predecessors of the aggregate "Agile" brand were known as "lightweight methodologies". The name gives you a hint that they were being compared to "heavyweight methodologies", typically associated at that time with excessive UML and the Capability Maturity Model. I would argue that this indicates the "lightweight methodologies" were invented to solve the problem of "heavyweight methodologies", and therefore that "Agile", as an aggregate brand of "lightweight methodologies", was invented to solve problems, not as a blatant ploy to make money.

So why would anyone believe that Agile was invented by consultants to make money?  I don't believe that it's simply due to cynicism.

One of the problems of the mainstreaming of Agile, and specifically the growth of certifications, is the corruption of easy money.

The needle was closer to "solve problems" in the early days of Agile or really before the brand "Agile" was invented.  I will grant that this needle has moved toward the "make easy money" side of things.

However, I don't grant the idea that "Agile was invented by consultants to make money" is factual.

"Lightweight methodologies" were an honest expression of what people thought was better.  That core is still real.  Apologies for the rest of the crap.
Categories: Blogs

Early Agile Scaling at Adobe

Agile For All - Bob Hartman - Sat, 07/11/2015 - 22:46

Scaling agile is a hot topic these days. I’ve recently given a presentation at a few local user groups about the experience of scaling agile at Adobe. The presentation describes early scaling as well as our experience helping a large shared technology group adopt a model using the Spotify Approach as a template. Below is a synopsis of the early scaling approach which naturally emerged at Adobe.

Viral Agile Adoption vs. Mandated Agile Adoption

Agile at Adobe was a grassroots effort. There was never a mandate from an executive that “thou shalt use Scrum”, and I think that this is an effective adoption pattern. Executive support is critical, but an executive mandate can be a death knell for any initiative. Starting with a few product teams in 2005, scrum spread as peer teams noticed the benefits that the early adopters were seeing.

(chart: viral scrum adoption at Adobe)

“Proof”

One of the challenges with convincing later adopters is that they are typically looking for more concrete evidence before risking a change. A challenge with providing evidence of the effectiveness of scrum is that many metrics are different between traditional development and agile development. One metric that remains the same for most projects is open defects. On a traditional project, the total number of open defects will grow during feature development, then rapidly decrease as the team focuses on fixing and/or deferring open bugs leading up to the ship date. On a scrum project, every sprint the total number of defects should approach zero as the team gets the iteration “potentially shippable”. We can compare the total open defects from the release cycle prior to adopting an agile approach with the first release cycle using scrum and see the difference. Below are the charts from five Adobe products that adopted scrum showing this data:

(charts: pre- and post-scrum bug curves for five Adobe products)

What these graphs don’t show is the impact the lower defect counts had on the team. After adopting scrum, the Audition team’s final months leading up to the release of their product required no weekend work, no late nights; essentially a low-stress, high-confidence release. Other teams that had not yet moved to scrum noticed this and started asking what the Audition team were up to. Seeing our defect curves and the fact that we were spending time with friends and family while they were working weekends trying to get their bug count down was a major impetus to them investigating and eventually adopting scrum.

The other impact of these defect rates was on the quality of the product. When you are trying to get a huge number of bugs reduced down to zero, that often means painfully deferring bugs that you really think ought to be fixed in order to meet the date. Without this time pressure, bugs were much more frequently fixed rather than deferred, resulting in an overall improvement in the final quality of the release. There is also some data that suggests that waiting to fix a bug substantially increases the overall cost of fixing it, and so waiting until the end of a multi-month release means that the overall cost of fixing bugs was much higher before moving to scrum.

Typical Adobe Product Team Structure

Moving into the scaling topic, most Adobe product teams are larger than seven people, and so scaling patterns were built into our approach from the earliest implementations. The picture below illustrates a typical scaling approach for an Adobe product team.

(diagram: typical Adobe product team structure)

With one notable exception, this is a standard scaled scrum approach. We have multiple scrum teams working on related sets of features within the larger product context. The primary difference is how the Product Owner role is played. Prior to adopting scrum, the Audition team had a management group called the Feature Council. This group comprised an engineering leader, a test leader, a product manager, a program manager, and a user experience leader. They collaboratively made decisions regarding the vision, staffing, features, and other related areas. When we learned about scrum and the “single set of vocal cords” product owner role, there was no exact fit. While our product manager was awesome, we had never seen him as the single voice for product decisions. We felt that his was one very important perspective, but that we got tremendous benefits from including other perspectives (technology, quality, design, and process). We decided to keep the feature council intact and have it collectively play the Product Owner role. We decided that if we couldn’t reach consensus on a prioritization decision, our Product Manager would then make the call as the “Ultimate Product Owner”. In several years, we never saw this happen.

The scrum teams handled technical coordination through weekly meetings of the various competencies (Engineering, Testing, etc.), similar to the Spotify Chapter Meetings.

Upcoming Classes

Peter regularly provides Certified Scrum Master and Certified Scrum Product Owner courses throughout the western United States. If you are a member of a local Scrum User Group, you are eligible for a 20% discount – please contact Peter for a discount code. Check out his next courses in the Salt Lake City area August 17-18 (CSM) and August 19-20 (CSPO):

August 17-18 Certified Scrum Master

August 19-20 Certified Scrum Product Owner

The post Early Agile Scaling at Adobe appeared first on Agile For All.

Categories: Blogs

Skilled for Life

J.D. Meier's Blog - Sat, 07/11/2015 - 21:15

A while back, a colleague challenged me to find something simple and sticky for the big idea behind Sources of Insight.  After trying several phrases, here’s the one that stuck:

Skilled for Life

He liked it because it had punch.  It also had a play on words, and you could read it two different ways.

I like it because it captured the big idea behind Sources of Insight.   The whole purpose behind the site is to help as many people improve the quality of their life as possible.

With skill.

I’ve found that skills can make or break somebody’s chance for success.   And, I don’t just mean from a career perspective.   To be effective in all areas of our life, we need skills across several domains:

  • Mind
  • Body
  • Emotions
  • Career
  • Finance
  • Relationships
  • Fun

Skilled for Life is meant to be a very simple phrase, with a very intentional outcome:

Equip you with the skills you need to survive and thrive in today’s world.

It’s all about personal empowerment.

Not everybody gets the right mentors, or the right training, or the right breaks.   So Sources of Insight is designed from the ground up to be your personal success library that helps you make your own breaks, create your opportunities, and own your destiny.

How?

By sharing the world’s best insight and action for work and life.  By providing you with very real skills for mastering emotional intelligence, intellectual horsepower, creative brilliance, interpersonal relationships, career growth, health, and happiness (yeah, happiness is a skill you can learn).  And by providing you with principles, patterns, and practices for a smarter, more creative, and more capable you.

To give you one simple example of how happiness is a skill, let me tell you about the three paths of happiness according to Dr. Martin Seligman:

  1. The Pleasant Life
  2. The Good Life
  3. The Meaningful Life

You can think of them like this:  The Pleasant Life is all about pleasures, here and now.  The Good Life is about spending more time in your values.  The Meaningful Life is about fulfillment by helping the greater good, using your unique skills.   It’s giving our best where we have our best to give, and moving up Maslow’s stack.

When you know the three paths of happiness, you can more effectively build your happiness muscles.  For example, you can Discover Your Values, so that you can spend more time in them, and live life on your terms.

That’s just one example of how you can improve your self-efficacy with skill.

There is a vast success library of everything from inspirational quotes to inspirational heroes, as well as principles, patterns, and practices for skills to pay the bills and lead a better life.  Sources of Insight is a dojo of personal development, and your jump start for realizing your potential.

I invite you to check out the following page on Sources of Insight, where I share what Skilled for Life is all about:

Skilled for Life

Skills empower you.

Categories: Blogs

R: Date for given week/year

Mark Needham - Sat, 07/11/2015 - 00:01

As I mentioned in my last couple of blog posts I’ve been looking at the data behind this blog and I wanted to plot a chart showing the number of posts per week since the blog started.

I started out with a data frame with posts and publication date:

> library(dplyr)
> library(lubridate)
> df = read.csv("posts.csv")
> df$date = ymd_hms(df$date)
 
> df %>% sample_n(10)
                                                                                title                date
538                                    Nygard Big Data Model: The Investigation Stage 2012-10-10 00:00:36
341                                                            The read-only database 2011-08-29 23:32:26
1112                                  CSS in Internet Explorer - Some lessons learned 2008-10-31 15:24:51
143                                                       Coding: Mutating parameters 2010-08-26 07:47:23
433  Scala: Counting number of inversions (via merge sort) for an unsorted collection 2012-03-20 06:53:18
618                                    neo4j/cypher: SQL style GROUP BY functionality 2013-02-17 21:05:27
1111                                 Testing Hibernate mappings: Setting up test data 2008-10-30 13:24:14
462                                       neo4j: What question do you want to answer? 2012-05-05 13:20:41
1399                                       Book Club: Design Sense (Michael Feathers) 2009-09-29 14:42:29
494                                    Bash Shell: Reusing parts of previous commands 2012-07-05 23:42:35

The first step was to add a couple of columns representing the week and year for the publication date. The ‘lubridate’ library came in handy here:

byWeek = df %>% 
  mutate(year = year(date), week = week(date)) %>% 
  group_by(week, year) %>% summarise(n = n()) %>% 
  ungroup() %>% arrange(desc(n))
 
> byWeek
Source: local data frame [352 x 3]
 
   week year  n
1    33 2008 14
2    35 2008 11
3    53 2012 11
4     9 2013 10
5    12 2013  9
6    21 2009  9
7    22 2009  9
8    38 2013  9
9    40 2008  9
10   48 2012  9
..  ...  ... ..

The next step is to calculate the start date of each of those weeks so that we can plot the counts on a continuous date scale. I spent a while searching for how to do this before realising that the ‘week’ function I used before can set the week for a given date as well. Let’s get to work:

calculate_start_of_week = function(week, year) {
  date <- ymd(paste(year, 1, 1, sep="-"))
  week(date) = week
  return(date)
}
 
> calculate_start_of_week(c(1,2,3), c(2015,2014,2013))
[1] "2015-01-01 UTC" "2014-01-08 UTC" "2013-01-15 UTC"

And now let’s transform our data frame and plot the counts:

ggplot(aes(x=start_of_week, y=n, group=1), 
       data = byWeek %>% mutate(start_of_week = calculate_start_of_week(week, year))) + 
  geom_line()

(chart: posts per week)

It’s a bit erratic, as you can see. Some of this can be explained by the fact that I do in fact post in an erratic way, while some of it is explained by the fact that some weeks only contain a few days: a week that starts on the 29th of December or later is cut off by the end of the year.
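 
The second point is easy to check: lubridate’s ‘week’ function numbers days in blocks of seven from January 1st, so the 53rd block only ever contains one or two days (a quick sanity check I’m adding here, not part of the original charting code):
 
> days = seq(ymd("2014-01-01"), ymd("2014-12-31"), by = "day")
> table(week(days))
# weeks 1 to 52 each contain 7 days; week 53 contains just 1 day
# (2 in a leap year), so that week will naturally have fewer posts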

Categories: Blogs

Exercise: Pair Programming Simulation using Tangrams

Agile Complexification Inverter - Fri, 07/10/2015 - 22:37
Yesterday (July 2015) we did a lunch-n-learn at GameStop HQ on pair programming.  I think it was a great success, largely because we served food, and I've been told that everything goes better when people are sharing a meal together (and even better with adult beverages).


Are you interested in Pair Programming?  I'll confess, the term is a bit misleading.  I was asked by multiple people if the topic was just for programmers.  No, it's not just a programming technique; it works for any kind of knowledge work, such as testing, analysis, writing stories, or... yes, coding, scripting, Excel spreadsheets, etc.



The Agenda: Pair Programming Simulation
Start with a warm-up exercise (totally unrelated to the topic).  This allows all the late arrivals to find a seat and not miss the real start of the session.  I've found this soft-start technique to be a requirement at companies that have not adopted basic meeting protocols, such as finishing prior to the start of the next meeting.  If one does not finish on time, how can one start on time?

We used Thiagi's warm up of Buying Happiness

We flipped this lesson, although the experiment resulted in a "How fascinating!" failure: no one actually did the homework of reading the lesson before the experience session, so we continued without doing any actual lecture.

PDF - Pair Programming - Lessons

Query the audience to gauge the group's level of domain knowledge.  Ask a few questions: raise your hand if you have heard of pair programming; if you've done pair programming; if you only program in pairs (every day).  Look around - these are the experts.  Raise your hand if you are a beginner.  When you read the homework on pairing, you'll remember that pairing beginners with beginners is an anti-pattern.  So what shall we do about that?

Restructure the seating arrangements and have people select an appropriate pair for their skill level.  Don't allow the beginners to sit together or the experts to form a clique.

Part ONE.  Pair Drawing.
Let's do the simplest thing that could possibly work... everyone has learned to draw/sketch.  Let's use this innate skill to explore pairing basics.

PDF - Pair Face Drawing
Part TWO.  Lunch.
Typically what draws everyone to your meeting... food.  Don't do a Lunch-n-Learn without this.

Part Three.  Pair Puzzle Solving.
Let's extend our learning into a harder problem domain... solving a puzzle - Tangrams.

PDF - Pair Puzzle - Tangram Solving

This exercise can touch upon the aspects of Test-First (TDD) practices.  Typically a topic for another Lunch-n-Learn.

Debrief.
A great facilitator does the exercise / simulation just to get to the debrief.  Reflection is the only activity where double-loop learning may occur.  Using metaphor and analogy to relate drawing faces or solving Tangrams to developing software is the job of the debrief.


In a large group with many subgroups this can be done by projecting the debrief question on the screen and having the subgroups (tables) debrief themselves.  Extra points given for summaries of learning points or action items discovered.
We did a debrief after each example problem, then ran out of time to debrief the whole workshop - but we did get Level One feedback on it.  It was an 8 or 9 (out of 10), with a few improvements to make for next time.

Categories: Blogs

Strategy Deployment as Organisational Improv

AvailAgility - Karl Scotland - Fri, 07/10/2015 - 16:14

At Agile Cymru this week Neil Mullarkey gave a superb keynote, introducing his rules of improv (left). He suggested that businesses can apply these rules to be more creative and collaborative, and that there is a lot of synergy with Agile. Like all the best keynotes, it got me thinking and making connections, in particular about how Strategy Deployment could be thought of as a form of Organisational Improv.

I’ve blogged about Strategy Deployment a couple of times, in relation to the X-Matrix and Kanban Thinking, and Is Agile Working. Essentially it is a way for leaders to communicate their intent, so that employees are able to decide how to execute. This seems just like an improv scene having a title (the intent), allowing the performers to decide how to play out the scene (the execution).

The title, and the rules of the improv game, provide enabling constraints (as opposed to governing constraints) that allow many different possible outcomes to emerge. For example, we tried a game where in small groups of 4-5 people, we told a story, each adding one word at a time. The title was “The Day We Went To The Airport”. That gave us a “True North”, and the rules allowed a very creative story to emerge. Certainly something that no one person could have come up with individually!

However, given our inexperience with improv, the story was extremely incoherent. I’m not sure we actually made it to the airport by the time we had been sidetracked by the stewardesses, penguins and surfing giraffes (don’t ask). It was definitely skimming the edge of chaos, and I can’t help thinking some slightly tighter constraints could have helped. As an aside, I saw these Coyote/Roadrunner Rules recently (right). Adam Yuret pointed out that they were enabling constraints and I wonder if something like this would have helped with coherence?

What’s this got to do with Strategy Deployment? It occurred to me that good strategies provide the enabling constraints with which organisations improvise in collaborating and co-creating tactics to meet their True North. Clarity of strategy leads to improvisation of tactics, and if we take Neil’s Rules of Improv we can tweak them such that an offer is an idea for a tactic, giving:

  • Listen actively for ideas for tactics
  • Accept ideas for tactics
  • Give ideas for tactics in return
  • Explore assumptions (your own and others’)
  • Re-incorporate previous ideas for tactics
Categories: Blogs

New Video: Myths of Scrum – ScrumMaster Assigns Tasks

Learn more about our Scrum and Agile training sessions on WorldMindware.com

In a few weeks, I will be posting a more detailed written follow-up to this video.  This is one of the most damaging and most common myths (or pitfalls) that ScrumMasters fall into…


The post New Video: Myths of Scrum – ScrumMaster Assigns Tasks appeared first on Agile Advice.

Categories: Blogs

R: dplyr – Error: cannot modify grouping variable

Mark Needham - Thu, 07/09/2015 - 07:55

I’ve been doing some exploration of the posts made on this blog and I thought I’d start with answering a simple question – on which dates did I write the most posts?

I started with a data frame containing each post and the date it was published:

> library(dplyr)
> df %>% sample_n(5)
                                                title                date
1148 Taiichi Ohno's Workplace Management: Book Review 2008-12-08 14:14:48
158     Rails: Faking a delete method with 'form_for' 2010-09-20 18:52:15
331           Retrospectives: The 4 L's Retrospective 2011-07-25 21:00:30
1035       msbuild - Use OutputPath instead of OutDir 2008-08-14 18:54:03
1181                The danger of commenting out code 2009-01-17 06:02:33

To find the most popular days for blog posts we can write the following aggregation function:

> df %>% mutate(day = as.Date(date)) %>% count(day) %>% arrange(desc(n))
 
Source: local data frame [1,140 x 2]
 
          day n
1  2012-12-31 6
2  2014-05-31 6
3  2008-08-08 5
4  2013-01-27 5
5  2009-08-24 4
6  2012-06-24 4
7  2012-09-30 4
8  2012-10-27 4
9  2012-11-24 4
10 2013-02-28 4

So we can see a couple of days with 6 posts, a couple with 5 posts, a few more with 4 posts and then presumably loads of days with 1 post.

I thought it’d be cool if we could plot a histogram with the number of posts on the x axis and, on the y axis, how many days that number of posts occurred, e.g. for an x value of 6 (posts) we’d have a y value of 2 (occurrences).

My initial attempt was this:

> df %>% mutate(day = as.Date(date)) %>% count(day) %>% count(n)
Error: cannot modify grouping variable

Unfortunately that isn’t allowed. I tried ungrouping and then counting again:

> df %>% mutate(day = as.Date(date)) %>% count(day) %>% ungroup() %>% count(n)
Error: cannot modify grouping variable

Still no luck. I did a bit of googling around and came across a post which suggested using a combination of group_by + mutate or group_by + summarize.

I tried the mutate approach first:

> df %>% mutate(day = as.Date(date)) %>% 
+     group_by(day) %>% mutate(n = n()) %>% ungroup() %>% sample_n(5)
Source: local data frame [5 x 4]
 
                                    title                date        day n
1 QCon London 2009: DDD & BDD - Dan North 2009-03-13 15:28:04 2009-03-13 2
2        Onboarding: Sketch the landscape 2013-02-15 07:36:06 2013-02-15 1
3                           Ego Depletion 2013-06-04 23:16:29 2013-06-04 1
4                 Clean Code: Book Review 2008-09-15 09:52:33 2008-09-15 1
5            Dreyfus Model: More thoughts 2009-08-10 10:36:51 2009-08-10 1

That keeps the ‘title’ column around, which is a bit annoying. We can get rid of it using a distinct on ‘day’ if we want, and if we also implement the second part of the function we end up with the following:

> df %>% mutate(day = as.Date(date)) %>% 
    group_by(day) %>% mutate(n = n()) %>% distinct(day) %>% ungroup() %>% 
    group_by(n) %>%
    mutate(c = n()) %>%
    distinct(n)  
 
Source: local data frame [6 x 5]
Groups: n
 
                                                title                date        day n   c
1       Functional C#: Writing a 'partition' function 2010-02-01 23:34:02 2010-02-01 1 852
2                            Willed vs Forced designs 2010-02-08 22:48:05 2010-02-08 2 235
3                            TDD: Testing collections 2010-07-28 06:05:25 2010-07-28 3  41
4  Creating a Samba share between Ubuntu and Mac OS X 2012-06-24 00:40:35 2012-06-24 4   8
5            Gamification and Software: Some thoughts 2012-12-31 10:57:19 2012-12-31 6   2
6 Python/numpy: Selecting specific column in 2D array 2013-01-27 02:10:10 2013-01-27 5   2

Annoyingly we’ve still got the ‘title’, ‘date’ and ‘day’ columns hanging around which we’d need to get rid of with a call to ‘select’. The code also feels quite icky, especially the use of distinct in a couple of places.

In fact we can simplify the code if we use summarize instead of mutate:

> df %>% mutate(day = as.Date(date)) %>% 
    group_by(day) %>% summarize(n = n()) %>% ungroup() %>% 
    group_by(n) %>% summarize(c = n())
 
 
Source: local data frame [6 x 2]
 
  n   c
1 1 852
2 2 235
3 3  41
4 4   8
5 5   2
6 6   2

And we’ve also got rid of the extra columns into the bargain, which is great! Now we can plot our histogram:

> library(ggplot2)
> post_frequencies = df %>% mutate(day = as.Date(date)) %>% 
    group_by(day) %>% summarize(n = n()) %>% ungroup() %>% 
    group_by(n) %>% summarize(c = n())
> ggplot(aes(x = n, y = c), data = post_frequencies) + geom_bar(stat = "identity")

(chart: bar chart of occurrences of each post count)

In this case we don’t actually need to do the second grouping to create the bar chart since ggplot will do it for us if we feed it the following data:

> ggplot(aes(x = n), 
         data = df %>% mutate(day = as.Date(date)) %>% group_by(day) %>% summarize(n = n()) %>% ungroup()) +
    geom_bar(binwidth = 1) +
    scale_x_continuous(limits=c(1, 6))
(chart: the same histogram, binned by ggplot)

Still, it’s good to know how!

Categories: Blogs

Driving Self-Organization

Agile Tools - Thu, 07/09/2015 - 05:46

Bangalore Traffic

“Too bad the only people who know how to run the country are busy driving cabs and cutting hair.”

-George Burns

I learned to drive in Southern California. I’ve always been kind of proud of that fact. Driving in the southern land of pavement and potholes requires a special kind of aggressive driving in order to survive the freeway melee. You have to learn to barge into a lane when there isn’t any room, to turn left on a light after it turns red, to tailgate in order to keep others from cutting you off. That’s quite a litany of questionable driving practices. All in a typical day of driving in Cali. Don’t mess with me, I’m an expert.

That’s what I thought before I went to India.

Driving in a taxi in India was an eye opening experience. Silly little conventions like lanes are completely ignored. The entire road, from sidewalk to sidewalk, is your vehicular playground. Driving the wrong way into oncoming traffic is a matter of habit – how else would you get where you are going? I tried to count the number of times I was nearly in a head on collision, but I gave up – partly because I lost count, and (maybe) because I was distracted by my own screaming.

Don’t get me wrong: I was in complete and utter admiration. The level of self-organization and complexity was breathtaking! With what appeared to be a complete absence of rules, people managed to get to and from work every day amidst what appeared to be complete chaos. I very quickly resolved to never lecture anyone on the merits of self-organization ever again! Why? Because apparently I’m an amateur. If you want a lesson in professional level self-organization, don’t talk to me. Talk to a taxi driver in Bangalore.

Someone asked me if I thought I could drive in that traffic. My answer was yes, but not because I think I’m good. Quite the opposite in fact. The Indian driving system appeared to be remarkably tolerant of incompetence. The traffic ebbed and flowed around complete bumbling dolts with apparent ease. Contrast that with where I live in Seattle: one idiot in the left lane can shut down an entire freeway for hours.

Each day in India, I took a one hour commute to and from the office through complete chaos. We circumvented obstacles that would have shut down a US freeway for hours. The creativity on display was dazzling. And as an added bonus, I was thankful to be alive when I arrived at my destination!

Compare that to my commute in the US. Everyone lines up uniformly. We stay in our lanes. Creativity is discouraged. It’s not very exciting. My commute at home also takes an hour. It made me wonder: which system is more efficient?

Under what conditions is a system with fewer rules faster than a system with relatively rigid rules? It was tempting to look at the Bangalore traffic and speculate that perhaps it was faster in some ways. It was certainly more exciting (especially after a few beers late at night in an auto-rickshaw). However, a certain level of orderliness also has its benefits.

I find myself on my own humble commute now, cars stacked up in nice, orderly lines behind an endless parade of red tail lights – and I wonder, “What if we had fewer rules?”


Filed under: Agile, Swarming Tagged: Agile, driving, performance, self-organization
Categories: Blogs

Entity Framework extensions for AutoMapper

Jimmy Bogard - Wed, 07/08/2015 - 16:12

I pushed out a little library I’ve been using for the last couple years for helping to use AutoMapper and Entity Framework together. It’s a series of extension methods that cuts down the number of calls going from a DbSet to DTOs. Instead of this:

Mapper.CreateMap<Employee, EmployeeDto>()
  .ForMember(d => d.FullName, opt => opt.MapFrom(src => src.FirstName + " " + src.LastName));

var employees = await db.Employees.ProjectTo<EmployeeDto>().ToListAsync();

You do this:

public class Employee {
  [Computed]
  public string FullName { get { return FirstName + " " + LastName; } }
}
Mapper.CreateMap<Employee, EmployeeDto>();

var employees = await db.Employees.ProjectToListAsync<EmployeeDto>();

The extension methods themselves are not that exciting; it’s just code I’ve been copying from project to project:

public static async Task<List<TDestination>>
  ProjectToListAsync<TDestination>(this IQueryable queryable)
{
  return await queryable
    .ProjectTo<TDestination>()
    .DecompileAsync()
    .ToListAsync();
}

I have helper methods for:

  • ToList
  • ToArray
  • ToSingle
  • ToSingleOrDefault
  • ToFirst
  • ToFirstOrDefault

As well as all their async versions. You can find it on GitHub:

https://github.com/AutoMapper/AutoMapper.EF6

And on NuGet:

https://www.nuget.org/packages/automapper.ef6

Enjoy!

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Pitfall of Scrum: Focus on Scrum Tools

Learn more about our Scrum and Agile training sessions on WorldMindware.com

Many organizations try to find an electronic tool to help them manage the Scrum Process… before they even know how to do Scrum well! Use team rooms and manual, paper-based tracking when first adopting Scrum, since that is the easiest way to get started. Searching for a Scrum tool at that stage is usually just an obstacle to getting started.

The culture of most technology companies is to solve problems with technology. Sometimes this is good. However, it can go way overboard. Two large organizations have attempted to “go Agile” but at the same time have also attempted to “go remote”: to have everyone using electronic Scrum tools from home to work “together”. The problem with electronic Scrum tools is three-fold. They

  1. prevent the sharing of information and knowledge,
  2. reduce the fidelity of information and knowledge shared, and
  3. delay the transfer of information and knowledge.
Scrum Tools Prevent Information Sharing

Imagine you are sitting at your desk in a cubicle in an office. You have a question. It’s a simple question and you know who probably has the answer, but you also know that you can probably get away without knowing the answer. It’s non-critical. So, you think about searching the company directory for the person’s phone number and calling them up. Then you imagine having to leave a voice mail. And then you decide not to bother.

The tools have created a barrier to communicating. Information and knowledge are not shared.

Now imagine that the person who has the answer is sitting literally right next to you. You don’t have to bother with looking up their number nor actually using a phone to call. Instead, you simply speak up in a pretty normal tone of voice and ask your question. You might not even turn to look at them. And they answer.

Scrum tools are no different from these other examples of tools.  It takes much more energy and hassle to update an electronic tool with relevant, concise information… particularly if you aren’t good with writing text.  Even the very best Scrum tools should only be used for certain limited contexts.

As the Agile Manifesto says: “The most effective means of conveying information to and within a team is face-to-face communication.”

Scrum Tools Reduce Information Fidelity

How many times have you experienced this? You send an email and the recipient completely misunderstands you or takes it the wrong way. You are on a conference call and everyone leaves the call with a completely different concept of what the conversation was about. You read some documentation and discover that the documentation is out of date or downright incorrect. You are using video conferencing and it’s impossible to have an important side conversation with someone, so you resort to trying to send text messages which don’t arrive in time to be relevant. You put a transcript of a phone call in your backlog tracking tool but you make a typo that changes the meaning.

The tools have reduced the fidelity of the communication. Information and knowledge are incorrect or limited.

Again, think about the difference between using all these tools and what the same scenarios would be like if you were sitting right beside the right people.  If you use Scrum tools such as Jira, Rally* or any of the others, you will have experienced this problem.  The information that gets forced into the tools is a sad shadow of the full information that could or should be shared.

As the Agile Manifesto says: “we have come to value: individuals and interactions over processes and tools.”

Scrum Tools Delay Information Transfer

Even if a person uses a tool and even if it is at the right level of fidelity for the information or knowledge to be communicated, it is still common that electronic tools delay the transfer of that information. This is obvious in the case of asynchronous tools such as email, text messages, voice mail, document repositories, content management systems, and version control. The delay in transfer is sometimes acceptable, but often it causes problems. Suppose you take the transcript of a conversation with a user and add it into your backlog tracking tool as a note. The Scrum Team works on the backlog item but fails to see the note until after they have gone in the wrong direction. You assumed they would see it (you put it in there), but they assumed that you would tell them more directly about anything important. Whoops. Now the team has to go back and change a bunch of stuff.

The Scrum tools have delayed the communication. Information and knowledge are being passed along, but not in a timely manner.

For the third time, think about how these delays would be avoided if everyone was in a room together having those direct, timely conversations.

As the Agile Manifesto says: “Business people and developers must work together daily throughout the project.”

Alternatives to Scrum Tools

Working in a team room with all the members of the Scrum Team present is the most effective means of improving communication. There are many photos available of good team rooms. To maximize communication, have everyone facing each other boardroom-style. Provide spacious walls and large whiteboards. Close the room off from other people in the organization. Provide natural light to keep people happy. And make sure that everyone in the room is working on the same thing! Using Scrum tools to replace a team room is a common Scrum pitfall.

(photo: labelled team room)

The most common approach to helping a team track and report its work is to use a physical “Kanban” board. This is usually done on a wall in which space is divided into columns representing (at least) the steps of “to do”, “in progress” and “done”. On the board, all the work is represented as note cards each with a separate piece of work. The note cards are moved by the people who do the work. The board therefore represents the current state of all the work in an easy-to-interpret visual way. Using a tool to replace a task board is another variant of this common Scrum pitfall.

This article is a follow-up article to the 24 Common Scrum Pitfalls written back in 2011.

* Disclaimer: BERTEIG is a partner with a tool vendor: Version One.


The post Pitfall of Scrum: Focus on Scrum Tools appeared first on Agile Advice.

Categories: Blogs

You Can’t Break it Down Until You Understand it

Agile Tools - Wed, 07/08/2015 - 06:01


“Furious activity is no substitute for understanding”

– H. H. Williams

One of the first things that agile or iterative development demands of us is that we should break down our work into very small chunks. This challenge is one of the first hurdles faced by teams that are adopting sprints and trying to make all of their work fit into a tiny little 2-week time box. I’ve seen it over and over again. The first question folks ask is, “How can I break this down into pieces small enough to fit in the sprint?” My response, inadequate though it may be, is some variation on, “It’s not hard. People do it every day.” I know – not the best answer. I often get reactions that range anywhere from frank denial to outright disbelief. It just can’t be done!

I know. I get it. I really get it. If you aren’t used to it, the first time you deal with breaking work down into tiny chunks is like running into a cognitive wall. I remember the first iterative project that I did. I understood the model. The concepts made sense: break things down into small chunks and then iterate. Easy.

Only it wasn’t easy at all.

I remember sitting in front of my monitor thinking, “What meaningful piece of functionality could we do in a sprint?”

Nothing came to mind. I drew a total blank.

It was dreadful. We were working on a new product, something completely new to the market…and I had no idea what I was doing. We had no clue. That’s not to say that we were incompetent. Far from it. We all knew that we didn’t know much, and that was a big problem. Fortunately, we overcame those challenges. Unfortunately, like many people, I conveniently forgot most of those lessons and moved on.

I had another reminder the other day. I was working on the boat I’ve been building. So much of building a boat early on was big intimidating chunks of work. I had no idea what I was getting into. Everything seemed daunting. Weeks of effort. However, after 3 years of working on it in my garage, I now find myself doing something completely different. Now I can wander out and create a long list of tiny tasks quickly and spontaneously. I know much more now. I can see hundreds of little tasks that need to be done. Little stuff, literally just a few minutes of work.

So I didn’t know what I was doing when I started, but I learned, and as I learned I was able to break things down. So how did I learn? By getting started and making mistakes. Lots of mistakes. Sometimes I think mistakes are the only thing I’m good at. So now when people ask how to break things down, maybe my answer is, “Just get started, the answer will come to you as you learn the problem domain.”

Of course, if that fails, you can always take up boat building.
Filed under: Uncategorized
Categories: Blogs