Feed aggregator

Should I Patch Built-In Objects / Prototypes? (Hint: NO!)

Derick Bailey - new ThoughtStream - Tue, 06/30/2015 - 15:49

A question was asked via twitter:

@derickbailey @rauschma What do you think about adding methods on built-in prototypes (e.g. String.prototype)?

— Boris Kozorovitzky (@zbzzn) June 30, 2015

So, I built a simple flow chart to answer the question (created with draw.io):

[Flowchart: "Should I patch this?"]

All joking aside, there’s only one situation where you should patch a built-in object or prototype: when you are building a polyfill to accurately reproduce new JavaScript features for browsers and runtimes that do not yet support them.

Any other patch of a built-in object is cause for serious discussion of the problems that will ensue.

And if you think you should be writing a polyfill, stop. Go find a community supported and battle-tested polyfill library that already provides the feature you need.

Want some examples of the problems?

Imagine this: you have a global variable in a browser, called “config”. Do you think anyone else has ever accidentally or purposely created a “config” variable? What happens when you run into this situation and your code is clobbered by someone else’s code because they used the same variable name?

Now imagine this being done on built-in objects and methods, where behaviors are expected to be consistent and stable. If I patch a “format” method onto String.prototype, and then you load a library that patches it with different behavior, which code will continue working? How will I know why my format function is now failing? What happens when you bring in a new developer and you forget to educate them on the patched and hacked built-in objects in your system?
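To make that collision concrete, here is a hypothetical sketch (the “format” method and both behaviors are invented for illustration) of two libraries patching the same prototype:

```javascript
// Library A patches String.prototype with printf-style substitution.
String.prototype.format = function (...args) {
  return this.replace(/%s/g, () => args.shift());
};

"Hello %s".format("world"); // "Hello world"

// Library B loads later and silently replaces the patch with
// index-based substitution ("{0}", "{1}", ...).
String.prototype.format = function (...args) {
  return this.replace(/\{(\d+)\}/g, (_, i) => args[i]);
};

// Library A's call sites now fail silently: the %s placeholders
// are simply left untouched, with no error pointing at the cause.
"Hello %s".format("world"); // "Hello %s"
```

Whichever library loads last wins, and nothing warns you that the other one just broke.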

Go read up on “monkey-patching” in the Ruby community. They learned these lessons the hard way, YEARS ago. You will find countless horror stories and problems caused by this practice.

Here are some examples, to get you started.

But, what if …

NO! There is always a better way to get the feature you need. Decorator / wrapper objects are a good place to start. Hiding the implementation behind your API layer, where you actually need the behavior, is also a good place to be.
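As a sketch of the wrapper approach (the function name and placeholder syntax are invented for illustration), keep the behavior in a module you own instead of on the shared prototype:

```javascript
// The behavior lives in a plain function you own; String.prototype
// is left untouched, so no other code can collide with it.
function format(template, ...args) {
  return template.replace(/\{(\d+)\}/g, (_, i) => String(args[i]));
}

format("Hello, {0}! You have {1} messages.", "Alice", 3);
// → "Hello, Alice! You have 3 messages."
```

Callers import your function explicitly, so a new developer can find it, and no third-party library can redefine it out from under you.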

The point is…

DO NOT PATCH THE BUILT-IN OBJECTS OR PROTOTYPES

Ever.

Your code, your team and your sanity will thank you.

Categories: Blogs

5 Best Practices of Successful Executive Sponsors

Agile Management Blog - VersionOne - Tue, 06/30/2015 - 14:30

 

[Image: Exec Sponsor]

It is well known that executive sponsors can help a project succeed, but not all projects with an executive sponsor do.

Why don’t they?

Often it is because there is no training manual for how to be an executive sponsor or for the pitfalls one must avoid.

So, how do you become a successful executive sponsor?

Build Trust & Communication

While the project manager is responsible for ensuring that the necessary work is being done so that a project will be successful, an executive sponsor’s role is to ensure the project is successful. While those may sound like the same thing, they are vastly different.

The project manager must focus on the day-to-day execution, while the executive sponsor should focus on the bigger picture: ensuring that the project stays aligned with the strategic goal, securing support from other stakeholders, and removing roadblocks.

In order to do this, the executive sponsor and project manager must have a candid relationship built on trust. Too often projects fail because people tend to hope for the best-case scenario and rely too much on best-case status updates. The communication between project manager and executive sponsor should be about openly discussing risks that the executive sponsor can help the team navigate.

Make Realistic Commitments

It goes without saying that commitment is a key component of being an executive sponsor, yet countless projects that have executive sponsors fail nevertheless. This isn’t to say that the failure is necessarily due to the executive sponsor, but as obvious as the importance of commitment is, there are many cases where the executive sponsor had an unrealistic expectation of their commitment. According to PMI’s annual Pulse of the Profession survey, one-third of projects fail because executive sponsors are unengaged.

Sometimes this has less to do with the individual and more to do with the organization. As more and more studies come out showing how executive sponsors increase the success of projects, companies want more executive sponsorship of projects. This has led to many executives being overextended across too many projects.

Before taking on a new project, sit down and determine the required time commitment and whether you have the bandwidth to meet that commitment. Your organization may be pressuring you to step up and take another project, but it won’t do them or you any good if the project fails.

Avoid Getting Overextended

We already discussed that the success of having an executive sponsor has led to many organizations overextending their executives. An in-depth study by the Project Management Institute found that executives sponsor three projects on average at any one time and they report spending an average of 13 hours per week per project, on top of their normal work.

Obviously, this isn’t sustainable and isn’t a recipe for success. The same study found several negative impacts from executive sponsors being overextended.

[Chart: Project Mgt Statistics]

The solution here is simple: you have to learn how to say no. That is, of course, easier said than done when you’re being pressured to take on a new project, but again, it won’t do them or you any good if the project fails.

Develop Project Management Knowledge

According to a PMI study, 74% of projects are successful at companies where sponsors have expert or advanced project management knowledge. Unfortunately, only 62% of companies provide executive sponsor education and development. Not every executive has necessarily been a project manager or gone through project management training.

The results speak for themselves; having advanced project management knowledge makes it far more likely that you will be successful. If your organization doesn’t provide executive sponsor development, take it upon yourself to become a project management expert. It will help your team, your company and yourself. The Boston Consulting Group has found that successful executive sponsors focus on improving their skills in change leadership, influencing stakeholders and issue resolution.

Conclusion

I hope this has inspired you to develop your executive sponsor skills. While it may be difficult to find the time, the payoff will be well worth it for you, your team and your company!

What are some other important keys to being a successful executive sponsor?

Categories: Companies

Story Splitting: Where Do I Start?

Leading Agile - Mike Cottmeyer - Tue, 06/30/2015 - 14:16

I don’t always follow the same story-splitting approach when I need to split a story. The process has become intuitive for me, so I may not be able to write down everything I do, everything that goes through my mind, or how I know what to do. But here is what comes to mind at the moment:

Look at your acceptance criteria. There is often some aspect of business value in each acceptance criterion that can be split out into a separate story that is valuable to the Product Owner.

Consider the tasks that need to be done. Can any of them be deferred to a later sprint? (And no, testing is not a task that can be deferred to a later sprint.) If so, consider whether any of them are separately valuable to the Product Owner; if so, perhaps that would be a good story to split out.

If there are lots of unknowns, if it’s a 13 point story because of unanswered questions, make a list of the questions and uncertainties. For each, ask whether it’s a Business Analyst (BA) to-do or a Tech to-do. Also ask for each whether it’s easy and should be considered “grooming”. If it’s significant enough and technical, maybe you should split that out as a Research Spike. Then make an assumption about the likely outcome of the spike, or the desired outcome of the spike, note the assumption in the original story, and reestimate the original story given the assumption.

Look in the story description for conjunctions, since “and”s and “or”s are a clue that the story may be doing too much. Consider whether you can split the story along the lines of those conjunctions.

Other Story Splitting ideas:
  • Workflow steps: identify the specific steps that a user takes to accomplish the workflow, and then implement the workflow in incremental stages
  • Business Rule Variations
  • Happy path versus error paths
  • Simple approach versus more and more complex approaches
  • Variations in data entry methods or sources
  • Support simple data first, then more complex data later
  • Variations in output formatting: simple first, then complex
  • Defer some system quality (an “ility”): estimate or interpolate first, do real-time later; support a speedier response later
  • Split out parts of CRUD. Do you really really really need to be able to Delete if you can Update or Deactivate? Do you really really really need to Update if you can Create and Delete? Sure, you may need those functions, but you don’t have to have them all in the same sprint or in the same story.

Some of the phrases in the above list may be direct quotes or paraphrases from Dean Leffingwell’s book “Agile Software Requirements”.

The post Story Splitting: Where Do I Start? appeared first on LeadingAgile.

Categories: Blogs

Product Owner Camp

Growing Agile - Tue, 06/30/2015 - 14:14
We recently attended the PO Camp in Switzerland (#POCam […]
Categories: Companies

How to create the smallest possible docker container of any image

Xebia Blog - Tue, 06/30/2015 - 11:46

Once you start to do some serious work with Docker, you soon find that downloading images from the registry is a real bottleneck in starting applications. In this blog post we show you how you can reduce the size of any Docker image to just a few percent of the original. So if your image is too fat, try stripping your Docker image! The strip-docker-image utility demonstrated in this blog makes your containers faster and safer at the same time!


We are working quite intensively on our Highly Available Docker Container Platform using CoreOS and Consul, which consists of a number of containers (NGiNX, HAProxy, the Registrator and Consul). These containers run on each of the nodes in our CoreOS cluster, and when the cluster boots, more than 600 MB is downloaded by the 3 nodes in the cluster. This is quite time consuming.

cargonauts/consul-http-router      latest              7b9a6e858751        7 days ago          153 MB
cargonauts/progrium-consul         latest              32253bc8752d        7 weeks ago         60.75 MB
progrium/registrator               latest              6084f839101b        4 months ago        13.75 MB

The size of the images is not only detrimental to the boot time of our platform; it also increases the attack surface of the container. With 153 MB of utilities in the NGiNX-based consul-http-router, there is a lot of stuff in the container that you can use once you get inside. As we were thinking of running this router in a DMZ, we wanted to minimise the amount of tools lying around for a potential hacker.

From our colleague Adriaan de Jonge we already learned how to create the smallest possible Docker container for a Go program. Could we repeat this by just extracting the NGiNX executable from the official distribution and copying it onto a scratch image? It turns out we can!

Finding the necessary files

Using the dpkg utility we can list all the files that are installed by NGiNX:

docker run nginx dpkg -L nginx
...
/.
/usr
/usr/sbin
/usr/sbin/nginx
/usr/share
/usr/share/doc
/usr/share/doc/nginx
...
/etc/init.d/nginx
Locating dependent shared libraries

So we have the list of files in the package, but we do not have the shared libraries that are referenced by the executable. Fortunately, these can be retrieved using the ldd utility.

docker run nginx ldd /usr/sbin/nginx
...
	linux-vdso.so.1 (0x00007fff561d6000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd8f17cf000)
	libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fd8f1598000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fd8f1329000)
	libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fd8f10c9000)
	libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fd8f0cce000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fd8f0ab2000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd8f0709000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fd8f19f0000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd8f0505000)
Following and including symbolic links

Now that we have the executable and the referenced shared libraries, there is a catch: ldd normally reports the symbolic link, not the actual file name of the shared library.

docker run nginx ls -l /lib/x86_64-linux-gnu/libcrypt.so.1
...
lrwxrwxrwx 1 root root 16 Apr 15 00:01 /lib/x86_64-linux-gnu/libcrypt.so.1 -> libcrypt-2.19.so

By resolving the symbolic links and including both the link and the file, we are ready to export the bare essentials from the container!
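As a rough sketch of that step (the helper function is ours, not part of the article's utility), resolving a link while keeping both names can be done with readlink:

```shell
# For each shared-library path reported by ldd, print both the
# symlink itself and the real file it ultimately points to, so
# both can be copied into the stripped image.
resolve_links() {
  for lib in "$@"; do
    printf '%s\n' "$lib"   # the symlink, e.g. libcrypt.so.1
    readlink -f "$lib"     # the target, e.g. libcrypt-2.19.so
  done | sort -u
}
```

For example, `resolve_links /lib/x86_64-linux-gnu/libcrypt.so.1` would print both the link and its resolved target path.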

getpwnam does not work

But after copying all essential files to a scratch image, NGiNX did not start. It appeared that NGiNX tries to resolve the user id 'nginx' and fails to do so.

docker run -P  --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;"
...
2015/06/29 21:29:08 [emerg] 1#1: getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2
nginx: [emerg] getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2

It turned out that the shared libraries for the name service switch, which reads /etc/passwd and /etc/group, are loaded at runtime and are not referenced as dependencies of the executable. After adding these shared libraries (/lib/*/libnss*) to the container, NGiNX worked!

strip-docker-image example

So now, the strip-docker-image utility is here for you to use!

    strip-docker-image  -i image-name
                        -t target-image-name
                        [-p package]
                        [-f file]
                        [-x expose-port]
                        [-v]

The options are explained below:

-i image-name           the image to strip
-t target-image-name    the name of the stripped image
-p package              a package to include from the image; multiple -p allowed
-f file                 a file to include from the image; multiple -f allowed
-x port                 a port to expose
-v                      verbose output

The following example creates a new nginx image, named stripped-nginx based on the official Docker image:

strip-docker-image -i nginx -t stripped-nginx  \
                           -x 80 \
                           -p nginx  \
                           -f /etc/passwd \
                           -f /etc/group \
                           -f '/lib/*/libnss*' \
                           -f /bin/ls \
                           -f /bin/cat \
                           -f /bin/sh \
                           -f /bin/mkdir \
                           -f /bin/ps \
                           -f /var/run \
                           -f /var/log/nginx \
                           -f /var/cache/nginx

Aside from the nginx package, we add the files /etc/passwd and /etc/group and the /lib/*/libnss* shared libraries. The directories /var/run, /var/log/nginx and /var/cache/nginx are required for NGiNX to operate. In addition, we added /bin/sh and a few handy utilities, just to be able to snoop around a little bit.

The stripped image has shrunk from the original 132.8 MB to just 7.3 MB, an incredible 5.4% of the original size, and is still fully operational!

docker images | grep nginx
...
stripped-nginx                     latest              d61912afaf16        21 seconds ago      7.297 MB
nginx                              1.9.2               319d2015d149        12 days ago         132.8 MB

And it works!

ID=$(docker run -P -d --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;")
docker run --link $ID:stripped cargonauts/toolbox-networking curl -s -D - http://stripped
...
HTTP/1.1 200 OK

For HAProxy, check out the examples directory.

Conclusion

It is possible to use the official images that are maintained and distributed by Docker and strip them down to their bare essentials, ready for use! It speeds up load times and reduces the attack surface of that specific container.

Check out the GitHub repository for the script and the manual page.

Please send me your examples of incredibly shrunk Docker images!

Categories: Companies

AgileCymru. Cardiff Bay, UK, July 7-8 2015

Scrum Expert - Tue, 06/30/2015 - 09:35
AgileCymru is a two-day Agile conference that takes place in Wales. It offers practical advice, techniques and lessons from practitioners, experts and beginners in the field of Agile software development and project management with Scrum. In the agenda of AgileCymru you can find topics like “How to fail a software project fast and efficiently?”, “Game of Scrums: Tribal behaviors with Agile at Scale”, “Dreaming – how business intent drives your Agile initiatives”, “From Agile projects to an Agile organization – The Journey”, “User Needs Driven Development – The Evolution of ...
Categories: Communities

Dev Lunch as a Power Tool

At my current company I’ve been going out to lunch pretty much every day I’m in the office. I know a lot of developers bring their lunches and like to eat alone in silence, but I’m probably less on the introvert side. I’ve always made it a point to have lunches with developers on a regular basis, and my current standing lunches have been an evolution of that.

Breaking bread and catching up on stories, families, and so on is a great way to bond with developers you don’t work with on a regular basis. Most of the engineers I go to lunch with work on another team, so I’m constantly keeping up to date with their current work and letting them know what my team is doing. As a bonus I get out of the office and eat some hot food, though not necessarily the healthiest fare. Sharing and communication between teams really helps, even in a flat startup organization like ours. I know some companies have meals catered in the office, which works as well, but a regular place nearby can serve the same purpose.

As a suggestion: if you tend to eat your lunches alone or at your desk, try to make a habit of eating out with some other devs, or even other employees, at least once a week.

Categories: Blogs

Do What Works… Even If It’s Not Agile.

Leading Agile - Mike Cottmeyer - Mon, 06/29/2015 - 22:19

I think I’ve come to the conclusion that ‘agile’ as we know it isn’t the best starting place for everyone who wants to adopt agile. Some folks, sure… everyone, probably not.

For many companies something closer to a ‘team-based, iterative and incremental delivery approach, using some agile tools and techniques, wrapped within a highly-governed Lean/Kanban based program and portfolio management framework’ is actually a better place to start.

Why?

Well, many organizations really struggle forming complete cross-functional teams, building backlogs, producing working tested software on regular intervals, and breaking dependencies. In the absence of these, agile is pretty much impossible.

Scrum isn’t impossible, mind you.

Agile is impossible.

So how does a ‘team-based, iterative and incremental delivery approach, using some agile tools and techniques, wrapped within a highly-governed Lean/Kanban based program and portfolio management framework’ actually work?

Let me explain.

First, I want to form Scrum-like teams around as much of the delivery organization as I can. I’ll form teams around shared components, feature teams, services teams, etc. Ideally, I’d like to form teams around things that would end up being real Scrum teams in some future state.

These Scrum-like teams operate under the same rules as a normal Scrum team: they are complete and cross-functional, internally self-organizing, with the same roles, ceremonies, and artifacts, but with a much higher focus on stabilizing velocity and less on adaptation.

Why ‘Scrum-like’ teams?

Dependencies. #dependenciesareevil

These teams have business process and requirements dependencies all around them. They have architectural dependencies between them. They have organizational dependencies due to the current management structure and likely some degree of matrixing.

Until those dependencies are broken, it’s tough to operate as a small independent Scrum team that can inspect and adapt their way into success. Those dependencies have to be managed until they can be broken. We can’t pretend they aren’t there.

How do I manage dependencies?

This is where the ‘Lean/Kanban’ based program and portfolio governance comes in. Explicitly model the value stream from requirements identification all the way through delivery. Anyone that can’t be on a Scrum team gets modeled in this value stream.

We like to form small, dedicated, cross-functional teams (explicitly not Scrum teams) to decompose requirements, deal with cross-cutting concerns, flow work into the Scrum-like teams, and coordinate batch size along with all the upstream and downstream coordination.

Early on, we might be doing 3-6-9 even 12-18 month roadmapping, creating 3-6 month feature level release plans, and fine grained, risk adjusted release backlogs at the story level. The goal is to nail quarterly commitments and to start driving visibility for longer term planning.

Not agile?

Don’t really care. This is a great first step toward untangling a legacy organization that is struggling to get a foothold adopting agile. For many companies we work with, this is agile enough, but ideally it is only the first step toward greater levels of agile maturity.

How do I increase maturity?

Goal #1 was to stabilize the system and build trust with the organization. This isn’t us against them, it’s not management against the people, it’s working within the constraints of the existing system to get better business results… and fast.

Over time, you want to continue working to reduce batch size at the enterprise level, you want to progressively reduce dependencies between teams, you want to start funding teams and business capabilities rather than projects, you want to invest to learn.

Lofty goals, huh?

That said, those are lofty goals for an organization that can’t form a team, build a backlog, or produce working tested software every few weeks. Those are lofty goals for an organization that is so mired in dependencies they can’t move, let alone self-organize.

Our belief is that we are past the early adopters. We are past the small project teams in big companies. We are past simply telling people to self-organize and inspect and adapt. We need a way to crack companies apart, to systematically refactor them into agile organizations.

Once we have the foundational structures, principles, guidelines in place, and a sufficient threshold of people is bought into the new system and understands how it operates, then we can start letting go, deprecating control structures, and really living the promise of agile.

The post Do What Works… Even If It’s Not Agile. appeared first on LeadingAgile.

Categories: Blogs

Book Review: Managing Humans

thekua.com@work - Mon, 06/29/2015 - 21:31

I remember hearing about Managing Humans several years ago, but I only recently got around to buying and reading it.

Managing Humans

It is written by the well-known Michael Lopp otherwise known as Rands, who blogs at Rands and Repose.

The title is a clever take on working in software development, and Rands shares his experiences working as a technical manager at various companies through his unique perspective and writing style. If you follow his blog, you can see it shine through in the way he tells stories and creates names for the stereotypes and situations you might encounter in the role of a Technical Manager.

He offers lots of useful advice covering a wide variety of topics, such as tips for interviewing, resigning, making meetings more effective, and dealing with specific types of characters, all of which is useful whether or not you are a Technical Manager.

He also covers a wider breadth of topics such as handling conflict, tips for hiring, motivation and managing upwards (the last particularly necessary in large corporations). I felt that some of the topics, such as tips for resigning (yourself, not handling a resignation from your team) and joining a start-up, fell outside the theme of “Managing Humans” and its intended audience of Technical Managers.

His stories describe the people he has worked with and the situations he has worked in. A lot of it will resonate with you if you work, or have worked, in a large software development firm or a “Borland” of our time.

The book is easy to digest in chunks, and with clear titles it is easy to pick up at different intervals or to go back to for future reference. It is less about a single message than a series of essays, each offering valuable insight into working with people in the software industry.

Categories: Blogs

Dealing with People You Can’t Stand

J.D. Meier's Blog - Mon, 06/29/2015 - 16:46

“If You Want To Go Fast, Go Alone. If You Want To Go Far, Go Together” – African Proverb

I blew the dust off some old posts to rekindle some of the most important information for work and life.

It’s about dealing with people you can’t stand.

Whether you think of them as jerks, bullies, or just difficult people, the better you can deal with difficult people, the better you can get things done and make things happen.

And the more you learn how to bring out the best, in people at their worst, the less you’ll find people you can’t stand.

How To Bring Out the Best in People at Their Worst (Including Yourself)

Everything I needed to learn about dealing with difficult people, I learned from the book Dealing with People You Can’t Stand: How to Bring Out the Best in People at Their Worst, by Dr. Rick Brinkman and Dr. Rick Kirschner.

It’s one of the most brilliant, thoughtful books I’ve ever read on interpersonal skills and dealing with all sorts of bad behaviors.

The real key to dealing with difficult behavior is more than just recognizing bad behaviors in other people.

It’s recognizing bad behaviors in yourself, the kind that contribute to and amplify other people’s bad behaviors.

The more you know, the more you grow, and this is truly one of those transformational books.

Learn How To Deal with Difficult People (and Gain Some Mad Interpersonal Skills)

I’ve completely re-written my post that provides an overview of the big ideas in Dealing with People You Can’t Stand:

Dealing with People You Can’t Stand

Even better, I’ve re-written all of my posts that talk through the 10 Types of Difficult People, and what to do about them.

I have to warn you:  Once you learn the 10 Types of Difficult People, you’ll be using the labels to classify bad behaviors that you experience in the halls, in meetings, behind your back, etc.

With that in mind, here they are …

10 Types of Difficult People

Here are the 10 Types of Difficult People at a glance:

  1. Grenade Person – After a brief period of calm, the Grenade person explodes into unfocused ranting and raving about things that have nothing to do with the present circumstances.
  2. Know-It-Alls – Seldom in doubt, the Know-It-All person has a low tolerance for correction and contradiction. If something goes wrong, however, the Know-It-All will speak with the same authority about who’s to blame – you!
  3. Maybe Person – In a moment of decision, the Maybe Person procrastinates in the hope that a better choice will present itself.
  4. No Person – A No Person kills momentum and creates friction for you. More deadly to morale than a speeding bullet, more powerful than hope, able to defeat big ideas with a single syllable.
  5. Nothing Person – A Nothing Person doesn’t contribute to the conversation. No verbal feedback, no nonverbal feedback, Nothing. What else could you expect from … the Nothing Person.
  6. Snipers – Whether through rude comments, biting sarcasm, or a well-timed roll of the eyes, making you look foolish is the Sniper’s specialty.
  7. Tanks – The Tank is confrontational, pointed and angry, the ultimate in pushy and aggressive behavior.
  8. Think-They-Know-It-Alls – Think-They-Know-It-All people can’t fool all the people all the time, but they can fool some of the people enough of the time, and enough of the people all of the time – all for the sake of getting some attention.
  9. Whiners – Whiners feel helpless and overwhelmed by an unfair world. Their standard is perfection, and no one and nothing measures up to it.
  10. Yes Person – In an effort to please people and avoid confrontation, Yes People say “yes” without thinking things through.

I warned you.  Are you already thinking about some Snipers in a few meetings that you have, or is there a Yes Person driving you nuts (or are you that Yes Person?)

Have you talked to a Think-They-Know-It-All lately, or worse, a Know-It-All?

Never fear, I’ve included actionable insights and recommendations for dealing with all the various bad behaviors you’ll encounter.

The Lens of Human Understanding

If all this talk about dealing with difficult people and silly labels seems like a gimmick, it’s not. It’s actually deep insight rooted in a simple but powerful framework that Dr. Rick Brinkman and Dr. Rick Kirschner refer to as the Lens of Human Understanding:

The Lens of Human Understanding

Once I learned The Lens of Human Understanding, so many things fell into place.

Not only did I understand myself better, but I could instantly see what was driving other people, and how my behavior would either create more conflict or resolve it.

But when you don’t know what makes people tick, it’s very easy to get ticked off, or to tick them off.

Here’s looking at you … and other people … and their behaviors … in a brand new way.

You Might Also Like

25 Books the Most Successful Microsoft Leaders Read and Do

Interpersonal Skills Books

Personal Development Hub on Sources of Insight

Personal Development Resources at Sources of Insight

The Great Leadership Quotes Collection

Categories: Blogs

Can You Replace User Stories with Use Cases?

Scrum Expert - Mon, 06/29/2015 - 15:55
Agile requirements are a key success factor for Scrum projects. Many people criticize the minimalist format of user stories, often forgetting that they are mainly a support for a conversation and are not intended to fully document requirements. In this article, Paul Raymond discusses how classical use cases can be used to expand user stories during requirements elicitation in Scrum sprints. Paul Raymond, Inflectra Corporation, http://www.inflectra.com/ User Stories are often characterized by relatively short, uncomplicated and informal descriptions, whereas Use Cases are often longer, more formally structured descriptions of not only ...
Categories: Communities

R: Speeding up the Wimbledon scraping job

Mark Needham - Mon, 06/29/2015 - 07:36

Over the past few days I’ve written a few blog posts about a Wimbledon data set I’ve been building, and after running the scripts a few times I noticed that they were taking much longer to run than I expected.

To recap, I started out with the following function which takes in a URI and returns a data frame containing a row for each match:

library(rvest)
library(dplyr)
library(stringr)  # needed for str_trim(), used below
 
scrape_matches1 = function(uri) {
  matches = data.frame()
 
  s = html(uri)
  rows = s %>% html_nodes("div#scoresResultsContent tr")
  for(row in rows) {  
    players = row %>% html_nodes("td.day-table-name a")
    seedings = row %>% html_nodes("td.day-table-seed")
    score = row %>% html_node("td.day-table-score a")
    flags = row %>% html_nodes("td.day-table-flag img")
 
    if(!is.null(score)) {
      player1 = players[1] %>% html_text() %>% str_trim()
      seeding1 = ifelse(!is.na(seedings[1]), seedings[1] %>% html_node("span") %>% html_text() %>% str_trim(), NA)
      flag1 = flags[1] %>% html_attr("alt")
 
      player2 = players[2] %>% html_text() %>% str_trim()
      seeding2 = ifelse(!is.na(seedings[2]), seedings[2] %>% html_node("span") %>% html_text() %>% str_trim(), NA)
      flag2 = flags[2] %>% html_attr("alt")
 
      matches = rbind(data.frame(winner = player1, 
                                 winner_seeding = seeding1, 
                                 winner_flag = flag1,
                                 loser = player2, 
                                 loser_seeding = seeding2,
                                 loser_flag = flag2,
                                 score = score %>% html_text() %>% str_trim(),
                                 round = round), matches)      
    } else {
      round = row %>% html_node("th") %>% html_text()
    }
  } 
  return(matches)
}

Let’s run it to get an idea of the data that it returns:

matches1 = scrape_matches1("http://www.atpworldtour.com/en/scores/archive/wimbledon/540/2014/results")
 
> matches1 %>% filter(round %in% c("Finals", "Semi-Finals", "Quarter-Finals"))
           winner winner_seeding winner_flag           loser loser_seeding loser_flag            score          round
1    Milos Raonic            (8)         CAN    Nick Kyrgios          (WC)        AUS    674 62 64 764 Quarter-Finals
2   Roger Federer            (4)         SUI   Stan Wawrinka           (5)        SUI     36 765 64 64 Quarter-Finals
3 Grigor Dimitrov           (11)         BUL     Andy Murray           (3)        GBR        61 764 62 Quarter-Finals
4  Novak Djokovic            (1)         SRB     Marin Cilic          (26)        CRO  61 36 674 62 62 Quarter-Finals
5   Roger Federer            (4)         SUI    Milos Raonic           (8)        CAN         64 64 64    Semi-Finals
6  Novak Djokovic            (1)         SRB Grigor Dimitrov          (11)        BUL    64 36 762 767    Semi-Finals
7  Novak Djokovic            (1)         SRB   Roger Federer           (4)        SUI 677 64 764 57 64         Finals

As I mentioned, it’s quite slow but I thought I’d wrap it in system.time so I could see exactly how long it was taking:

> system.time(scrape_matches1("http://www.atpworldtour.com/en/scores/archive/wimbledon/540/2014/results"))
   user  system elapsed 
 25.570   0.111  31.416

About 30 seconds! The first thing I tried was downloading the file separately and running the function against the local file:

> system.time(scrape_matches1("data/raw/2014.html"))
   user  system elapsed 
 25.662   0.123  25.863

Hmmm, that’s only saved us 5 seconds so the bottleneck must be somewhere else. Still there’s no point making a HTTP request every time we run the script so we’ll stick with the local file version.

While browsing rvest’s vignette I noticed a function called html_table which I was curious about. I decided to try and replace some of my code with a call to that:

library(zoo)
 
matches2 = html("data/raw/2014.html") %>% 
  html_node("div#scoresResultsContent table.day-table") %>% html_table(header = FALSE) %>% 
  mutate(X1 = ifelse(X1 == "", NA, X1)) %>%
  mutate(round = ifelse(grepl("\\([0-9]\\)|\\(", X1), NA, X1)) %>% 
  mutate(round = na.locf(round)) %>%
  filter(!is.na(X8)) %>%
  select(winner = X3, winner_seeding = X1, loser = X7, loser_seeding = X5, score = X8, round)
 
> matches2 %>% filter(round %in% c("Finals", "Semi-Finals", "Quarter-Finals"))
           winner winner_seeding           loser loser_seeding            score          round
1  Novak Djokovic            (1)   Roger Federer           (4) 677 64 764 57 64         Finals
2  Novak Djokovic            (1) Grigor Dimitrov          (11)    64 36 762 767    Semi-Finals
3   Roger Federer            (4)    Milos Raonic           (8)         64 64 64    Semi-Finals
4  Novak Djokovic            (1)     Marin Cilic          (26)  61 36 674 62 62 Quarter-Finals
5 Grigor Dimitrov           (11)     Andy Murray           (3)        61 764 62 Quarter-Finals
6   Roger Federer            (4)   Stan Wawrinka           (5)     36 765 64 64 Quarter-Finals
7    Milos Raonic            (8)    Nick Kyrgios          (WC)    674 62 64 764 Quarter-Finals

I had to do some slightly clever stuff to get the ‘round’ column into shape using zoo’s na.locf function, which I wrote about previously.

Unfortunately I couldn’t work out how to extract the flag with this version – that value is hidden in the ‘alt’ tag of an img and presumably html_table is just grabbing the text value of each cell. This version is much quicker though!

system.time(html("data/raw/2014.html") %>% 
  html_node("div#scoresResultsContent table.day-table") %>% html_table(header = FALSE) %>% 
  mutate(X1 = ifelse(X1 == "", NA, X1)) %>%
  mutate(round = ifelse(grepl("\\([0-9]\\)|\\(", X1), NA, X1)) %>% 
  mutate(round = na.locf(round)) %>%
  filter(!is.na(X8)) %>%
  select(winner = X3, winner_seeding = X1, loser = X7, loser_seeding = X5, score = X8, round))
 
   user  system elapsed 
  0.545   0.002   0.548

What I realised from writing this version is that I need to match all the columns with one call to html_nodes rather than getting the row and then each column in a loop.

I rewrote the function to do that:

scrape_matches3 = function(uri) {
  s = html(uri)
 
  players  = s %>% html_nodes("div#scoresResultsContent tr td.day-table-name a")
  seedings = s %>% html_nodes("div#scoresResultsContent tr td.day-table-seed")
  scores   = s %>% html_nodes("div#scoresResultsContent tr td.day-table-score a")
  flags    = s %>% html_nodes("div#scoresResultsContent tr td.day-table-flag img") %>% html_attr("alt") %>% str_trim()
 
  matches3 = data.frame(
    winner         = sapply(seq(1,length(players),2),  function(idx) players[[idx]] %>% html_text()),
    winner_seeding = sapply(seq(1,length(seedings),2), function(idx) seedings[[idx]] %>% html_text() %>% str_trim()),
    winner_flag    = sapply(seq(1,length(flags),2),    function(idx) flags[[idx]]),  
    loser          = sapply(seq(2,length(players),2),  function(idx) players[[idx]] %>% html_text()),
    loser_seeding  = sapply(seq(2,length(seedings),2), function(idx) seedings[[idx]] %>% html_text() %>% str_trim()),
    loser_flag     = sapply(seq(2,length(flags),2),    function(idx) flags[[idx]]),
    score          = sapply(scores,                    function(score) score %>% html_text() %>% str_trim())
  )
  return(matches3)
}

Let’s run and time that to check we’re getting back the right results in a timely manner:

> matches3 = scrape_matches3("data/raw/2014.html")
> matches3 %>% sample_n(10)
                   winner winner_seeding winner_flag               loser loser_seeding loser_flag         score
70           David Ferrer            (7)         ESP Pablo Carreno Busta                      ESP  60 673 61 61
128        Alex Kuznetsov           (26)         USA         Tim Smyczek           (3)        USA   46 63 63 63
220   Rogerio Dutra Silva                        BRA   Kristijan Mesaros                      CRO         62 63
83         Kevin Anderson           (20)         RSA        Aljaz Bedene          (LL)        GBR      63 75 62
73          Kei Nishikori           (10)         JPN   Kenny De Schepper                      FRA     64 765 75
56  Roberto Bautista Agut           (27)         ESP         Jan Hernych           (Q)        CZE   75 46 62 62
138            Ante Pavic                        CRO        Marc Gicquel          (29)        FRA  46 63 765 64
174             Tim Puetz                        GER     Ruben Bemelmans                      BEL         64 62
103        Lleyton Hewitt                        AUS   Michal Przysiezny                      POL 62 6714 61 64
35          Roger Federer            (4)         SUI       Gilles Muller           (Q)        LUX      63 75 63
 
> system.time(scrape_matches3("data/raw/2014.html"))
   user  system elapsed 
  0.815   0.006   0.827

It’s still quick – a bit slower than html_table but we can deal with that. As you can see, I also had to add some logic to separate the values for the winners and losers – the players, seeds, flags come back as as one big list. The odd rows represent the winner; the even rows the loser.

Annoyingly we’ve now lost the ’round’ column because that appears as a table heading so we can’t extract it the same way. I ended up cheating a bit to get it to work by working out how many matches each round should contain and generated a vector with that number of entries:

raw_rounds = s %>% html_nodes("th") %>% html_text()
 
> raw_rounds
 [1] "Finals"               "Semi-Finals"          "Quarter-Finals"       "Round of 16"          "Round of 32"         
 [6] "Round of 64"          "Round of 128"         "3rd Round Qualifying" "2nd Round Qualifying" "1st Round Qualifying"
 
rounds = c( sapply(0:6, function(idx) rep(raw_rounds[[idx + 1]], 2 ** idx)) %>% unlist(),
            sapply(7:9, function(idx) rep(raw_rounds[[idx + 1]], 2 ** (idx - 3))) %>% unlist())
 
> rounds[1:10]
 [1] "Finals"         "Semi-Finals"    "Semi-Finals"    "Quarter-Finals" "Quarter-Finals" "Quarter-Finals" "Quarter-Finals"
 [8] "Round of 16"    "Round of 16"    "Round of 16"
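Those repeat counts follow the knockout doubling rule; a quick sanity check of the arithmetic (in Python, purely for illustration):

```python
# Main draw: 1 final, 2 semi-finals, 4 quarter-finals, ..., 64 Round of 128 matches
main_draw = [2 ** i for i in range(7)]             # [1, 2, 4, 8, 16, 32, 64]

# Qualifying: 16, 32 and 64 matches for the 3rd, 2nd and 1st qualifying rounds
qualifying = [2 ** (i - 3) for i in range(7, 10)]  # [16, 32, 64]

# A 128-player knockout always has 127 matches (each match eliminates one player)
assert sum(main_draw) == 127
assert sum(qualifying) == 112
```

That total of 239 entries is what the rounds vector needs to line up with the scraped matches.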

Let’s put that code into the function and see if we end up with the same resulting data frame:

scrape_matches4 = function(uri) {
  s = html(uri)
 
  players  = s %>% html_nodes("div#scoresResultsContent tr td.day-table-name a")
  seedings = s %>% html_nodes("div#scoresResultsContent tr td.day-table-seed")
  scores   = s %>% html_nodes("div#scoresResultsContent tr td.day-table-score a")
  flags    = s %>% html_nodes("div#scoresResultsContent tr td.day-table-flag img") %>% html_attr("alt") %>% str_trim()
 
  raw_rounds = s %>% html_nodes("th") %>% html_text()
  rounds = c( sapply(0:6, function(idx) rep(raw_rounds[[idx + 1]], 2 ** idx)) %>% unlist(),
              sapply(7:9, function(idx) rep(raw_rounds[[idx + 1]], 2 ** (idx - 3))) %>% unlist())
 
  matches4 = data.frame(
    winner         = sapply(seq(1,length(players),2),  function(idx) players[[idx]] %>% html_text()),
    winner_seeding = sapply(seq(1,length(seedings),2), function(idx) seedings[[idx]] %>% html_text() %>% str_trim()),
    winner_flag    = sapply(seq(1,length(flags),2),    function(idx) flags[[idx]]),  
    loser          = sapply(seq(2,length(players),2),  function(idx) players[[idx]] %>% html_text()),
    loser_seeding  = sapply(seq(2,length(seedings),2), function(idx) seedings[[idx]] %>% html_text() %>% str_trim()),
    loser_flag     = sapply(seq(2,length(flags),2),    function(idx) flags[[idx]]),
    score          = sapply(scores,                    function(score) score %>% html_text() %>% str_trim()),
    round          = rounds
  )
  return(matches4)
}
 
matches4 = scrape_matches4("data/raw/2014.html")
 
> matches4 %>% filter(round %in% c("Finals", "Semi-Finals", "Quarter-Finals"))
           winner winner_seeding winner_flag           loser loser_seeding loser_flag            score          round
1  Novak Djokovic            (1)         SRB   Roger Federer           (4)        SUI 677 64 764 57 64         Finals
2  Novak Djokovic            (1)         SRB Grigor Dimitrov          (11)        BUL    64 36 762 767    Semi-Finals
3   Roger Federer            (4)         SUI    Milos Raonic           (8)        CAN         64 64 64    Semi-Finals
4  Novak Djokovic            (1)         SRB     Marin Cilic          (26)        CRO  61 36 674 62 62 Quarter-Finals
5 Grigor Dimitrov           (11)         BUL     Andy Murray           (3)        GBR        61 764 62 Quarter-Finals
6   Roger Federer            (4)         SUI   Stan Wawrinka           (5)        SUI     36 765 64 64 Quarter-Finals
7    Milos Raonic            (8)         CAN    Nick Kyrgios          (WC)        AUS    674 62 64 764 Quarter-Finals

We shouldn’t have added much to the time but let’s check:

> system.time(scrape_matches4("data/raw/2014.html"))
   user  system elapsed 
  0.816   0.004   0.824

Sweet. We’ve saved ourselves 29 seconds per page as long as the number of rounds stayed constant over the years. For the 10 years that I’ve looked at it has but I expect if you go back further the draw sizes will have been different and our script would break.

For now though this will do!

Categories: Blogs

R: dplyr – Update rows with earlier/previous rows values

Mark Needham - Mon, 06/29/2015 - 00:30

Recently I had a data frame which contained a column which had mostly empty values:

> data.frame(col1 = c(1,2,3,4,5), col2  = c("a", NA, NA , "b", NA))
  col1 col2
1    1    a
2    2 <NA>
3    3 <NA>
4    4    b
5    5 <NA>

I wanted to fill in the NA values with the last non-NA value from that column, so I want the data frame to look like this:

1    1    a
2    2    a
3    3    a
4    4    b
5    5    b

I spent ages searching around before I came across the na.locf function in the zoo library which does the job:

library(zoo)
library(dplyr)
 
> data.frame(col1 = c(1,2,3,4,5), col2  = c("a", NA, NA , "b", NA)) %>% 
    do(na.locf(.))
  col1 col2
1    1    a
2    2    a
3    3    a
4    4    b
5    5    b

This will fill in the missing values for every column, so if we had a third column with missing values it would populate those too:

> data.frame(col1 = c(1,2,3,4,5), col2  = c("a", NA, NA , "b", NA), col3 = c("A", NA, "B", NA, NA)) %>% 
    do(na.locf(.))
 
  col1 col2 col3
1    1    a    A
2    2    a    A
3    3    a    B
4    4    b    B
5    5    b    B

If we only want to populate ‘col2’ and leave ‘col3’ as it is we can apply the function specifically to that column:

> data.frame(col1 = c(1,2,3,4,5), col2  = c("a", NA, NA , "b", NA), col3 = c("A", NA, "B", NA, NA)) %>% 
    mutate(col2 = na.locf(col2))
  col1 col2 col3
1    1    a    A
2    2    a <NA>
3    3    a    B
4    4    b <NA>
5    5    b <NA>
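na.locf is just “last observation carried forward”; a minimal pure-Python sketch of the same idea, for readers outside R (the locf helper is illustrative, not a real library function):

```python
def locf(values, missing=None):
    # Replace each missing entry with the most recent non-missing value;
    # leading missing entries stay missing.
    filled, last = [], missing
    for v in values:
        if v is not missing:
            last = v
        filled.append(last)
    return filled

col2 = ["a", None, None, "b", None]
filled = locf(col2)  # ["a", "a", "a", "b", "b"]
```

Applying it to one column at a time corresponds to the mutate(col2 = na.locf(col2)) version above.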

It’s quite a neat function and certainly comes in helpful when cleaning up data sets which don’t tend to be as uniform as you’d hope!

Categories: Blogs

An Illustrated Guide To Client Projects: Requirements vs Reality

Derick Bailey - new ThoughtStream - Sun, 06/28/2015 - 03:33

What the client claims they need vs what they end up getting are often two very different things. This isn’t always a bad thing, but it certainly can be. To that end, I wanted to illustrate what the reality of a client project often is vs what the client claimed it should have been.

How The Client Described The Project

In the client’s words, everything is an absolute must requirement. There are no features they can live without (in spite of them currently running their business without most of the “required” features).

Project requirements

The project is large, complex, expensive, and requires significant engineering effort.

How The Client Budgeted For The Project

Unless you are working with a client that has a history of using outsourced I.T. / development / design services, there is usually a very large gap between what the client claims they must have vs what they budgeted to get. The budget is usually enough to get the general shape of the requirements, but will be missing a lot of the fine detail.

Client budget

How The Project Was Delivered

Inevitably, there will be problems along the way. Technical issues will pop up. Interaction with the client will stall as they become busy. Questions will be left unanswered, and oh by the way, they won’t pay for “whistles and bells” or “design”. It just needs to be simple and work. With the constant battles that are fought, the project typically lacks important details even though it does solve the core problem.

Project delivery

What The Client Actually Needs

When it’s all said and done, the client might be reaching for this – with you, right there with them – as the answer to their actual problems.

What the client needs

Are You Sure You Need That Client?

Be careful who you take on as a client. It’s tempting to think that you can’t say no because you need the money, or frankly because you just don’t think you can ever say no. But you need to learn to say no and be picky about the clients you take. 

The beginning of a relationship with any client will set the tone. Are they argumentative, over-haggling and nit-picky about inane details? Or do they enjoy the conversation, speak openly and honestly, and look at you as a partner in the endeavor?

Your time and talent are worth more than you think. Be sure you are treated with the respect you deserve, and always treat the client with the respect that you wish you were getting.

Categories: Blogs

R: Command line – Error in GenericTranslator$new : could not find function “loadMethod”

Mark Needham - Sun, 06/28/2015 - 00:47

I’ve been reading Text Processing with Ruby over the last week or so and one of the ideas the author describes is setting up your scripts so you can run them directly from the command line.

I wanted to do this with my Wimbledon R script and wrote the following script which uses the ‘Rscript’ executable so that R doesn’t launch in interactive mode:

wimbledon

#!/usr/bin/env Rscript
 
library(rvest)
library(dplyr)
library(stringr)
library(readr)
 
# stuff

Then I tried to run it:

$ time ./wimbledon
 
...
 
Error in GenericTranslator$new : could not find function "loadMethod"
Calls: write.csv ... html_extract_n -> <Anonymous> -> Map -> mapply -> <Anonymous> -> $
Execution halted
 
real	0m1.431s
user	0m1.127s
sys	0m0.078s

As the error suggests, the script fails when trying to write to a CSV file – unlike the interactive R shell, Rscript doesn’t attach the methods package by default. It turns out adding the following line to our script is all we need:

library(methods)

So we end up with this:

#!/usr/bin/env Rscript
 
library(methods)
library(rvest)
library(dplyr)
library(stringr)
library(readr)

And when we run that all is well!

Categories: Blogs

R: dplyr – squashing multiple rows per group into one

Mark Needham - Sun, 06/28/2015 - 00:36

I spent a bit of the day working on my Wimbledon data set and the next thing I explored is all the people that have beaten Andy Murray in the tournament.

The following dplyr query gives us the names of those people and the year the match took place:

library(dplyr)
 
> main_matches %>% filter(loser == "Andy Murray") %>% select(winner, year)
 
            winner year
1  Grigor Dimitrov 2014
2    Roger Federer 2012
3     Rafael Nadal 2011
4     Rafael Nadal 2010
5     Andy Roddick 2009
6     Rafael Nadal 2008
7 Marcos Baghdatis 2006
8 David Nalbandian 2005

As you can see, Rafael Nadal shows up multiple times. I wanted to get one row per player and list all the years in a single column.

This was my initial attempt:

> main_matches %>% filter(loser == "Andy Murray") %>% 
     group_by(winner) %>% summarise(years = paste(year))
Source: local data frame [6 x 2]
 
            winner years
1     Andy Roddick  2009
2 David Nalbandian  2005
3  Grigor Dimitrov  2014
4 Marcos Baghdatis  2006
5     Rafael Nadal  2011
6    Roger Federer  2012

Unfortunately it just gives you the last matching row per group, which isn’t quite what we want. I realised my mistake while trying to pass a vector into paste and noticing that a vector came back when I’d expected a string:

> paste(c(2008,2009,2010))
[1] "2008" "2009" "2010"

The missing argument was ‘collapse’ – something I’d come across when using plyr last year:

> paste(c(2008,2009,2010), collapse=", ")
[1] "2008, 2009, 2010"
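R’s paste with a collapse argument is the same operation as a string join in most other languages; for comparison, in Python:

```python
years = [2008, 2009, 2010]
joined = ", ".join(str(y) for y in years)  # collapse the list into one string
```

Without the join (or without collapse in R) you keep a collection of strings rather than a single one.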

Now, if we apply that to our original function:

> main_matches %>% filter(loser == "Andy Murray") %>% 
     group_by(winner) %>% summarise(years = paste(year, collapse=", "))
Source: local data frame [6 x 2]
 
            winner            years
1     Andy Roddick             2009
2 David Nalbandian             2005
3  Grigor Dimitrov             2014
4 Marcos Baghdatis             2006
5     Rafael Nadal 2011, 2010, 2008
6    Roger Federer             2012

That’s exactly what we want. Let’s tidy that up a bit:

> main_matches %>% filter(loser == "Andy Murray") %>% 
     group_by(winner) %>% arrange(year) %>%
     summarise(years  = paste(year, collapse =","), times = length(year))  %>%
     arrange(desc(times), years)
Source: local data frame [6 x 3]
 
            winner          years times
1     Rafael Nadal 2008,2010,2011     3
2 David Nalbandian           2005     1
3 Marcos Baghdatis           2006     1
4     Andy Roddick           2009     1
5    Roger Federer           2012     1
6  Grigor Dimitrov           2014     1
Categories: Blogs

Link: Slashdot on Mob Programming


Slashdot has a post on mob programming that, as usual, has brought out the poorly socialized extreme introverts and their invective. There are always interesting and insightful comments as well.  I recommend checking it out.  The post links to some studies about mob programming that are also interesting.


The post Link: Slashdot on Mob Programming appeared first on Agile Advice.

Categories: Blogs

Scala development with GitHub's Atom editor

Xebia Blog - Sat, 06/27/2015 - 15:57

GitHub recently released version 1.0 of their Atom editor. This post gives a rough overview of its Scala support.

Basic features

Basic features such as Scala syntax highlighting are provided by the language-scala plugin.

Some work on worksheets as found in e.g. Eclipse has been done in the scala-worksheet-plus plugin, but this is still missing major features and not very useful at this time.

Navigation and completion Ctags

Atom supports basic 'Go to Declaration' (ctrl-alt-down) and 'Search symbol' (cmd-shift-r) support by way of the default ctags-based symbols-view.

While there are multiple sbt plugins for generating ctags, the easiest seems to be to have Ensime download the sources (more on that below) and invoke ctags manually: put this configuration in your home directory and run the 'ctags' command from your project root.

This is useful for searching for symbols, but limited for finding declarations: for example, when checking the declaration for Success, ctags doesn't know whether this is scala.util.Success, akka.actor.Status.Success, spray.http.StatusCodes.Success or some other 3rd-party or local symbol with that name.

Ensime

This is where the Ensime plugin comes in.

Ensime is a service for Scala IDE support, originally written for the Scala support in Emacs. The project metadata for Ensime can be generated with 'sbt gen-ensime' from the ensime-sbt sbt plugin.

Usage

Start the Ensime server from Atom with 'cmd-shift-p' 'Ensime: start'. After a small pause the status bar proclaims 'Indexer ready!' and you should be good to go.

At this point the main features are 'jump to definition' (alt-click), hover for type info, and auto-completion:

atom.io ensime completion

There are some rough edges, but this is a promising start based on a solid foundation.

Conclusions

While Atom is already a pleasant, modern, open source, cross platform editor, it is clearly still early days.

The Scala support in Atom is not yet as polished as in IDEs such as IntelliJ IDEA, or as stable as in more mature editors such as Sublime Text, but it is already practically useful and has serious potential. Startup is not instant, but I did not notice a 'sluggish feel' as reported by earlier reviewers.

Feel free to share your experiences in the comments, I will keep this post updated as the tools - and our experience with them - evolve.

Categories: Companies

R: ggplot – Show discrete scale even with no value

Mark Needham - Sat, 06/27/2015 - 00:48

As I mentioned in a previous blog post, I’ve been scraping data for the Wimbledon tennis tournament, and having got the data for the last ten years I wrote a query using dplyr to find out how players did each year over that period.

I ended up with the following functions to filter my data frame of all the matches:

round_reached = function(player, main_matches) {
  furthest_match = main_matches %>% 
    filter(winner == player | loser == player) %>% 
    arrange(desc(round)) %>% 
    head(1)  
 
    return(ifelse(furthest_match$winner == player, "Winner", as.character(furthest_match$round)))
}
 
player_performance = function(name, matches) {
  player = data.frame()
  for(y in 2005:2014) {
    round = round_reached(name, filter(matches, year == y))
    if(length(round) == 1) {
      player = rbind(player, data.frame(year = y, round = round))      
    } else {
      player = rbind(player, data.frame(year = y, round = "Did not enter"))
    } 
  }
  return(player)
}

When we call that function we see the following output:

> player_performance("Andy Murray", main_matches)
   year          round
1  2005    Round of 32
2  2006    Round of 16
3  2007  Did not enter
4  2008 Quarter-Finals
5  2009    Semi-Finals
6  2010    Semi-Finals
7  2011    Semi-Finals
8  2012         Finals
9  2013         Winner
10 2014 Quarter-Finals

I wanted to create a chart showing Murray’s progress over the years with the round reached on the y axis and the year on the x axis. In order to do this I had to make sure the ‘round’ column was being treated as a factor variable:

df = player_performance("Andy Murray", main_matches)
 
rounds = c("Did not enter", "Round of 128", "Round of 64", "Round of 32", "Round of 16", "Quarter-Finals", "Semi-Finals", "Finals", "Winner")
df$round = factor(df$round, levels =  rounds)
 
> df$round
 [1] Round of 32    Round of 16    Did not enter  Quarter-Finals Semi-Finals    Semi-Finals    Semi-Finals   
 [8] Finals         Winner         Quarter-Finals
Levels: Did not enter Round of 128 Round of 64 Round of 32 Round of 16 Quarter-Finals Semi-Finals Finals Winner

Now that we’ve got that we can plot his progress:

ggplot(aes(x = year, y = round, group=1), data = df) + 
    geom_point() + 
    geom_line() + 
    scale_x_continuous(breaks=df$year) + 
    scale_y_discrete(breaks = rounds)

2015 06 26 23 37 32

This is a good start but we’ve lost the rounds which don’t have a corresponding entry on the y axis. I’d like to keep them so it’s easier to compare the performance of different players.

It turns out that all we need to do is pass ‘drop = FALSE’ to scale_y_discrete and it will work exactly as we want:

ggplot(aes(x = year, y = round, group=1), data = df) + 
    geom_point() + 
    geom_line() + 
    scale_x_continuous(breaks=df$year) + 
    scale_y_discrete(breaks = rounds, drop = FALSE)

2015 06 26 23 41 01

Neat. Now let’s have a look at the performances of some of the other top players:

draw_chart = function(player, main_matches){
  df = player_performance(player, main_matches)
  df$round = factor(df$round, levels =  rounds)
 
  ggplot(aes(x = year, y = round, group=1), data = df) + 
    geom_point() + 
    geom_line() + 
    scale_x_continuous(breaks=df$year) + 
    scale_y_discrete(breaks = rounds, drop=FALSE) + 
    ggtitle(player) + 
    theme(axis.text.x=element_text(angle=90, hjust=1))
}
 
a = draw_chart("Andy Murray", main_matches)
b = draw_chart("Novak Djokovic", main_matches)
c = draw_chart("Rafael Nadal", main_matches)
d = draw_chart("Roger Federer", main_matches)
 
library(gridExtra)
grid.arrange(a,b,c,d, ncol=2)

2015 06 26 23 46 15

And that’s all for now!

Categories: Blogs

Creative Collaboration

Doc On Dev - Michael Norton - Fri, 06/26/2015 - 20:17
I had the pleasure of presenting at NDC Oslo last week and the additional privilege of co-presenting a collaboration workshop along with Denise Jacobs and Carl Smith.

Creative Collaboration: Tools for Teams from Doc Norton
In this workshop, we cover Fist to Five voting, 5x7 Prioritization, and Collaboration Contracts. We had around 30 attendees for the workshop, allowing us to create 4 groups of approximately 8 people each.

After some ice-breakers, groups came up with product ideas by mashing two random words together and using fist to five voting to rapidly identify a product idea they could all agree on. This was easier for some groups than others. It was interesting to see the dynamics as some groups discussed each combination prior to voting, some groups created multiple options before voting, and other groups ripped through options and found their product in a matter of minutes (as intended). It is often difficult for us to give up old habits, even in pursuit of a better way.

Next up was brainstorming and prioritizing a list of items that needed to be done in order to launch our new awesome concept at a key conference in only three months. We started with each individual member writing at least two items they thought were critically important to prepare for the conference. We then removed duplicate items for each group and used 5x7 prioritization to come up with the top most important items for each group. At the end of the process, teams agreed that the resultant priorities were good and many were surprised at how easy and equitable the process was.

Finally, each group took their top 4 items and ran collaboration contracts against them. We did this in two passes: running the basic contract, then resolving conflicts. We had one group that ended up with no conflicts. The other groups worked through their conflicts in relatively short order and the quality of conversation was high throughout. One group realized that even after they resolved the obvious conflicts, they had one individual who was in a decision-making role on all four items. While this is not technically a conflict on a contract, it does indicate an issue. After some additional discussion, they were able to adjust the overall contract to everyone's satisfaction and eliminate the potential bottleneck.

This was our first time delivering this workshop and I thought it went quite well.

I'm planning to add Parallel Thinking to the workshop along with a couple more games to create a solid half-day collaboration tools workshop that can work for teams or groups.

If you're interested in this workshop for your team, let me know. Maybe, if we're lucky, Denise and Carl can come along too.

Categories: Blogs