Suggie is a back end Clojure app that is responsible for maintaining eight collections of stutters. The collections live in Redis and are consumed by two front-end apps. What goes in which collections, and how, is governed by a number of business rules. For example, one kind of new stutter produces two entries in one of the collections: the first being the new stutter and the second being an old one selected by a Lucene search.
The collections and business rules were added one by one. I wasn’t vigilant enough about keeping the code clean as I added them. At a point where I had a little bit of slack, I decided to spend up to two ideal days cleaning up the code (and adding one small new feature). I failed and ended up reverting about 70% of my changes.
What have I learned (or, mostly, relearned)?
Let’s start before I started:
It’s clear that I let the code get too messy before reacting. I should have made a smaller effort, earlier, to clean it up.
In general, I find that switching out of the “coding register” into the “explaining register” (talking vs. typing) helps me realize I’m going into the weeds. Because we’re a two-programmer shop, with only one of us (not me) competent at the front end, and we’re under time pressure (first big release of product 3, going for series A funding), I worked on Suggie too much without discussing my changes with Colin.
Relatedly, pairing would have helped. Unfortunately, Colin and I are of different editor religions - he’s vim, I’m emacs - and that has a surprisingly negative effect on pairing. We need to figure out how to do better.
As I did the refactoring, I’d say I had two major failures.
I read over the code carefully and made diagrams with circles and arrows and a paragraph on the back of each one explaining what each one was. That was useful. But what I under-thought was the trajectory of the refactorings: which ones should come first, which next, so as to provide the most “You’re going askew!” information soonest. (Alternately: get the most innocuous and obvious changes out of the way first, so that they wouldn’t distract/tempt me as I was doing the more challenging ones.)
I realized there were four design issues with this code.
The terminology was out of date. (Bad names.)
There was the oh-so-common problem that all the communication with Redis had gotten lumped into a single namespace (think “class”). The same code that put stutters into Redis hashes put ordered sequences of references-to-stutters into Redis sorted sets - and also put references-to-stutters into Redis plain sets. The code cried out to be separated into four different namespaces. Alone, that would have been a straightforward refactoring. But…
But there was also the problem that the existing code was inefficient, in that it didn’t make good use of Redis’s pipelining. I want to be clear here: our initial move to Redis was motivated by real, measurable latency problems. And the switch to Redis was successful. But now that we were committed to Redis, I fooled myself in a particular way: “Efficiency’s good, all else being equal. We don’t know that we need pipelining here, but I see a pretty clear path toward just dropping it in during the refactoring that I’m doing anyway. So why not do it along the way?”
Why not? Because, as it turned out, I’d have gone a lot faster if I’d first solved either problem 1 or 2 and then made the changes required to add pipelining. (That’s what I’m doing now.)
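The pipelining payoff itself is easy to see in miniature. Here's a toy sketch (pure illustration, nothing from Suggie itself; `ToyRedis` is an invented stand-in that just counts network round trips rather than talking to a real Redis):

```python
# Toy illustration of why Redis pipelining helps: each unpipelined command
# costs one network round trip, while a pipeline batches N commands into one.

class ToyRedis:
    """Stand-in for a Redis connection that just counts round trips."""
    def __init__(self):
        self.round_trips = 0
        self.store = {}

    def hset(self, key, field, value):
        # One command, one round trip.
        self.round_trips += 1
        self.store.setdefault(key, {})[field] = value

    def pipeline(self, commands):
        # One round trip, regardless of how many commands are batched.
        self.round_trips += 1
        for key, field, value in commands:
            self.store.setdefault(key, {})[field] = value

r = ToyRedis()

# Writing 100 stutters one command at a time costs 100 round trips.
for i in range(100):
    r.hset(f"stutter:{i}", "text", f"s-{i}")
unbatched = r.round_trips

# The same writes through a pipeline cost 1 more round trip.
r.pipeline([(f"stutter:{i}", "text", f"s-{i}") for i in range(100)])
print(unbatched, r.round_trips - unbatched)  # prints "100 1"
```

The latency win is real; the trap described above was bundling that win into an unrelated refactoring rather than doing it as its own step.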
Much of our Redis code is not atomic, which needs to be fixed. I decided I’d also fix that (for this app) at the same time I did everything else. As I write, that seems so obviously stupid that maybe I should find another profession. However, I convinced myself that this new refactoring would fall easily out of the pipeline refactoring (which would fall out of the rearrangement refactoring). In retrospect, I needed to think more carefully about atomicity without assuming that I really understood how it worked in Redis. But, again, I assumed I could learn that as I went.
So I mushed up many different things: renaming, moving code to the right place, introducing more pipelining, and keeping an eye out for atomicity. My brain proved too small to keep track of them. I should have sequenced them.
In addition to all that, I noticed some other things.
I would have done better to spend an hour a day over many days, rather than devoting full days to the refactoring. Because I have a compulsive personality, I must be forced to take time away from a problem to realize exactly how far down a rathole I’ve gone. (Alternately, I need a pair to rein me in.)
I kept all the tests passing, and I kept the system working, but I made a crucial mistake. There was a method called add-personal-X-plus-possible-Y. (The name alone is a clue that something’s gone wrong.) It was 16 lines of if-madness. Instead of modifying it (while keeping the tests passing and keeping the system working), I kept the system working by not changing it. I added a new function that was intended to be a drop-in replacement for it - come the glorious future when everything worked. So there was no connection between “system working” and “tests passing” while I was doing the replacement. The new function could have been completely broken, but the system would keep working, because the new function wasn’t used anywhere outside the tests.
This seems to me a rookie mistake, a variant of “throw it away and rewrite it”. But somehow I allowed myself to gradually slip into that trap.
I suffered a bit from relative inexperience with the full panoply of immutable/functional programming styles. What I’d written was C-style imperative code. Transforming it into object-oriented code would have been straightforward, given my familiarity with various design patterns. Figuring out how to do the equivalent transformation idiomatically in Clojure, given all the constraints I’d placed on myself, took me too long. I only really figured out how to do it after I’d pulled the Eject lever.
Here’s something that’s interesting to me. I spent many years as an independent process consultant. In my spare time, I wrote code. Because that was a part-time thing, I had a lot of leisure to put the code aside and listen to that small, still voice telling me I was going astray.
Things are different now. This real world job has only strengthened my belief in what I preached as a consultant. In particular, I believe that teams must have the discipline to go slow to get fast. And yet: I keep going too fast. These days, it’s markedly harder for me to attend to the small, still voice.
It’s an interesting problem.
Agile2014 is only 6 months away!
That might seem like a long way off, but if you’re submitting a presentation or workshop for the conference, you know that last night was your last chance to propose a session. I know because I finally did it. It took some convincing and discussion with good friends, and a generous agreement by Jimi Fosdick of Fearless Agility to co-present with me, but I did it!
The topic is one I’m really passionate about – agile in an educational setting. I had the privilege of presenting an Agile/Scrum training to a group of high school teachers in October and November of 2013, and the experience was life-changing. The teachers were all Technology teachers, meaning they taught classes like digital media, computer science, web development, networking, etc. The premise was that agile methodologies like Scrum could benefit them in two ways:
1) As a tool when planning and executing their curriculum for the year/semester
2) Providing an agile framework they could teach to their students, to help with the challenges of project-based learning. I.e., the students could apply it to the projects they work on for their classes, since they work in groups/teams
The teachers were amazing. They were engaged, interested, and willing to experiment. I can’t wait to share details of the experiment with others and continue to improve and expand upon the effort.
Jimi, on the other hand, has worked with educational technology for 5 years, and has also been working with education professionals and scholars on finding ways to apply agility (specifically Scrum) in an educational context. His work in this field is a critical part of this presentation – taking it from the what-if to the here’s-how-it-can-be-done.
Now comes the waiting game part. There are some amazing speakers, presenters, and topics that have been submitted for this year’s conference, so no matter what happens, I can’t wait to attend the conference in Orlando and continue to learn, grow, and connect with friends (old and new) along this journey of agility!
Cross your fingers for me!
Here’s our submission:

Agile for Educators
Presenter: Hala Saleh
Co-Presenter: Jimi Fosdick
Track: Research
Session Type: Talk
Audience Level: Practicing
Room Setup: No Preference
Duration: 75 minutes
Keywords: Agile, scrum, students, education, teachers, curriculum

Abstract:
This session is a real-world, where-do-we-go-from-here exploration of applying agile concepts and frameworks in the classroom.
This presentation will cover two main aspects of agile in Education:
1) Applying agile concepts (specifically Scrum) to the planning and execution of educational curricula and teaching plans
2) Teaching educators about agile frameworks (specifically Scrum) and how to apply them in their classrooms for student projects
Hala has hands-on experience with delivering Scrum training to high school teachers and working with them to investigate and coach them on the uses of Scrum with their classrooms. She is excited about the application of Scrum in a non-software context and wants to spread the Agile Manifesto for Education.
Jimi worked in educational technology for 5 years and learned more about education and educators than any non-teacher should ever know. He has been working with his sister (a program director for a school for special needs children) and his uncle (a professor at San Jose State University) on finding ways to apply agility, specifically Scrum, in educational contexts. This presentation represents a theoretical overview of his discoveries and, more importantly, why agility can and should be used outside of software.

Information for Review Team:
The session will be split so that the first half of the time covers research and work that has been done to figure out how agile concepts (specifically Scrum) can be used for the planning and execution of educational curricula and teaching plans.
The second half will cover a real-life scenario in which Scrum was taught to a group of high school teachers, who were then given the assignment of using the framework with their own students. This will conclude with an exploration of the outcomes as reported by the teachers, and a review of artifacts they shared.
A working knowledge of agile concepts and the main methodologies is encouraged. The session covers Scrum, so knowledge of Scrum is helpful.

Learning Outcomes:
- Part I:
- Describe the basic principles of primary, secondary and post secondary educational development projects
- Describe the applicability of agility in an education context
- Build a basic curriculum skeleton using an agile approach
- Part II:
- Experience Report of delivering Scrum training to high school teachers
- Overview of materials presented to teachers and most effective use of time when training
- Overview of format & outcomes of presentation
- Review of teacher feedback and results of check-ins with teachers on student progress and project results
Hala Saleh has presented at local PMI Chapters, as well as co-presented with Jimi Fosdick (Co-presenter) at Agile 2012.
Jimi has presented at most Scrum Gatherings and Agile conferences in the past 5 years.
Marco Abis volunteered to help find a great venue in Turin. The entire organizing committee and I are excited to bring CITCON to Italy for the European leg of the conference this year.
CITCON Turin will round out CITCON’s eighth year around the world as the 21st instance of the conference. A lot has changed in 8 years. A few highlights come to mind:
- Fewer discussions about “What is CI”
- More discussion about “What is CD”
- More discussion about DevOps
- More in-depth Automated Testing discussion
I can remember as far back as 2002, colleagues at ThoughtWorks were discussing how to take CruiseControl and Continuous Integration to the next level. They had already played out the limits of traditional CI. They were looking for ways to take the same principles all the way down the pipe to production. The concept had a few names over the years – CI ++, Enterprise Continuous Integration, Continuous Deployment. Personally, I am grateful to Jez Humble and Dave Farley for getting their book published! Finally we have a name for it – Continuous Delivery.
I think Patrick Debois coined the term “DevOps” when he created DevOps Days after attending CITCON Brussels. A name can say a lot, as we see with Continuous Delivery. People are still trying to figure it all out, as indicated by Jez’s post on the oxymoronic phrase “A DevOps Team”. CITCON provides us professionals with another venue to hash it all out.
Testers (QA Engineers, etc.) are stepping up in droves to integrate their work into CD pipelines. They are pushing the boundaries of platforms like Selenium using services like SauceLabs. They are expanding their natural language approach to testing with Cucumber, etc. Coming from my developer background, I’m excited to see the practice securely taking hold in the testing arena. That’s one of the main reasons for the “T” in CITCON. We felt that CI wasn’t enough. We need the testers to unify around a common vision of keeping the code working. We mean truly working, not just compiling!
So, join us in Turin for CITCON Number 21. Help us keep moving the boundaries of professional software delivery! See you there!
It had been a while since I last facilitated a retrospective, so I wanted to make sure I could deliver, and I prepared properly. I had a few ideas of my own and a few based on short discussions about what the retrospective should address.
The scope was quite large, so I couldn’t use any single technique for gathering information and insights; I had to allow room for a very wide variety of topics.
Without further ado here is the skeleton I used for the retrospective.
- Get a mood reading (max 5min)
- Gather insights (max 10min)
- Decide what to do (max 35min)
- Form an improvement backlog (max 5min)
- End with a different goal in mind (max 5min)
I wanted to know, and I wanted the participants to know, what preconceptions we were working with. I used ESVP (Explorer, Shopper, Vacationer, Prisoner) as the ice breaker. In a closed vote, everyone wrote down a single letter defining their current attitude towards retrospectives. Here are the results:
Then it was time to move forward and start gathering information to support our aims. Due to the number of issues known in advance, I decided to include the last two sprints (we have 2-week sprints) in our time-line exercise. I had sticky notes in two colours: green to signify memorable events and yellow to signify meaningful events. The time-line itself was very simple, with only two dimensions: time and impact. Time runs from left to right and impact from top to bottom: the higher a note was placed, the better the event; the lower, the worse.
In small groups of two or three, people recalled the most important moments and placed them on the whiteboard with respect to both the time axis and the Good-Bad axis.
After we had looked into our near past, it was time to turn history into actionable issues. I had prepared a whiteboard with three columns: Stop Doing, Start Doing and Keep Doing. I told the team to use the information we had just gathered and write sticky notes containing issues that mattered to them. I instructed them to keep the notes secret until they were placed on the whiteboard. The notes were placed one at a time so that we could discuss each and every note immediately.
Ta-daa! We had concrete issues we could address! Next I led the team to find themes within the notes. You can see the notes that address the same issue connected on the image. This proved to be extremely fruitful, and good discussion and analysis followed.
Then it was time to create our improvement backlog. I gave the team 5 minutes to create actions based on the discussions and notes on the three columns.
If you look closely, in the right corner of the image you can see a bunch of red notes. Those notes are the actions the team found. They are prioritized based on importance and effort required. Effort is significant here: very often the actions are such that the team cannot accomplish them alone, so it is very important to also have actions that can be implemented immediately.

Ending the show
We had found really good candidates for the team to improve on and it was time to move on. We’ve had all kinds of retrospectives from good to bad to boring so it was appropriate to give the team a possibility to improve on the retrospectives. So, the last item was retrospective process improvement, what is good and what needs to be changed.
Got some very good feedback, ideas and improvement suggestions :)

Required ingredients?
- Preparations in advance
- Storyline sketched
- Neutral facilitator
- Fearless people
I had prepared the team room in advance, 2 whiteboards and 2 large paper sheets, enough sticky notes and pens/markers. Timer for timekeeping and enforcing time-boxes. Healthy attitude so that I could leave myself out of the retrospective even though I am a team member. Gentle guidance to allow everyone the chance to speak up.
I was pleasantly surprised how well this went and I want to thank my team for excellent attitude and a sincere desire to improve and discuss even the most difficult subjects. There’s definitely improvement ahead!
None of this would have been possible without the help of the excellent book Agile Retrospectives: Making Good Teams Great by Esther Derby and Diana Larsen. I recommend it for everyone interested in retrospectives.
Most programmers know “The Zone”. It is a very focused state of mind in which you feel there is no problem you cannot solve. You know, the state where your surroundings disappear and you feel almost omnipotent. You are very productive, take great leaps and progress fast.
I posit that this is dangerous, for a couple of reasons.

Too tight focus
In my experience it is very hard to make corrective actions while in the zone. The Zone has the same effect as speeding: your sight narrows, and at high enough speed you can see only forward. How do you verify that you are solving the correct problem? Are you really going in the right direction? Are you sure? When in “The Zone”, do you ever stop and wonder, “Is this the right thing to do?” Do you ever question the task at hand? Is this really the most valuable feature? Could there be something that would provide even more value?
The Zone is a state of extreme focus and motivation. The problem is that your focus is very likely too sharp. This leads to blindness to your own errors and makes it really hard to change direction, which in turn leads to false productivity, as the risk of rework grows very fast. At great speed, even the slightest mistake becomes huge almost instantly. Think about a steering error while driving at over 200 km/h.

Susceptible to interruptions
“The Zone” is fragile. Even the slightest disruption (an email, an innocent short question from a colleague) can drop you out of it. Getting back is difficult, so your productivity keeps going up and down. This can be really frustrating and a real motivation killer in the long run, especially when you consider the nature of software development, which emphasizes teamwork and continuous communication.
You may have your own private room you can hide in, but is it smart to shut out others? If everyone is locked up in their rooms, what does it do to your team spirit? Privacy is ok and should be respected, but isolation is not. With people spread out in their rooms, how can you ensure that you are all pulling the same rope in the same direction? Which leads us to the next pitfall.

Silos
Everything that is done alone encourages silos. It is always harder to spread knowledge afterwards. Spend too long on one task and you become the sole source of information: in other words, a single point of failure.
Isolating yourself to ensure an uninterrupted “zone” prevents you from learning from others, gaining information from others and spreading your knowledge to others. Silos prevent sharing the code, and the code wants to be shared. In other words, your feedback loops are too long.

Too long feedback loops
While in “The Zone” you are alone. Very alone. How can you be sure that you are doing the smartest possible thing? Is it the right thing to do? You practice TDD so technically you are on the right path but logically? Have you been derailed from your original goal? When is your next code review? Can you afford to wait that long? Are you sure that your domain knowledge is deep enough? While you worked in the zone and progressed did it actually move you forward in the right direction?
What if you had understood something incorrectly? Depending on the size of your task, this can mean that all you delivered was waste (excluding your own learning). Did you write too much code? Did you try to anticipate the future?

The antidote
You can overcome some of the issues of the zone by ensuring frequent communication with the business and making sure that all tasks are small enough in order to shorten the feedback loop. This still leaves you in a more fragile state than we would want.
But there exists a very effective practice that renders the negative aspects of the zone very close to non-existent.
In the language of psychology, “The Zone” sounds like flow. Flow is described as a state of focus, motivation and productivity. I want to make a semantic distinction between these two states: a negative connotation for “The Zone” and a positive one for “The Flow”.
Pairing gets you into a flow fast and it is easy to stay in the flow!
Pairing gives you enough focus, but it isn’t too tight. It’s like having someone read the map to you while driving to an unknown location for the first time. And it beats GPS and other map utilities hands down. You always have someone there to question your train of thought, helping you take the right exit or turn and to stop when it is time to stop.
While pairing, interruptions don’t cost nearly as much as they do in the zone. The pair can very quickly regroup and maintain their momentum. You can momentarily split up and continue after the interruption has been dealt with.
You can’t hide information while pairing and you can’t become a silo. You will learn, teach and gain information while pairing.
Pairing while programming gives you a feedback loop of seconds. Your thoughts are evaluated as soon as you articulate them. And articulating your problems is the best way to solve them.
Don’t keep zoning; start pairing.
The Science Museum in London is currently showing an exhibition about Alan Turing. Kate and I wandered up there on Saturday. I found the exhibition itself a little superficial - which isn't so surprising given the breadth of material the curators had to draw on between his personal life and death, contributions to computing, the war effort and Bletchley park, and his work on morphogenesis.
But there were two little gems in there which I focused on: the first, one of the tortoises of W. Grey Walter: beautiful and tremendously simplistic devices which exhibit eerily animalistic behaviours. And secondly, a bombsight computer from a Lancaster bomber, on which I was chuffed to discover the manufacturer's mark of Sperry. For it was a Sperry machine which D P Henry used to create his spirograph...
So, down the right hand side of this page you'll see a block, marked "AdSense". That'll be me playing with advertising; I'm interested in learning a bit more about how online advertising works from a practical perspective, and I like learning about stuff by doing stuff.
So there'll be a small ad - or it might be a large ad in future, who knows? - running there for a little while at least. I'll be donating all income from it to the WWF, to one or several of their efforts around preserving big cats.
In the animalia-and-tech link-pile today, an iPhone case which acts as an ECG for pets and cows that SMS the farmer. The latter's particularly poignant, as the Sardinian chap who for many years rented FP their offices approached us to build a similar thing for him.
I'm tapping this out from the gloriously sunny front patio of my uncle's home in Alicante. Just before we departed last week, a little birdie with the face of Ed Moore dropped me an email to let me know that Needz has gone live.
I've been following Needz for a little while, ever since seeing a demonstration of Agora, an early version of the product which they built in collaboration with Vodafone R&D. Needz is an interesting product, I think: Ed and his team have been looking at what a marketplace looks like when it's designed with mobile in mind, as opposed to being transplanted from the desktop web. I like the analogy of classifieds for this: location and convenience might be more important than getting the best price in some situations.
They're not the only people working on this problem, but they have an interesting take around building federated services which let providers run their own versions, which all appear to be the same service to end users.
Last week I promised to put my work from the last year at Sussex University online, in case anyone's interested in it. Here you go, course by course:
- Adaptive Systems: Modelling contextual feedback. I found Bret Victor's "Magic Ink" paper deeply inspiring, and managed to crowbar it into this course by modelling and investigating the feedback mechanisms necessary to provide a contextual UI. Whilst I was chuffed to find myself inadvertently referencing a 1998 paper by Marko Balabanovic, a former client at FP, ultimately I was less happy with this project than any other throughout the last year. I didn't feel I had learned much about the problem domain or adaptivity;
- Advanced Software Engineering: NearMe, a location-based social app for Android. This was an assigned, rather than chosen, project from the first term: a gift, given how many similar things I'd been involved with at FP. I was the team leader for this one and rather enjoyed the shift from being an authority figure at FP to managing without authority; hopefully team-mates Mariana de Rojas-Marao and Alan Donohoe didn't find it too painful. It also gave me a chance to play with an idea I'd had a while back: using hashes of MSISDNs in uploaded address books to find friendships automatically, without raising privacy issues. This was before Path got into trouble for not using such an approach. Source code is here, and a reflective essay here;
- Business and Project Management: a concept note, intended to be used to present a project proposal internally at a large company. I chose to suggest Google do something interesting with sensor data; it is not a suggestion I have any intention of pursuing. I started the course intending to get very involved with a project in this area, but a few things I learned put me off that idea;
- Human-Computer Interaction, for which we did a team presentation and report evaluating an Android game, X-Construction Lite; and an individual design project, in which I attempted to improve the alarm clock. I was a bit disappointed with this one: I played with some ideas, but didn't get to a final design and instead struggled with prototyping complex multi-touch interactions, despite able help from a load of commenters on my blog;
- Pervasive Computing: a literature review following the influence of a classic paper, "Instrumenting the city: developing methods for observing and understanding the digital cityscape", followed by an individual project where I went low-level with sensors, and examined some LEDs and light sensors - with a view to improving the performance of transmission of Morse Code across them. I really enjoyed this, never having played with anything so close to atoms before. Source code for the project lives here;
- Topics in Computer Science introduced us to a series of pet subjects by lecturers, and was where I first came across superoptimisation, the topic of my dissertation. I wrote a literature review of the subject, and then a research plan (which I ended up carrying out);
- Web applications and services (which was really distributed computing under a new name) had us building a fairly dull-as-dishwater J2EE application for managing a stock portfolio. Not exciting, but I can now more vividly empathise with the pain of people who do this sort of thing as a day job I guess;
- Finally, the biggie: my dissertation, Is superoptimisation a viable technique for virtual machines? I'll ruin the surprise for you and shatter Betteridge's Law by revealing that, yes, it is. I am very happy with this: I think I've showed that the approach delivers useful results, by finding versions of several core math functions which are more efficient than those which ship with the JVM, or are produced by the Java compiler. Source code for my implementation lives here, and I'll be doing a talk about the project at the Brighton Java night next week.
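As an aside, the NearMe address-book trick mentioned above (matching contacts by hashes of MSISDNs rather than raw numbers) can be sketched in a few lines. This is a hedged illustration only; the function name and the choice of unsalted SHA-256 are my assumptions, not the project's actual scheme:

```python
import hashlib

def hash_msisdn(msisdn: str) -> str:
    """Hash a normalized phone number so the server never sees it in the clear."""
    return hashlib.sha256(msisdn.encode("utf-8")).hexdigest()

# Each user uploads hashes of their contacts, not the numbers themselves.
alice_contacts = {hash_msisdn(n) for n in ["+447700900001", "+447700900002"]}

# The server can match users against uploaded hashes without storing raw numbers.
bob_number = "+447700900002"
if hash_msisdn(bob_number) in alice_contacts:
    print("Alice knows Bob")  # prints "Alice knows Bob"
```

One caveat worth noting: because the space of valid phone numbers is small, unsalted hashes can be brute-forced, so a production design would need something stronger than plain hashing.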
Missing from the above is Limits of Computation, an enjoyable (and, I found, challenging) course covering computability and complexity theory, taught by the memorable Bernhard Reus but featuring little in the way of reports or generated source code that I can share.
Other stuff: after about 5 years of Not Getting Around To It, I finally bit the bullet and learned a functional programming language, Clojure - practicing it in the Adaptive Systems project and using it again for the dissertation. It's good fun, and I'll probably turn to it for future noodlings. I can recommend it if you'd like to learn a functional language based on the JVM.
So, that's that; I handed in my dissertation today, and my academic career comes to a close tomorrow in the East Slope bar on Falmer campus.
Next month I'm joining Google, where I'll be working as a Product Manager in the London office. I'm super-chuffed; can't wait to start.
It's going to be quite a change. I'll have a boss for the first time in a dozen years, and be working inside in a far larger company than ever before. I'm looking forward to both those things, along with the other classic attractions of Google: a large group of extremely talented colleagues, the opportunity to work at a global scale, and an ambitious breadth of purpose.
Between now and then, I'll be handing in my dissertation and finishing what's been a fantastic year at Sussex University. I've really enjoyed the Master's, even more than I expected to: it's been a chance to refresh and put in some practice in a few familiar places (software engineering, HCI, web apps and services, and business and project management), spend some time revisiting a theoretical side to CS which I've long lacked (computability and complexity theory), and think about a few completely new subjects (adaptive systems and pervasive computing). Plus, working on a brace of (mostly) self-selected projects has been extremely liberating; I've scratched plenty of itches. At some point I plan to post all my project work here, on the off-chance that you're interested or might find it useful.
Have you seen Green Goose? They're doing some beautiful stuff with sensors for consumers: turning everyday activities into games. I love the mundanity of it all: keeping the toilet seat up, brushing your teeth, walking the dog. One of their apps is called "BrushMonkey". It doesn't get any better than this.
They've been around for a little while - there's a story on RWW about them from early 2010, but they seem to have changed tack since then, away from financial monitoring and towards fun'n'games. Here's an interview with their founder from December last year. Green Goose seem to be spreading themselves thinly across many applications: "We’ve got about 50 or so other sensors in development right now that we will fairly quickly release over time".
Their hardware seems similar to Little Printer, in that they have their own gateway (the "station egg") which plugs into the spare port of a hub.
If you're at all interested in this sort of thing, you might want to wander along to the Smart Interconnected Devices Hackathon at Google Campus this coming Saturday: a chance for Android developers to Plug Things Together in interesting configurations...
I never refer to the daily scrum (or daily standup) meeting as a “status meeting.” The term “status meeting” is too pejorative for most of us. For me it conjures images of sitting around a table with each person giving an update to a project manager while everyone else feigns interest, mentally preparing for their own upcoming update or wondering how much longer the meeting will last.
I prefer to think of the daily scrum as a synchronization meeting. Team members are synchronizing their work: Here’s what I did yesterday and what I think I’ll do today. How about you? Done well, a daily scrum (daily standup) meeting will feel energizing. People will leave the meeting enthused about the progress they heard others make. This won’t happen every day for every team member, of course, but if team members dread going to the daily scrum, that is usually a sign of trouble.
I want to offer one of my favorite tips for an effective daily scrum: If you’re a ScrumMaster, don’t make eye contact with someone giving an update. Making eye contact is human nature; when we speak, we make eye contact with someone, so it’s only natural that a team member will look at the ScrumMaster. Call it a legacy of too many years under traditional management, but a lot of people on Scrum teams do look at their ScrumMasters a bit like managers to whom they need to report status. By not making eye contact with someone giving an update, the ScrumMaster can, in a subtle way, prevent each report from becoming a one-way status report to the ScrumMaster.
Each person’s report is, after all, intended for all other team members.
I’ve never been a micro-manager, especially not since using agile and Scrum. I could have turned into a micro-manager early in my career, except I’ve always been too busy to spend my time checking up on people. But, while I’ve avoided checking up on teams or people, I’ve never been reluctant to check in with them. I was recently reminded of this by reading an article about the importance of small wins.
While checking up and checking in may seem similar, there are four key things a good ScrumMaster or agile project manager can do to avoid crossing the line into micro-management while still checking in on a team:
1) Be sure the team has the full autonomy to solve whatever problem they’ve been given. A good ScrumMaster ensures the team is given complete autonomy to self-organize and achieve the goal it’s been given.
2) Don’t just ask team members about their progress; offer them real help. ScrumMasters do this, for example, by protecting the team from outside distractions and removing (or even anticipating) any impediments.
3) Avoid blaming individuals. Things will occasionally go wrong. Assigning blame when that happens will make people feel they are being checked up on rather than just being checked in with.
4) Don’t hoard information. Micromanagers tend to view information as a resource to be retained and only shared when needed. A good ScrumMaster will share anything learned by checking in with others who could benefit from it.
So, stop reading this blog and go check in with your agile team right now. Just don’t check up on them.
I’ve been wondering lately if Scrum is on the verge of getting a new standard meeting: the Backlog Grooming Meeting, which an increasing number of teams are holding each sprint to make sure the product backlog is prepared and ready for the start of the next sprint.
To see why a Backlog Grooming Meeting may be a few years away from becoming a Generally Accepted Scrum Practice, or what I call a GASP, let’s revisit the early 2000s.
Back then, Scrum didn’t have a formal Sprint Retrospective Meeting. Prevailing wisdom at the time was, in fact, fairly opposed to such a meeting. The logic was that a good Scrum team should be always on the lookout for opportunities to improve; they should not defer opportunities to discuss improvement to the end of a sprint.
That argument was a very good one. However, what was happening on teams was that day-to-day urgencies took precedence, and opportunities to improve often went either unnoticed or unacted upon. And so what most teams eventually realized was that of course we should improve any time we notice an opportunity, but at a minimum each team should set aside a dedicated time each sprint for doing so, and thus the retrospective became a standard part of Scrum. This was helped along tremendously by the great book, Agile Retrospectives: Making Good Teams Great, by Esther Derby and Diana Larsen.
I’ve had more CSM course attendees recently asking questions about a Backlog Grooming Meeting as though it were a GASP. Many are surprised when I tell them that not every Scrum team has such a meeting each sprint. I still don’t advocate that every team conduct a Backlog Grooming Meeting each sprint: as with the early arguments against retrospectives, I’d prefer backlog grooming to happen in a more continuous, as-needed way. But so many teams are successfully using a Backlog Grooming Meeting that arguments against it may be on their last gasps.
Share what you think below. Will a Product Backlog Grooming meeting become so common it becomes a Generally Accepted Scrum Practice (GASP)?