I was playing around with Docker locally and somehow ended up with this error when I tried to list my docker machines:
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           Unknown   Unable to query docker version: Get https://192.168.99.101:2376/v1.15/version: x509: certificate is valid for 192.168.99.100, not 192.168.99.101
My Google-fu was weak and I couldn’t find any suggestions for what this might mean, so I tried shutting it down and starting it again!
On the restart I actually got some helpful advice:
$ docker-machine stop
Stopping "default"...
Machine "default" was stopped.
$ docker-machine start
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
So I tried that:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.101:2376": x509: certificate is valid for 192.168.99.100, not 192.168.99.101
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
And then I regenerated my certificates:
$ docker-machine regenerate-certs
Regenerate TLS machine certs?  Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
And now everything is happy again!
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.9.0
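The root cause is worth understanding: the TLS certificate docker-machine generates embeds the VM’s IP address, so when VirtualBox hands the VM a new IP on restart, the old certificate no longer matches. You can see the mechanism with a throwaway certificate (a sketch using plain openssl; the /tmp paths are arbitrary, and the identity goes in the CN for simplicity even though modern TLS stacks check the Subject Alternative Name):

```shell
# Create a throwaway self-signed cert "valid for" 192.168.99.100 only,
# much like the cert docker-machine originally generated for the old IP.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/key.pem -out /tmp/cert.pem -days 1 \
  -subj "/CN=192.168.99.100"

# Inspect the identity baked into the cert. It names only the old IP,
# which is why a client connecting to 192.168.99.101 fails the x509 check.
openssl x509 -in /tmp/cert.pem -noout -subject
```

Regenerating the certs, as above, simply re-embeds the machine’s current IP.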
It’s my opinion, and I think the opinion of the authors of Scrum, that a Scrum team must be collocated. A collection of geographically distributed staff is NOT a Scrum team.
If you work in a “distributed team”, please consider the following question.
Do the members of this group have authority to decide (if they wanted to) to relocate and work in the same physical space?
- If you answer “Yes” with regard to your coworkers: then I’d encourage you to advise your colleagues toward collocating, even if only as an experiment for a few Sprints, so they can decide for themselves whether to remain remote.
- If you answer “No”, the members do not have authority to decide to relocate:
- then clearly it is not a self-organizing team;
- clearly there are others in the organization telling those members how to perform their work;
- and clearly they have dependencies upon others who hold authority (probably budgets as well) which have imposed constraints upon communication between team members.
- CLEARLY, THEREFORE, IT IS NOT A SCRUM TEAM.
The post A Group of Geographically Distributed Staff is NOT a Scrum Team appeared first on Agile Advice.
Stats from Agile Complexification Inverter blog site
Well the stats are just one insignificant measure of what one gets from writing about their experience.
The more meaningful measures have been seeing some of these articles and resources put into practice by other colleagues, and the discussions (offline, and sometimes in comments or on Twitter) with readers that have required me to refine my thinking and how I communicate it. Interestingly, sometimes seeing a resource you have created being "borrowed" and used in another person's or company's artifact without attribution is both rewarding and a bit infuriating. I like that the concept has resonated with someone else, and that they have gone to the trouble of borrowing it, then repeating, improving, or repurposing it.
Let me borrow someone else's concept: "The bad artists imitate, the great artists steal." -- Banksy
Most of all, the collection of articles is a repository of resources that I do not need to carry around in my 3-4 lbs of white and grey matter. I can off-load the storage of concepts, research pointers, and questions to semi-permanent storage. This is a great benefit.
Sometimes, that works quite well. You have deliverables, and everyone understands the order in which you need to deliver them. You use agile because you can receive feedback about the work as you proceed.
You might make small adjustments, and you manage to stay on track with the work. In fact, you often complete what you thought you could complete in a quarter. (Congratulations to you!)
I rarely meet teams like that.
Instead, I meet and work with teams who discover something in the first or second iteration that means the entire rest of the quarter is suspect. As they proceed through those first few features/deliverables, they, including the PO, realize they don’t know what they thought they knew. They discovered something important.
Sometimes, the managers in the organization realize they want this team to work on a different project sometime in the quarter. Or, they want the team to alternate features (in flow) or projects (in iterations) so the managers can re-assess the project portfolio. Or, something occurs outside the organization and the managers need different deliverables.
If you’re like me, you then view all the planning you did for the rest of the quarter as waste. I don’t want to spend time planning for work I’m not going to do. I might need to know something about where the product is headed, but I don’t want to write stories or refine backlogs or even estimate work I’m not going to do.
If you are like me, we have an alternative: rolling-wave, deliverable-based planning, where plans become less specific the further out they are.
In this one-quarter roadmap example, you can see how the teams completed the first iteration. That completion changes the color from pink to white. Notice how the last month of the quarter is grayed out. That’s what we think will happen, and we’re not sure.
We only have specific plans for two iterations. As the team completes this iteration, the PO and the team will refine and plan what goes into the third iteration from now (the end of the second month). As the team completes work, the PO (and the PO Value team) can reassess what should go into the last part of this quarter and the final month.
If you work in flow, it’s the same idea if you keep your demos on a cadence.
This is exactly what happened with a team I’m working with. They tried to plan for a quarter at a time. And, often, it was the PO who needed to change things partway through the quarter. Or, the PO Value team realized they did not have a perfect crystal ball and needed to change the order of the features partway through the quarter.
They tried to move to two-month horizons, and that didn’t help. They moved to one-month horizons, and almost always changed the contents for the last half of the second month. In the example above, notice how the Text Transfer work moved farther out, and the secure login work moved closer in.
You might have the same kind of problem. If so, don’t plan details for the quarter. Plan details as far out as you can see, and that might be only one or two iterations in duration. Then, take the principles of what you want (next part of the engine, or next part of search, or whatever it is that you need) and plan the details just in time.
Rolling wave deliverable-based planning works for agile. In fact, you might think it’s the way agile should work.
If you like this approach to roadmapping, please join my Practical Product Owner workshop. All the details are on that page.
On June 6th, 1944, D-Day, the largest seaborne invasion in history began the liberation of German-occupied northwestern Europe. 156,000 soldiers landed on the beaches of Normandy or were air-dropped behind German lines. The battle could not have been won without extensive planning, but it did not go according to plan. Many paratroopers landed far from their targets, as did landing craft. German defenses were stronger than forecast in some areas and weaker in others.
What made the battle successful was not the plan, but a combination of the knowledge gained in planning, plus the initiative taken by soldiers and units to adjust as reality emerged. This is captured in Eisenhower's famous line that plans are worthless, but planning is everything.
What we need is better planning, not better plans
I'm often asked "how much should we plan?". The answer is always necessarily vague. We plan differently for things that are certain and for things that are uncertain. Planning for things that are certain ensures that we are focused on a shared goal. Planning for uncertainty results in a prioritized list of experiments that will remove uncertainty so we can make better decisions in the future.
Examples of decisions best made earlier:
- Deciding the genre of game.
- Deciding what engine to use.
- Knowing what constraints a franchise has.
Examples of decisions best made later:
- Deciding how many bullets each weapon carries in a magazine.
- Deciding how many NPCs are in each view.
Deciding earlier or later depends on the cost of making that decision. The phrase "deciding at the last possible moment" applies. You shouldn't decide how many NPCs should be in view until your graphics engine is running well enough to tell you how much it can draw at 30 FPS. Conversely, you don't want to decide which engine to use a month before you release your game.
An illustration
I'm a fan of Donald Reinertsen's work. One of the tools he applies is called the U-curve. The U-curve illustrates the tradeoff between two opposing factors, such as the cost of planning too little vs. planning too much, as the sum of those costs:
The graph shows the cost of planning based on how much of it we do (the red curve). This curve is based on two components: How much planning we do on uncertain things and how much planning we do for things we are certain about.
The green curve shows the cost of planning away uncertainty with detailed design up-front. As we do more up-front planning--making more and more decisions about the details of our game--the cost of those early decisions adds up. For example, when we've designed and built numerous gameplay missions based on speculated numbers and behaviors of NPCs, the cost of reversing those decisions late in development increases.
The blue curve shows the costs of planning things we are (or become) certain about. If we don't make decisions about the engine, the game genre, or even what programming language to use, the cost of making those decisions later will impact us more. We can make lots of decisions early: NFL games have 32 teams and stadiums. FPS games have online multiplayer. Mobile games should at least run on iOS and/or Android.
The red curve is the sum of those cost curves, and the sweet spot is where the total cost is lowest. So, getting back to the "how much should we plan?" question, the answer is "where the combined cost of planning and iteration is lowest". This depends on the uncertainty of the various parts of the game; each has a different cost curve. Determining that starts with an honest assessment of the risks and uncertainty of your game, and a willingness to take steps into the unknown in places you may previously have only felt comfortable trying to plan away.
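In my own notation (not Reinertsen's), the tradeoff can be written as a simple minimisation: let p be the amount of up-front planning, D(p) the rising cost of premature detailed decisions (the green curve), and U(p) the falling cost of leaving certain decisions unmade (the blue curve). Then:

```latex
C(p) = D(p) + U(p), \qquad p^{*} = \operatorname*{arg\,min}_{p} C(p)
```

At the sweet spot, C'(p*) = 0, i.e. D'(p*) = -U'(p*): plan up to the point where the marginal cost of locking in one more uncertain decision equals the marginal saving from settling one more certain one.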
A few nights ago we were watching The Voice. Every few minutes, both audio and video would pause for several seconds. The inability to hear feedback from the coaches, or to understand the context of their comments because we couldn’t hear the full song being performed, was very frustrating. We eventually just stopped watching.
This made me think about challenges when delivery teams do not have a clear backlog. I often work with organizations that may have a number of agile delivery teams but they are unable to provide a context for the work to be done and even worse, provide clear detail for backlog items.
Often the product owner function isn’t scalable in these organizations, which creates a constraint. It has been said that the number one reason agile teams fail is the lack of a backlog. I would agree. The other observation is that even though they know they can’t provide clarity in the backlog, they continue to ask those teams to deliver working, tested software: “we have these teams so we have to keep them busy!”
The number one reason agile teams fail is the lack of a backlog.
It is fascinating how often this is seen. I worked with an organization that had a number of delivery teams but could not provide sufficient requirements detail for those teams. Regardless, they continued to have those teams deliver working, tested software. What they found was that 50% of the work being done had to be reworked, and that there were numerous quality issues. The teams were frustrated as well. Does that seem like an effective approach?
I would argue they would be better off reducing the number of delivery teams and using that capacity to build out a strong product owner team. The product owner team will then be focused on providing clarity in the backlog. This enables the delivery teams to have context and the details necessary for them to deliver working tested software with dramatically reduced rework, improvements in team engagement, and value delivered with higher quality.
“Philips is continuously driving to develop high-quality software in a predictable, fast and Agile way. SAFe addresses this primary goal, as well as offering these further benefits: reduced time to market and improved quality, stronger alignment across geographically distributed multi-disciplinary teams, and collaboration across teams to deliver meaningful value to customers with reduced cycle time.”
—Sundaresan Jagadeesan, Program Manager, Philips Electronic India Limited
How do you improve quality and reduce release cycle time by two-thirds? For a $26 billion technology giant, the answer was found with SAFe. Our latest case study comes from Royal Philips, which engages in the healthcare, lighting, and consumer well-being markets. They sit at #388 on the Forbes Global 2000 list, and their SAFe adoption represents one of the larger deployments we’ve seen to date: 42 Agile Release Trains and 1300+ certified practitioners.
Just two years ago, their release cycle time averaged 18 months using a traditional, project-based approach. Looking for a way to accelerate that cycle, they turned to SAFe to transition to Agile and bring a Lean-Agile mindset to business units beyond software development.
Today, Philips has 42 ARTs running across various business units, and SAFe is deployed well beyond the software businesses to include complex systems environments (hardware, software, mechanical engineering, customer support and electrical teams), as well as the R&D activities of a number of businesses, particularly in the Business Group, Healthcare Informatics, Solutions & Services (BG HISS).
The company saw improvements across the board:
- Average release cycle time down from 18 months to 6 months
- A greater focus on the customer mindset
- Feature cycle time reduced from >240 to <100 days
- Sprint and PI deliveries on time, leading to ‘release on demand’
- Quality improvements—zero regressions in some business units
- 5 major releases per train per year on demand, each catering to multiple products
- 3700+ people engaged in a SAFe way of working
- Around 1300+ trained and formally certified in Agile and SAFe
The Philips adoption showcases what is possible when an organization is fully committed to the transformation, and diligent in getting their leaders and workers trained in the principles and practices that will drive their collective success. It’s no secret that the results achievable through SAFe directly correlate to the depth of engagement by the organization, and Philips has done an excellent job reminding us of this.
Check out the full case study for insights on their journey and their recommendations for a smooth transformation.
Many thanks to Stef Zaicsek, Sr. Director Software Technology/CTO; Rani Malli, Senior Director – Business Excellence & Quality Management; Sanne Reijniers, Agile Training, Change & Communications Lead at Philips; and Sundaresan Jagadeesan, Program Manager – I2M Excellence SW Development Program, for sharing their SAFe story.
This post is an unapologetic riff on Richard Rumelt’s book Good Strategy/Bad Strategy: The Difference and Why It Matters. The book is a wonderful analysis of what makes a good strategy and how successful organisations use strategy effectively. I found that it reinforced my notion that Agility is a Strategy and so this is also a way to help me organise my thoughts about that from the book.
Good and Bad Agile
Rumelt describes Bad Strategy as having four major hallmarks:
- Fluff – meaningless words or platitudes.
- Failure to face the challenge – not knowing what the problem or opportunity being faced is.
- Mistaking goals for strategy – simply stating ambitions or wishful thinking.
- Bad strategy objectives – big buckets which provide no focus and can be used to justify anything (otherwise known as “strategic horoscopes”).
These hallmarks can also describe Bad Agile. For example, when Agile is just used for the sake of it (Agile is the fluff). Or when Agile is just used to do “the wrong thing righter” (failing to face the challenge). Or when Agile is just used to “improve performance” (mistaking goals for strategy). Or when Agile is just part of a variety of initiatives (bad strategy objectives).
Rumelt goes on to describe a Good Strategy as having a kernel with three elements:
- Diagnosis – understanding the critical challenge or opportunity being faced.
- Guiding policy – the approach to addressing the challenge or opportunity.
- Coherent actions – the work to implement the guiding policy.
Again, I believe this kernel can help identify Good Agile. When Agile works well, it should be easy to answer the following questions:
- What diagnosis is Agile addressing for you? What is the critical challenge or opportunity you are facing?
- What guiding policy does Agile relate to? How does it help you decide what you should or shouldn’t do?
- What coherent actions are you taking that are Agile? How are they coordinated to support the strategy?
Rumelt suggests that
“a good strategy works by harnessing power and applying it where it will have the greatest effect”.
He goes on to describe nine of these powers (although they are not limited to these nine) and it’s worth considering how Agile can enable them.
- Leverage – the anticipation of what is most pivotal and concentrating effort. Good Agile will focus on identifying and implementing the smallest change (e.g. MVPs) which will result in the largest gains.
- Proximate objectives – something close enough to be achievable. Good Agile will help identify clear, small, incremental and iterative releases which can be easily delivered by the organisation.
- Chain-link systems – systems where performance is limited by the weakest link. Good Agile will address the constraint in the organisation. Understanding chain-link systems is effectively the same as applying Theory of Constraints.
- Design – how all the elements of an organisation and its strategy fit together and are co-ordinated to support each other. Good Agile will be part of a larger design, or value stream, and not simply a local team optimisation. Using design is effectively the same as applying Systems Thinking.
- Focus – concentrating effort on achieving a breakthrough for a single goal. Good Agile limits work in process in order to help concentrate effort on that single goal to create the breakthrough.
- Growth – the outcome of growing demand for special capabilities, superior products and skills. Good Agile helps build both the people and products which will result in growth.
- Advantage – the unique differences and asymmetries which can be exploited to increase value. Good Agile helps exploit, protect or increase demand to gain a competitive advantage. In fact Good Agile can itself be an advantage.
- Dynamics – anticipating and riding a wave of change. Good Agile helps explore new and different changes and opportunities, and then exploits them.
- Inertia and Entropy – the resistance to change, and decline into disorder. Good Agile helps organisations overcome their own inertia and entropy, and take advantage of competitors’ inertia and entropy. In effect, having less inertia and entropy than your competition means having a tighter OODA loop.
In general, we can say that Good Agile “works by harnessing power and applying it where it will have the greatest effect”, and it should be possible to answer the following question:
- What sources of power is your strategy harnessing, and how does Agile help apply it?
Rumelt concludes with some thoughts on creating strategy, and what he suggests is
“the most useful shift in viewpoint: thinking about your own thinking”.
He describes this shift from the following perspectives:
- The Science of Strategy – strategy as a scientific hypothesis rather than a predictable plan.
- Using Your Head – expanding the scanning horizon for ideas rather than settling on the first idea.
- Keeping Your Head – using independent judgement to decide the best approach rather than following the crowd.
This is where I see a connection between Good Strategy and Strategy Deployment, which is an approach to testing hypotheses (science as strategy), deliberately exploring multiple options (using your head), and discovering an appropriate, contextual solution (keeping your head).
In summary, Good Agile is deployed strategically by being part of a kernel, with a diagnosis of the critical problem or opportunity being faced, guiding policy which harnesses a source of power, and coherent actions that are evolved through experimenting as opposed to being instantiated by copying.
Some interesting SAFe applications:
At the US FAA:
and at Capital One bank:
Stay SAFe! (while flying or banking, or both at the same time)
–Dean and the Framework team
As I’ve been releasing my screencasts on learning Docker and working with Node in Docker, along with my (FREE!) cheatsheets for Docker, I’ve been getting a pretty regular stream of questions from people.
From these questions, I’ve been able to improve my own use of Docker while helping others. I’ve also had this notion that I need to write a small ebook to consolidate some of this knowledge.
What I really wanted to do was take the lessons learned, configuration ideas and solutions to problems that people run into, and provide almost a copy-and-paste solution set… maybe a github repository to go with it, and some sample code and configuration.
And I’ve finally settled on a way to bring it all together: an ebook and collection of code, with recipes for solving problems surrounding development of Node.js applications in Docker.
The Docker Recipes for Node.js Development ebook
My goal with this ebook, having just barely started the outline and writing, is to provide a set of solutions for common problems, as I said above.
The gist of each chapter will be a problem, solution description and “recipe” that you can easily follow for the solution.
But I want to take this a step further, as I said, and also provide these solutions as code and Dockerfile samples.
The end result should be simple copy & paste, “getting started” or “how-to” style examples from which you can build your applications and development processes.
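To give a flavour of what I mean, here’s a sketch of one such recipe (the base image tag, port, and file locations here are illustrative assumptions, not final book content): a Dockerfile for Node.js development that installs dependencies in their own layer, so rebuilds after a source change don’t re-run `npm install`.

```shell
# Write a minimal Node.js development Dockerfile.
# (Written to /tmp for illustration; normally it lives in your project root.)
cat > /tmp/Dockerfile <<'EOF'
FROM node:4

WORKDIR /usr/src/app

# Copy package.json alone first, so the npm install layer stays cached
# until the dependency list actually changes.
COPY package.json ./
RUN npm install

# Then copy the rest of the application source.
COPY . .

EXPOSE 3000
CMD ["npm", "start"]
EOF

# Build and run it (requires a running Docker daemon):
#   docker build -t my-node-app -f /tmp/Dockerfile .
#   docker run -p 3000:3000 my-node-app
```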
I’ve started working on the outline and the first example recipe, but I’m not quite ready to make it available yet.
I need your help to make this book happen
Typically when I write an ebook or cheatsheet, I get a good chunk of it done and then do early release sales through Leanpub.
This has worked for me in the past, but I want to do something a little different in this case.
I need to make sure this book covers the real problems and questions that you’re running into with Docker and Node.js development.
With that in mind, I need your help – your questions, your problems, and your feedback to show me the right direction for this book.
In January, I’m going to do a pre-sale for the ebook and code samples.
I’ll likely have 1 chapter written when it starts, and you can see above that I already have a book cover created.
From there, the purpose of the pre-sale is to get you involved.
You’ll have an opportunity to not only pick up the book at a significant discount compared to the final price, but also to help shape the direction of the book as I’m writing it.
I don’t have all the details ironed out yet, but I’ll have some kind of setup for receiving feedback, asking for input, getting code samples and early chapters to you, etc.
It’s all coming soon, but I’m not quite ready to roll out the pre-sale just yet. So stay tuned to the blog here, join the mailing list below, and watch for more information about the ebook and how you can be involved in shaping the content!
The post Coming Soon: Docker Recipes for Node.js Development ebook appeared first on DerickBailey.com.
This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen
Remain humble. Don’t worry about who receives the credit. Never let power or authority go to your head.
– Dick Winters, Beyond Band of Brothers
About fifteen years into my career, I thought of myself as a strong manager. I had progressed up the ranks and was responsible for an entire telecom equipment manufacturing facility, leveraging Lean with a great group of people.
Arrogance and ego have ruined many a leader. Unfortunately, these characteristics are still accepted today, although they shouldn’t be. If your goal is to optimize the value of your people, thereby improving the value of your organization, then you need to support and nurture them. You have to admit that you cannot know or control everything that happens and be humble enough to trust others to do their jobs.
Humility means accepting that you’re human and that you have faults, vulnerabilities, and worries like everyone else. Humble leaders are actually more confident than ego-driven leaders, as they are secure enough to show and admit their vulnerabilities and even mistakes. They are open to alternative ideas because they know, understand, and respect that they don’t have all the answers. (Examples of successful CEOs who take a more humble approach include Jeff Bezos of Amazon and Tony Hsieh of Zappos.) This creates confidence, and thereby motivation, within the organization. Humble leaders let people do their jobs, aren’t afraid to ask stupid questions, turn mistakes into learning and mentoring opportunities, encourage dissent and embrace opinions and methods different from their own, and forego the trappings of power.
Years ago, when I was named president of the medical device company I ended up leading for eight years, my very first action—within the first hour of starting the job—was to remove the “Reserved for President” parking spot. Later, I removed the custom furniture from my office, and when we built a new building, I ditched the private bathroom. These were small actions in the grand scheme of things, but they sent a message to the company that I was no better than others who worked there. The approach paid dividends several years later, when I needed considerable time off and flexibility to deal with a family medical situation. I was open with my team about what was going on and I received incredible support, understanding, and compassion from them.
Do you display arrogance or ego at home or in the work place? Would your family or coworkers agree? What would happen if you made your vulnerabilities and shortcomings more visible? How would your peers, team, and family react?
I wanted to respond earlier, but tweets were too restrictive. Here’s my response.
The argument against Tech Leads
The article rebuts the necessity for a Tech Lead with the following points (emphasis author’s, not mine):
- Well functioning teams in which people share responsibilities are not rare.
- When a team is not functioning well, assigning a tech lead can potentially make it worse.
There are many great points in the article. Some of them I support, such as sharing responsibilities (also known as effective delegation). Distributing responsibilities can be one way effective teams work. Other points lack essential context, such as the title (it depends), while others lack concrete answers, such as how to turn a dysfunctional team into a high-performing team.
Are well-functioning teams rare?
I’ve worked with at least 30 organisations over my career as a consultant, and countless teams, both as a team member (sometimes Tech Lead) and as an observer. I have seen the whole spectrum – from teams who function like a single person/unit to teams with people who simply tolerate sitting next to each other, and where one can’t miss the passive-aggressive behaviours or snide remarks.
The article claims:
that the “tech lead is a workaround – not a root cause solution” and that “tech leads could alleviate the consequences only”.
Unfortunately the article doesn’t explain how or why the tech lead is a workaround, nor how tech leads alleviate just the consequences.
The article gathered some discussion on Hackernews, and I found some comments particularly interesting.
Let’s take a sample:
- (gohrt) Trusting that a pair of engineers will always come to an agreement to authoritatively decide the best way forward seems naive to me. Where are these magical people?
- (vidhar) …we live in reality where lots of teams are not well-functioning some or all of the time, and we still need to get things done even when we don’t have the time, resources or influence to fix the team composition then and there.
- (ep103) If I had an entire team of my great engineers, my job would be easy. I’d simply delegate my duties to everyone else, and we’d all be nearly equal. I’m jealous of people who work in a shop where the teams are so well constructed, that they think you can get rid of the tech lead role.
- (shandor) My experience with other developers is that there is a surprisingly large dev population who would absolutely abhorred if they had to touch any of those things (EDIT: i.e. tech lead responsibilities)
- (doctor_fact) I have worked on teams of highly competent developers where there was no tech lead. They failed badly…
- (mattsmith321) It’s been a while since I have worked with a lot of talented, like-minded people that were all capable of making good technical decisions.
- (jt2190) I’ve been on more that one team where no leadership emerged, and in fact, leadership type behavior was passively resisted… These teams (if they can be called that) produced software that had little to no overall design.
Do these sound like well-functioning teams to you? They don’t to me.
Well-functioning teams do exist. However it is clear that not all teams are well-functioning. In my experience, I would even say that really well-functioning teams are less common than dysfunctional, or just functioning teams. For me, the comments are proof enough that well-functioning teams are not everywhere.
It is actually irrelevant if well-performing teams are rare – there are teams that definitely need help! Which leads to the question…
Does assigning a tech lead to a poorly functioning team make it worse?
In my talk, What I wish I knew as a first time Tech Lead, I explain how acts of leadership are amplifiers (can be good or bad). Therefore assigning a bad tech lead to a poorly functioning team will probably make it worse. However I don’t think organisations set out to give teams bad tech leads.
If a team is functioning poorly, what do organisations do? Simply leave the team to stew in its own juices until things are resolved? That’s one option. Doing nothing is a gamble – you depend on someone in the team to take an act of leadership, but the question is: will they? I’ve seen many teams never resolve the very issues that make them function poorly without some form of external intervention or assistance.
Most organisations try to solve this by introducing a role with some authority. It doesn’t necessarily need to be a Tech Lead, but when the core issues are technical in nature, a good Tech Lead can help. A good leader will seek out the core issues that prevent good teamwork, and use their role to find ways to move the team towards being well-functioning. Sometimes this means calling meetings – even if the team does not want meetings – to reach an agreement about how the team handles certain situations, tasks or responsibilities. A good outcome might be an agreed Team Charter, or some clarity about who in the team is responsible for what. A team may end up with a model that looks like they do not need a Tech Lead, but it takes an act of leadership to make that happen.
The wrong analysis?
The article suggests that a full-time Tech Lead introduces risks such as a lack of collective code ownership, decision-making bottlenecks, a bus factor of one, and reduced motivation. I have seen teams both with and without Tech Leads suffer from these issues. In my experience, teams without a Tech Lead tend to have more knowledge silos, no cohesive view of the system, and less collective code ownership, because there is little motivation to optimise for the group and individuals end up optimising for themselves.
These issues are not caused by whether or not teams have a Tech Lead. Rather, they are caused by a lack of technical leadership (the behaviour). The Tech Lead role is not a prerequisite for technical leadership. I have seen teams where strong, passionate individuals speak up, bring the team together and address these issues – acts of leadership. I have also seen dysfunctional teams sit on their hands because individual (job) safety is an issue, and these issues go unaddressed.
My conclusion
The article misses the subtle but important point of good technical leadership. A good leader and Tech Lead is not trying to own all of the responsibilities – they are there to make sure the responsibilities are fulfilled. There is nothing worse than expecting that everyone is responsible for a task, only to find that no one is.
“The greatest leader is not necessarily the one who does the greatest things. (They) are the one that gets the people to do the greatest things.” – Ronald Reagan
The extent to which individuals in a team can own these responsibilities is a function of the individuals’ interests, skills and experience. It depends!
Asking whether or not teams need a Tech Lead is the wrong question. Better questions to ask include: what’s the best way to make sure all of the Tech Lead responsibilities are fulfilled, and what style of leadership does this team need right now?
I recently upgraded my Mac to the latest OS, only to find that my ssh command wasn’t working:
.ssh/config: line 18: Bad configuration option: useroaming
.ssh/config: terminating, 1 bad configuration options
This turned out to be because of the UseRoaming entry I had added to my .ssh/config file in response to a previous SSH vulnerability. That vulnerability has since been fixed, and newer versions of OpenSSH no longer recognise the option, so it can simply be removed: https://www.solved.tips/sshconfig-line-7-bad-configuration-option-useroaming-macos-10-12-sierra/
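If you hit the same error, the fix is just to find and delete the offending line. Here is a minimal sketch of doing that with grep and sed; to keep it safe to run as-is, it operates on a temporary file with made-up sample contents rather than your real ~/.ssh/config.

```shell
# Sketch: remove an obsolete UseRoaming entry from an ssh config file.
# CONFIG would normally be "$HOME/.ssh/config"; a temp copy with sample
# contents is used here so the example is safe to run.
CONFIG=$(mktemp)
printf 'Host *\n  UseRoaming no\n  ServerAliveInterval 60\n' > "$CONFIG"

# Show any offending lines with their line numbers
grep -n 'UseRoaming' "$CONFIG"

# Delete them; -i.bak keeps a backup and works with both GNU and BSD sed
sed -i.bak '/UseRoaming/d' "$CONFIG"
```

The `-i.bak` form is the portable spelling of in-place editing: GNU sed and the BSD sed that ships with macOS disagree on whether `-i` takes a mandatory argument, but both accept an attached backup suffix.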
Seeing an organizational change map with seven to eight proposed major changes can feel daunting and discouraging. It’s even worse when we realize that list will keep growing as new organizational problems are discovered.
By taking on only one or two changes at a time, and creating and acknowledging the visible improvements that result, we avoid becoming so overwhelmed that we throw in the towel before significant change can be effected.
Our goal in the first few Sprints is primarily to create a habit of change, even more than the changes themselves. We want the idea of regular small changes to become the norm. The Organizational Improvement Team needs small wins. They need to create a sense of momentum and to prove to everyone that meaningful change is happening, so trust and support can grow.
In the previous post’s Case Study, the strategic items – “0 Net New Bugs at the End of Every Sprint” and “Find a More Effective Performance Review Process” – are too big to be achieved in one or two Sprints, so they’re broken down into smaller visible steps. When breaking these down into smaller parts, it’s important to incorporate the element of Product Backlog/Organizational Queue Refinement and include some of the doers (e.g. QA, BA, Developers) who will be affected by the change. We need their input to ensure that the changes deliver value to them and are on point with what we’re trying to improve.
Scrum Development Teams often use User Stories to help them articulate the need and the value of a new piece of functionality, so let’s apply the same principles to this Organizational Improvement that we’re trying to accomplish.
Robin Dymond uses Improvement Stories at the team level so that teams have a tool to inject improvement into every Sprint. We will use the same approach at the organizational level. Like User Stories, Improvement Stories have: Why, What, Who and Acceptance Criteria.
Start with why – why would the organization benefit from a change? Time savings? Quality improvements? Repeatability? Reduction in conflict or stress? Happier team members? By stating the “why” first, we don’t presuppose the solution. Instead, we focus on the problem.
Improvement Story Templates
There are two different templates you can try for creating Improvement Stories.
“Template B – Start with Why” emphasizes the importance of why more than the standard template.
Template A is likely more familiar to most people who are using User Stories.
User Stories invite conversation, focus on the needs of the end user, have a clearly stated value, are small enough to be manageable, and can be implemented individually. They help us break big challenges down into smaller, more manageable ones, and execute small changes.
Will small changes solve all of the organization’s goals quickly?
Will they foster positive momentum, an environment of trust, and habits that welcome improvements?
For the next few Sprints, the WorldsSmallestOnlineBookStore Organization Improvement Team selects two areas to improve: “0 Net New Bugs to Escape a Sprint” and “Find a more effective Performance Review Strategy”. They hand the first directly to the Development Teams and ask them to create a plan to implement it, while the second is taken on directly by the Improvement Team itself.
“0 Net New Bugs to Escape a Sprint” – the Development Teams meet and come up with the following list of things to try:
- Start testing partially-completed Stories
- All testing involves collaboration between the team members who built the story and the tester
- Eliminate pressure on the Development Team to push more features out faster
- Try using the Specification By Example/BDD approach as a tool to foster greater understanding between QA, BA and Development.
For “Eliminate pressure on the Development Team to push more features out faster”, the Development Teams communicate with the Organization Improvement Team to make the need clear.
Meanwhile, the Org Improvement Team starts breaking down the Annual Performance Review:
- Eliminate Stack Ranking
- Test informal feedback process through bi-weekly one-on-ones
- Look into Performance Review alternatives
- Find options for Executive Coaching on Effective Leadership Mindset
Both groups agree to run these initial experiments for a period of six weeks (or three Sprints). They will recheck the results at the end of those cycles.
Image attribution: Agile Pain Relief Consulting
Template images © Robin Dymond
This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen
Amateurs sit and wait for inspiration, the rest of us just get up and go to work.
– Stephen King
Similar to the Hour of Power is the concept that certain parts of the day are more productive than others. Once again we’re all different in this respect, and it’s why some people are “morning people” while others prefer the evenings. I once even had a star employee who worked best from one to three in the morning, and insisted on being in the office at that time. (I won’t divert our discussion by describing the other problems that caused.)
I’ve long known that the most productive time of my day is in the mornings, especially the early mornings. I almost never set an alarm, but am generally awake at four. I take care of my morning meditation, breakfast, and reading the paper, and then start the Hour of Power. After the Hour of Power, I usually hit the gym for an hour of strength or cardio work. After a quick shower, I head to the office, attend our morning team video call (our version of the “standup meeting” I’ll describe later), and then get back into my productive time. I’ll use the pomodoro method to optimize the use of that time, which lasts until 11:30 a.m. or so.
While I find that my mornings are usually very productive, afternoons are far more difficult. I find it harder to maintain focus, even when removing distractions, and I know my mental acuity is not at the level it was in the morning. Therefore, I schedule phone calls, more mindless tasks, and errands during this time. A couple of times a week, I’ll also do a walking meditation on the beach. During this time, work (improvement) is still being done, just not the most critical tasks. I’m working on improving my mental productivity in the afternoon, but so far nothing has changed—perhaps reinforcing how powerful the productive time of my day is.
Evenings are a bit better than afternoons, but my priority in the evening is my family. Therefore, I purposely don’t schedule any work tasks during this part of the day. When I do have evening free time—which is common, since my wife requires more sleep than I do—I use it to catch up on reading. If my brain feels a bit fried, I’ll turn on the TV. I end the evening with some reflection.
What does “done” mean at your studio? When I ask this question onsite, I often hear anecdotes and definitions such as:
- “It compiles on my machine.”
- “I’ll know it when I see it.”
- “It’s 90% complete.”
- “It’s coded” (or “it compiles”).
- “It's done, but it needs final assets.”
- “It’s first pass” (out of an undetermined number of passes).

Usually there is no agreed-upon definition. This is a big warning sign on the road to death-march-ville.

One of the initial challenges in adopting agile is defining a more frequent and universal definition of done (DoD). As with all agile practices, it’s emergent: teams inspect and adapt the game and their work, and continually improve the DoD.

In Scrum, every sprint’s work must meet this definition of done before being accepted as done by the product owner. In Kanban, a work item cannot be pulled into the next stage until it meets the definition of done for its current stage. For example, a rigged model cannot be pulled into animation unless all the naming conventions for the skeleton are met.

Non-functional Requirements

A DoD is derived from a set of non-functional requirements. These are attributes of the game that apply universally to all work, such as:
- The game runs at a smooth, playable frame-rate
- The game runs on all target platforms
- The game streams off disc cleanly
- The game meets all Facebook API standards
- The code meets the coding standards of the studio
- The assets are appropriately named and fit within budgets
- All asset changes must be “hot-loadable” into the running game
- …and so on

Non-functional requirements are never done. Any feature added or tuned can impact frame-rate, so by the end of a sprint we must ensure that any work added does not slow frame-rate below a minimum standard.
If it does, it's not considered done.

Multiple Definitions of Done

Because potentially shippable doesn’t always mean shippable, there often needs to be more than one DoD. I’ve seen teams create up to four DoDs, covering anything from “prototype done” to “demo-able done” to “shippable done”. These identify the quality and content stages that large features which take longer than one sprint to be viable (called epics) must go through. Keep in mind that these definitions shouldn't allow the team to "waterfall" the work. Even if you have a set of cubes flying around for targets before animating creatures are ready, those cubes can't crash the game or impact frame-rate too much. If cubes are killing your frame-rate, you have more important things to address before adding animating creatures!
Starting Out

The best implementations of DoDs start small and grow (as can be said of a lot of good practices). Start by identifying a few of your larger areas of debt in a retrospective and pick some low-hanging-fruit DoDs. For example, if basic stability is an issue, a definition such as "the game can't crash while using the feature" is a good starting place. Don't be surprised if this definition impacts how much the team can commit to every sprint: quality costs...but that cost will be recouped with interest over time as practices improve.
What Post-Alpha Looks Like with a Good Definition of Done

When we started using agile, I’d hoped we could do away with the alpha and beta stages of game development. In practice, for the major initial release of a game, we still needed time after feature-complete to balance the game. What we did lose were the death marches and compromises.
I’m teaching a Product Owner workshop this week, and I had an insight about a Minimum Viable Product.
An MVP has to fulfill these criteria:
- Minimum means it’s the smallest chunk of value that allows us to build, measure, and learn. (Yes, Eric Ries’ loop)
- Viable means the actors/users can use it.
- Product means you can release it, even if only to a small, specific group of test users.
My insight came when I was discussing with the class how their data moves through their system. Think of the data like an onion: there is an inner kernel and concentric rings around it. (If you prefer, a cut-down tree, but I like the onion.) Often, you start with that small, round piece of value at the center. You then add layers around it every time you do another MVP (or experiment, if you’re prototyping).
The data has to do a round trip. That’s why I thought of it in circles like an onion. If you only implement half (or some other part) of the entire round trip, you don’t have the entire circle of minimum functionality.
The image on the left, where you see the feature go through the architectural layers, might be a better illustration.
The actions the user—whoever the user is—takes have to go into the system and come out. In an MVP, you might not have a very thin slice all by itself, but it still needs to be a thin slice.
When I think about the idea of the onion, I think, “What is the smallest piece I can define so we can see some results?” That’s the idea of the MVP.
I realize that MVPs might be useful for the team to learn about something in a small test environment. And, the smaller chunk of value you deliver, the more experiments you can run, and the faster you can learn. Maybe the idea of the onion, or the round trip through the feature will help you think about your MVPs.
BTW, I still have room in my online Product Ownership workshop that starts in January. Please join us.
Trying this out today for a remote client meeting. So far, I like it. Will update after the meeting…
Be it get-rich-quick schemes or rapid-weight-loss solutions, the Internet is littered with a million improvement schemes. In my many years of attempting to improve productivity for my clients and myself, I’ve tried just about everything. Whether the post, podcast, or book promises you can do twice the work in half the time or cram an entire work week into 4 hours, there is something out there for everyone. My first venture into this productivity-focused world was way back in the early 90s, when I watched a horrible movie titled Taking Care of Business, starring Jim Belushi and Charles Grodin. In the movie, an uptight advertising exec has his entire life in a Filofax organizer, which mistakenly ends up in the hands of a friendly convict who poses as him. The movie is still horrible, but the organizer idea seemed to work for me.
Franklin Covey Planner
From this movie, I discovered the Franklin Covey Planner. Yep, my world was filled with A1, B1, C1’s. Alas, I couldn’t make it work. Much like the guy in the movie, everything was in a little leather book with special pages (that were not cheap). Unfortunately, if I didn’t have the book in my field of view to constantly remind myself, things didn’t get done. I think I lasted a year, until I discovered the cost of refilling the book with new pages.
GTD
I then discovered GTD (Getting Things Done) by David Allen. This was 15–20 years ago. Again, it worked for a little while, but I then found myself doing too much organizing and too little doing. The world was moving away from paper filing, and everything in that system was all about paper filing. Maybe I was doing it wrong. It just wasn’t clear to me. I didn’t see any real progress or productivity improvement, so I just stopped doing it.
Personal Kanban
In mid-2009, in a moment of Internet serendipity, I ventured into the world of Personal Kanban. I think I searched “Zen” and up popped a website for a Kanban tool. I started using it and loved it. Alas, that company got purchased by Rally and they are no longer taking registrations. But this became the first system I have been able to stick with. Just to try other tools, I soon switched over to LeanKit Kanban. I’ve been using it ever since. I like that it doesn’t make any promises it can’t keep: “Visualize your work, optimize your process and deliver faster”.
LeadingAgile Transformation Framework
In 2012, I joined LeadingAgile. Though we didn’t have a defined system at the time, a Transformation Framework emerged. Since that time, when the system is followed, it works really well. When things don’t work so well, the same failure patterns are present.
Productivity Rosetta Stone
So, why do some methods work and some do not? Why did I abandon the Planner and GTD systems so long ago but still use Kanban and LeadingAgile’s transformation framework? Well, I started by listing common traits on a whiteboard and saw relationships and discovered some patterns. Not only are there three things I believe every productivity system needs to work, I also see three things that are necessary to prevent you from abandoning that system.
I describe it as a Productivity Rosetta Stone. For those unfamiliar, the Rosetta Stone is a rock slab, found in 1799, inscribed with a decree that appears in three scripts: Ancient Egyptian hieroglyphs, Demotic script, and Ancient Greek. Because the stone presents essentially the same text in all three scripts, it provided the key to the modern understanding of Egyptian hieroglyphs. I’ve applied my productivity Rosetta Stone to Scrum, Kanban, the Pomodoro Technique, and even LeadingAgile’s Transformation Framework. All of them check out, and it has given me a key to better understand productivity patterns.
3 things to increase productivity: System, Ritual, Habit
1. A system is a set of principles or procedures to get something done or accomplished. Anyone can follow a system.
2. A ritual is a series of actions or type of behavior regularly and invariably followed by someone. It’s different from a system. A system might only be followed once, but by many people. A ritual is something someone or some group does again and again, in the hope of arriving at the same or improved outcome.
3. A habit is a regular tendency or practice, especially one that is hard to give up. If you want to be productive, you have to be habitual with your rituals, as part of your system.
How does it all fit together? Name a system. Next, list the process steps, their sequence, and any rules around them. Last, do the steps again and again until they become a habit.
Lack of these kills productivity: Clarity, Progress, or Commitment
1. Clarity is the quality of being certain or definite. You need clarity in order to know what you need to do. Lack of clarity creates confusion and waste. Each step of a system should be actionable and repeatable. In order to ensure certainty around your steps, write them down; maybe draw a picture or diagram. If your outcomes are not repeatable, you have an experiment but not a system.
2. Progress is forward or onward movement toward a destination or goal. Your goal is productivity. If you lack progress, you lose momentum. If you lose momentum (or should I be so bold as to say velocity or throughput), you will lose commitment to the system.
3. Lack of commitment to the system results in you no longer using the system. You move on to something new to get the productivity results you seek.
In the event your system lacks clarity, progress, or commitment, performance will go down or you’ll stop using it altogether.
Scrum
Enough with the nebulous ideas. Let’s apply the patterns to the Scrum Framework.
Jeff Sutherland and Ken Schwaber did a pretty darned good job providing clarity around the system in The Scrum Guide. Though the Guide is only 16 pages long, there’s a whole lot to it. It includes a definition of Scrum, the theory behind it, and details on teams, events, and artifacts. That’s it! Rituals (events) include sprint planning, a daily (15-minute) Scrum, a sprint review, and a retrospective. Each of these rituals helps provide both feedback and progress within the sprint. To ensure we see the progress, we timebox sprints, commit to delivering product increments regularly, and use information radiators like burndown charts to visualize the completion of work. Like any system, if you are not habitual about each of the items within the Scrum Guide, Scrum falls apart. That means commit to the system and be consistent, sprint after sprint.
Summary
Though I have only applied the pattern to one example in this post (Scrum), I’ve also applied it to Kanban and the Pomodoro Technique. Look for future posts on the topic. As in Scrum, once your defined system becomes habitual, you can start to focus on improvements. Maybe you want to do more in less time. Maybe you want to do the same with higher quality. You be the judge. It’s your system. Remember, you’ll still need clarity, progress, and commitment, or your productivity will be short-lived.
Listen to Dave Prior and me in an episode of LeadingAgile Sound Notes, as we talk about the Productivity Triangle.
If you want a copy of the triangle, download it here: Productivity Triangle Template