
Feed aggregator

Meetings, Meetings, and More Meetings

Leading Agile - Mike Cottmeyer - Thu, 03/19/2015 - 14:25

Why on earth do I need to spend so much of my time in a meeting? This is an absolutely sane question that most team members wind up asking at some point while I am coaching an organization towards more adaptive management techniques.

Regardless of the role, there are other things beyond meetings that we have traditionally declared to be a productive use of time. If you are a developer, then we declare productivity to be associated with time spent writing software. If you are a product manager, then we declare productivity to be associated with time spent defining the next version of a product or understanding the market’s demands. Whatever the role, it is rare for an organization or a profession to associate meeting time with high productivity.

From this perspective, it makes a ton of sense when people ask:

Why on earth do I need to spend so much of my time in a meeting?

Here’s my usual answer:

What defines a productive minute: is it one spent focusing on your craft, or one spent delivering value to the organization as quickly as possible?

I tend to think that a productive minute is one that is spent delivering value to the organization as quickly as possible. So, while time spent practicing a craft is absolutely a critical part of getting value to the organization, it is a waste if the individual is not hyper-focused on the actual needs of the organization. And this is where meetings come into the picture.

Effective meetings have a specific theme; they enable a team to establish high clarity around the needs of the organization and reinforce accountability. For most of the teams that I coach this involves a few specific themes:

(1) Daily Standup – This is a quick touchpoint that is oriented around maintaining accountability within a team as each member takes a minute to update the other team members about the progress made over the past 24 hours, progress that they expect to make over the next 24 hours, and any issues or concerns that they need help addressing.

(2) Tactical Meeting – This is an hour or more and has a very specific purpose: dealing with short-term tactics such as creating clarity around near-term market needs or ensuring that the team is successful in meeting its commitments.

(3) Strategic Meeting – This is usually a half day or more and is focused on creating clarity about how to move the organization forward, centered on the longer-term vision and strategies.

What’s your take: are meetings useful in your organization? Do your meetings have specific themes, or are they a mishmash of agenda topics?

The post Meetings, Meetings, and More Meetings appeared first on LeadingAgile.

Categories: Blogs

Tell us Your Thoughts: LeanKit Analytics & Reporting Survey

With advancements coming soon to LeanKit analytics, we’re looking to learn more about your reporting interests. Do you want more Lean, Scrum, SAFe, or general project management reports? Do you want to know more about individuals, teams, projects, or the entire portfolio? Use the survey below to let us know which reports you use today — whether […]

The post Tell us Your Thoughts: LeanKit Analytics & Reporting Survey appeared first on Blog | LeanKit.

Categories: Companies

Collaborative blogging contest

Pivotal Tracker Blog - Wed, 03/18/2015 - 19:40

We are excited to be sponsoring a $500 cash prize for the best Pivotal Tracker post submitted in AirPair’s $100K developer writing competition!

AirPair has released cool features that allow authors and readers to collaborate on posts, just like normal code via forks and pull requests! Over the next 10 weeks, you can win your share of $100,000 in prize money for the best tutorials, opinion pieces, and tales of using Pivotal Tracker in production.

Have you used Pivotal Tracker in a way you are particularly proud of? Have you learned something you feel others would benefit from? How have you integrated it with other APIs to get the job done? The average post published on AirPair in January was read 15,000 times, so it’s a great way to share the cool things you’ve made with fellow developers.

Click here to submit your posts before May 30.


The post Collaborative blogging contest appeared first on Pivotal Tracker.

Categories: Companies

How to Make Smart Tradeoffs When Developing Software Products

Pivotal Tracker Blog - Wed, 03/18/2015 - 17:49

As technologists we want to build software that is friendly, fast, beautiful, reliable, secure, and scalable. And we expect ourselves to deliver it on time and under budget, because our ultimate goal is to have lots of happy customers who can do what they want: cue Daft Punk’s Technologic!

But time and energy are finite, and we simply cannot deliver it all at once. We need to choose our priorities, and this choice is one we should make consciously.

Evaluating our software development priorities while dealing with these constraints is what’s known as navigating the tradeoff space.

How can you make wise tradeoffs for your product?

The choice is based on a balance between your technology stack and business model type.

“Move fast and break things!”

While this has become a popular motto, it doesn’t apply to every company.

For example, enterprise software companies that are building system-level software prioritize reliability because customers need to use them. Each change needs to be rigorously tested, and often approved before it can be released.

Meanwhile, consumer internet companies spend time and money on making their UX delightful so that people want to use them. Reliability is something they’re willing to sacrifice. Since many are web-based applications, they can iterate quickly and release changes frequently.

So yes, they can move fast and break things.

The tradeoff space may seem insurmountable, but you too can become confident about your decisions by learning from a true pro!

In the second episode of Femgineer TV, I’ve invited Jocelyn Goldfein, the Former Director of Engineering at Facebook, to talk about:

  • What the tradeoff space is
  • How to not get overwhelmed by the tradeoff space
  • How to make decisions that will help you ship product that your customers will love and help you meet business goals

Jocelyn has led engineering teams at early to growth-stage startups like VMware and enterprise companies like Trilogy, so she’s definitely had her fair share of dealing with constraints and having to make tradeoffs to ship product and meet business goals.

We also dig into the cost of a mistake, how to take risks, the BIGGEST mistake Jocelyn sees technical folks making over and over again, and how to avoid making it!

Watch the episode to learn how you can make smart tradeoffs when developing software products.

Viewers Challenge!

After you’ve watched the episode, take our challenge. Let us know in the blog comments below:

  • What was the last tradeoff you had to make?
  • What was the cost of the mistake?
  • How did you or your company feel about taking the risk?

The 3 BEST responses will receive a special giveaway from our sponsor Pivotal Tracker and be showcased in Femgineer’s weekly newsletter!

Submit your responses in the blog comments below by March 19th at 11:59pm PST.

The next episode of Femgineer TV airs in April. I’ve invited Ryan Hoover and Erik Torenberg, the founders of Product Hunt, to talk about: How to Build a Community of Evangelists for Your Software Product. Subscribe to our YouTube channel to know when it’s out!

The post How to Make Smart Tradeoffs When Developing Software Products appeared first on Pivotal Tracker.

Categories: Companies

Microservices: coupling vs. autonomy

Xebia Blog - Wed, 03/18/2015 - 15:35

Microservices are the latest architectural style promising to resolve all the issues we had with previous architectural styles. And just like other styles, it has its own challenges. The challenge discussed in this blog is how to realise coupling between microservices while keeping the services as autonomous as possible. Four options will be described, and a clear winner will be selected in the conclusion.

To me, microservices are autonomous services that take full responsibility for one business capability. Full responsibility includes presentation, API, data storage and business logic. Autonomous is the keyword for me: by making the services autonomous, they can be changed with no or minimal impact on the others. If services are autonomous, then operational issues in one service should have no impact on the functionality of other services. That all sounds like a good idea, but services will never be fully isolated islands. A service is virtually always dependent on data provided by another service. For example, imagine a shopping cart microservice as part of a web shop: some other service must put items in the shopping cart, and the shopping cart contents must be provided to yet other services to complete the order and get it shipped. The goal of this blog post is to explain which pattern should be followed to realise these couplings while retaining maximum autonomy.


I'm going to structure the patterns by two dimensions: the interaction pattern, and the information exchanged using this pattern.

Interaction pattern: Request-Reply vs. Publish-Subscribe.

  • Request-Reply means that one service does a specific request for information (or to take some action) and then expects a response. The requesting service therefore needs to know what to ask and where to ask it. This could still be implemented asynchronously, and of course you could put some abstraction in place such that the requesting service does not have to know the physical address of the other service; the point remains that one service is explicitly asking for specific information (or an action to be taken) and functionally waiting for a response.
  • Publish-Subscribe: with this pattern a service registers itself as being interested in certain information, or being able to handle certain requests. The relevant information or requests will then be delivered to it and it can decide what to do with it. In this post we'll assume that there is some kind of middleware in place to take care of delivery of the published messages to the subscribed services.

Information exchanged: Events vs. Queries/Commands

  • Events are facts that cannot be argued about. For example, an order with number 123 is created. Events only state what has happened. They do not describe what should happen as a consequence of such an event.
  • Queries/Commands: Both convey what should happen. Queries are a specific request for information, commands are a specific request to the receiving service to take some action.
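The distinction between the two kinds of message can be sketched as simple payloads. This is a hedged illustration only; the field names and types are assumptions, not taken from any particular system:

```python
# An event states a fact that already happened; it names no recipient
# and prescribes no action to be taken as a consequence.
order_created_event = {
    "type": "OrderCreated",   # past tense: a fact that cannot be argued about
    "order_number": 123,
    "items": ["book", "pen"],
}

# A command conveys what should happen: it asks a receiving capability
# to take a specific action.
ship_order_command = {
    "type": "ShipOrder",      # imperative: a request for action
    "order_number": 123,
    "address": "42 Main St",
}
```

The grammatical mood of the message type (past tense vs. imperative) is a quick way to tell which kind of coupling a message introduces.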

Putting these two dimensions in a matrix results into 4 options to realise couplings between microservices. So what are the advantages and disadvantages for each option? And which one is the best for reaching maximum autonomy?

In the description below we'll use two services to illustrate each pattern: the Order service, which is responsible for managing orders, and the Shipping service, which is responsible for shipping stuff, for example the items included in an order. Services like these could be part of a webshop, which could then also contain services like a shopping cart, a product (search) service, etc.

1. Request-Reply with Events

In this pattern one service asks a specific other service for events that took place (since the last time it asked). This implies a strong dependency between these two services: the Shipping service must know which service to connect to for events related to orders. There is also a runtime dependency, since the Shipping service will only be able to ship new orders if the Order service is available.

Since the Shipping service only receives events it has to decide by itself when an order may be shipped based on information in these events. The Order service does not have to know anything about shipping, it simply provides events stating what happened to orders and leaves the responsibility to act on these events fully to the services requesting the events.

2. Request-Reply with Commands/Queries

In this pattern the Order service requests the Shipping service to ship an order. This implies strong coupling, since the Order service is explicitly requesting a specific service to take care of the shipping, and the Order service must now determine when an order is ready to be shipped. It is aware of the existence of a Shipping service and it even knows how to interact with it. If factors not related to the order itself should be taken into account before shipping the order (e.g. the credit status of the customer), then the Order service must take these into account before requesting the Shipping service to ship the order. Now the business process is baked into the architecture, and therefore the architecture cannot be changed easily.

Again there is a runtime dependency since the Order service must ensure that the shipping request is successfully delivered to the Shipping service.

3. Publish-Subscribe with Events

In Publish-Subscribe with Events the Shipping service registers itself as being interested in events related to orders. After registering itself it will receive all order-related events without being aware of what their source is; it is loosely coupled to the source of the order events. The Shipping service will need to retain a copy of the data received in the events so that it can conclude when an order is ready to be shipped. The Order service needs no knowledge about shipping. If multiple services provide order-related events containing data relevant to the Shipping service, the Shipping service cannot even tell them apart. If one of the services providing order events is down, the Shipping service will not be aware of it; it just receives fewer events, and it will not be blocked by this.
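A minimal in-memory sketch can make this pattern concrete. All names here (EventBus, OrderCreated, OrderPaid) are illustrative assumptions; a real system would rely on messaging middleware rather than an in-process bus:

```python
from collections import defaultdict

class EventBus:
    """Stands in for the middleware that delivers published events."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

class ShippingService:
    """Retains its own copy of order data and decides for itself when to ship."""
    def __init__(self, bus):
        self.orders = {}    # local copy of data received via events
        self.shipped = []
        bus.subscribe("OrderCreated", self.on_order_created)
        bus.subscribe("OrderPaid", self.on_order_paid)

    def on_order_created(self, event):
        self.orders[event["order_number"]] = {"paid": False}

    def on_order_paid(self, event):
        self.orders[event["order_number"]]["paid"] = True
        # The Shipping service, not the Order service, concludes
        # that a paid order is ready to ship.
        self.shipped.append(event["order_number"])

bus = EventBus()
shipping = ShippingService(bus)

# The Order service only publishes facts; it knows nothing about shipping.
bus.publish("OrderCreated", {"order_number": 123})
bus.publish("OrderPaid", {"order_number": 123})
```

Note that neither side names the other: the Order service publishes facts, and the Shipping service acts on its own copy of the data, which is exactly the autonomy this pattern buys.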

4. Publish-Subscribe with Commands/Queries

In Publish-Subscribe with Commands/Queries the Shipping service registers itself as a service that is able to ship stuff. It then receives every command that wants something shipped. The Shipping service does not have to be aware of the source of the shipping commands, and on the flip side the Order service is not aware of which service will take care of shipping. In that sense they are loosely coupled. However, the Order service is aware of the fact that orders must get shipped, since it is sending out a ship command; this does make the coupling stronger.

Conclusion

Now that we have described the four options, we go back to the original question: which of the four patterns above provides maximum autonomy?

Both Request-Reply patterns imply a runtime coupling between two services, and that implies strong coupling. Both Commands/Queries patterns imply that one service is aware of what another service should do (in the examples above, the Order service is aware that another service takes care of shipping), and that also implies strong coupling, this time on a functional level. That leaves one option: 3. Publish-Subscribe with Events. In this case the services are not aware of each other's existence, from both a runtime and a functional perspective. To me this is the clear winner for achieving maximum autonomy between services.

The next question pops up immediately: should you always couple services using Publish-Subscribe with Events? If your only concern is maximum autonomy of services, the answer would be yes. But there are more factors that should be taken into account. Always coupling using this pattern comes at a price: data is replicated, measures must be taken to deal with lost events, event-driven architectures add extra requirements on infrastructure, there might be extra latency, and more. In a next post I'll dive into these trade-offs and put things into perspective. For now, remember that Publish-Subscribe with Events is a good basis for achieving autonomy of services.

Categories: Companies

More Ways to Visualize Your Project Portfolio

Johanna Rothman - Wed, 03/18/2015 - 13:47

Every time I work with a client or teach a workshop, people want more ways to visualize their project portfolios. Here are some ideas:

Here is a kanban view of the project portfolio with a backlog:

Kanban view of the project portfolio


And a kanban view of the project portfolio with an “Unstaffed Work” line, so it’s clear:

Project Portfolio Kanban with Unstaffed Work Line


If you haven’t read Visualizing All the Work in Your Project Portfolio, you should. It has some other options, too.

I have yet more options in Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects.

Categories: Blogs

are you done yet? how about now?

Derick Bailey - new ThoughtStream - Wed, 03/18/2015 - 12:00


It’s the curse of doing large things – the constant questioning from other people and even yourself, wanting to know if you’re done yet. I hate it. It makes me mad. I want to scream, “NO! Can’t you see I’m still working?! Go away and I’ll tell you when I’m done!”

Now, compare that to the happiness of a recent conversation I had:

  • Client: “Hey, looks like you’ve been making some great progress today! I see a lot of things checked off!”
  • Me: “Heh – not really. Just another day of work. Except I broke the tickets down into smaller things.”
  • Client: “Great! Keep up the great work – I’m glad to see you’re getting so much done!”

So, what’s the difference, here? It’s not the client… I’ve had the same client for well over a year now, and I’ve had more than one instance of me wanting to yell about when I’ll be done. The difference is in how I broke down the big things that I was working on. I made myself look good by having many smaller things to do and showing that I was getting them done.

Check Check Check

When you look at a task board, issue list, ticket system or any other place where you keep a list of things to do, it can be overwhelming to see that One Giant Thing To Do. It’s a monumental task that scares you when you think about it, and makes you want to crawl under your desk and hide.

Like so many other tasks in our lives, though, it becomes much more manageable when we break that One Giant Thing To Do down into smaller things to do. Suddenly that giant thing seems like it may actually be possible, because you can see that you’ve made progress. You’ve moved tickets across your task board, checked them off, or done whatever it is that you do to say these small things are done.

Happy++

As an added bonus to getting many small things done every day, you’ll find your own satisfaction increasing. When you can look back at your list of things to do and see that you got 15 things done today, instead of looking at that One Giant Thing To Do that has been on your list all month, you will be much happier.

Getting things done makes us, as people who do things, happy. It also makes our client / boss / team / customers / etc happy. When the people for whom we are building things can see the progress we are making (even if they don’t understand that progress), they know that we are working and are going to get it done eventually.

Break It Down, Now

The next time you set out to conquer that One Giant Thing To Do, take a few moments and break it down into smaller things.

When you’ve got a rough idea of the smaller things, get started on one of them. Break it apart and break it down further when you see the need.

The perception of productivity will greatly improve your outlook on the One Giant Thing To Do.

– Derick

Categories: Blogs

Android: JUnit XML Reports with Gradle

A little madness - Wed, 03/18/2015 - 08:32

The Android development tools project has seen big changes over the last year. The original Eclipse ADT development environment was superseded late last year by Android Studio — a new IDE based on Intellij. Under the hood Android Studio also uses a new command line build system based on Gradle, replacing the previous Ant-based system. I’ve been keen to find out how these changes impact the integration of Android test reports with continuous integration servers like Pulse.

Summary
  • Android JUnit Report is redundant.
  • Run on-device Android tests with: ./gradlew connectedAndroidTest
  • Collect reports from: app/build/outputs/androidTest-results/connected/*.xml

 

Details

The original Ant-based build system for Android didn’t produce XML test reports for instrumentation tests (i.e. those that run on-device), prompting me to create the Android JUnit Report project. Android JUnit Report produced XML output similar to the Ant JUnit task, making it compatible with most continuous integration servers. The good news is: Android JUnit Report is now redundant. The new Gradle-based build system produces sane XML test reports out of the box. In fact, they’re even more complete than those produced by Android JUnit Report, so should work with even more continuous integration servers.

The only downside is the documentation, which is a little confusing (documents for the old system are still about) and not very detailed. With a bit of experimentation and poking around I found out how to run on-device (or emulator) tests and where the XML reports are stored. With a default project layout as created by Android Studio:

ASDemo.iml
app/
  app.iml
  build.gradle
  libs/
  proguard-rules.pro
  src/
    androidTest/
    main/
build.gradle
gradle
gradle.properties
gradlew
gradlew.bat
local.properties
settings.gradle

You get a built-in version of Gradle to use for building your project, launched via gradlew. To see available tasks, run:

$ ./gradlew tasks

(This will download a bunch of dependencies when first run.) Amongst plenty of output, take a look at the Verification Tasks section:

Verification tasks
------------------
check - Runs all checks.
connectedAndroidTest - Installs and runs the tests for Debug build on connected devices.
connectedCheck - Runs all device checks on currently connected devices.
deviceCheck - Runs all device checks using Device Providers and Test Servers.
lint - Runs lint on all variants.
lintDebug - Runs lint on the Debug build.
lintRelease - Runs lint on the Release build.
test - Run all unit tests.
testDebug - Run unit tests for the Debug build.
testRelease - Run unit tests for the Release build.

The main testing target test does not run on-device tests, only unit tests that run locally. For on-device tests you use the connectedAndroidTest task. Try it:

$ ./gradlew connectedAndroidTest
...
:app:compileDebugAndroidTestJava
:app:preDexDebugAndroidTest
:app:dexDebugAndroidTest
:app:processDebugAndroidTestJavaRes UP-TO-DATE
:app:packageDebugAndroidTest
:app:assembleDebugAndroidTest
:app:connectedAndroidTest
:app:connectedCheck

BUILD SUCCESSFUL

Total time: 33.372 secs

It’s not obvious, but this produces compatible XML reports under:

app/build/outputs/androidTest-results/connected

with names based on the application module and device. In your continuous integration setup you can just collect all *.xml files in this directory for reporting.
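As a sketch, a continuous integration step could gather those reports with a few lines of scripting. The directory path below follows the default project layout described above; everything else is illustrative:

```python
from pathlib import Path

# Reports produced by ./gradlew connectedAndroidTest, one per module/device.
report_dir = Path("app/build/outputs/androidTest-results/connected")

# Collect every XML report; feed these files to your CI server's JUnit parser.
reports = sorted(report_dir.glob("*.xml"))
for report in reports:
    print(report)
```

Most CI servers accept an equivalent glob pattern directly in their JUnit report configuration, so a script like this is only needed when the server wants an explicit file list.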

Although the new build system has killed the need for my little Android JUnit Report project, this is a welcome development. Now all Android developers get better test reporting without an external dependency. Perhaps it will even encourage a few more people to use continuous integration servers like Pulse to keep close tabs on their tests!

Categories: Companies

Reducing the size of Docker Images

Xebia Blog - Wed, 03/18/2015 - 02:00

Using the basic Dockerfile syntax it is quite easy to create a fully functional Docker image. But if you just start adding commands to the Dockerfile the resulting image can become unnecessary big. This makes it harder to move the image around.

A few basic actions can reduce this significantly.

Categories: Companies

Neo4j: Detecting potential typos using EXPLAIN

Mark Needham - Wed, 03/18/2015 - 00:46

I’ve been running a few intro to Neo4j training sessions recently using Neo4j 2.2.0 RC1, and at some stage in every session somebody will make a typo when writing out one of the example queries.

For example, one of the queries that we write about half way through finds the actors and directors who have worked together and aggregates the movies they were in.

This is the correct query:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5

which should yield the following results:

==> +-----------------------------------------------------------------------------------------------------------------------+
==> | actor.name           | director.name    | movies                                                                      |
==> +-----------------------------------------------------------------------------------------------------------------------+
==> | "Hugo Weaving"       | "Andy Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Hugo Weaving"       | "Lana Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Laurence Fishburne" | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Keanu Reeves"       | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Carrie-Anne Moss"   | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> +-----------------------------------------------------------------------------------------------------------------------+

However, a common typo is to write ‘DIRECTED_IN’ instead of ‘DIRECTED’, in which case we’ll see no results:

MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5
 
==> +-------------------------------------+
==> | actor.name | director.name | movies |
==> +-------------------------------------+
==> +-------------------------------------+
==> 0 row

It’s not immediately obvious why we aren’t seeing any results which can be quite frustrating.

However, in Neo4j 2.2 the ‘EXPLAIN’ keyword has been introduced and we can use this to see what the query planner thinks of the query we want to execute without actually executing it.

Instead, the planner uses the knowledge it has about our schema to come up with the plan it would run and an estimate of how much of the graph that plan would touch:

EXPLAIN MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director)
RETURN actor.name, director.name, COLLECT(movie.title) AS movies
ORDER BY LENGTH(movies) DESC
LIMIT 5

(Screenshot: query plan for the query using the ‘DIRECTED_IN’ relationship)

The first row of the query plan describes an all nodes scan which tells us that the query will start from the ‘director’ but it’s the second row that’s interesting.

The estimated rows when expanding the ‘DIRECTED_IN’ relationship is 0 when we’d expect it to at least be a positive value if there were some instances of that relationship in the database.

If we compare this to the plan generated when using the proper ‘DIRECTED’ relationship we can see the difference:

(Screenshot: query plan for the query using the ‘DIRECTED’ relationship)

Here we see an estimated 44 rows from expanding the ‘DIRECTED’ relationship so we know there are at least some nodes connected by that relationship type.

In summary if you find your query not returning anything when you expect it to, prefix an ‘EXPLAIN’ and make sure you’re not seeing the dreaded ‘0 expected rows’.

Categories: Blogs

Who has two thumbs, excels at pair programming, and wants to work with us in Denver? Is it you?

Pivotal Tracker Blog - Wed, 03/18/2015 - 00:15

At Pivotal Tracker, we’re trying to make life better for developers all over the world, one project at a time. Our philosophy is that a good tool helps you do your job and gets out of your way, allowing you to focus on what’s important. We’re looking for a few great engineers to join our team to work on improving the greatest agile communication tool around. If this sounds like something you can get behind, read on to learn about life on the Tracker team.

Great culture

Daily catered breakfasts. Start the day off right with a catered breakfast while you catch up with your team, then hit the ground running—together.

Ping-pong. When you need to get up and stretch your legs and reset your brain, grab a ping-pong paddle and show off your skills (bragging rights included).

Small team. The Pivotal Tracker team is lean and mean, which means you’ll have an immediate impact.

Collaboration is key. We build things as a team, so we make decisions as a team. We believe a highly collaborative approach is part of the DNA of success.

Fun Fridays. At the end of the week, get the weekend started a little early with some head-to-head gaming.

Curling with the Tracker Team


Great location

The City. Denver is one of the fastest-growing cities in the country for a reason. Try out one of the innovative, chef-owned restaurants that are popping up everywhere or catch a show in the second-largest performing arts center in the country.

Get outside. With more than 300 days of sunshine a year, Denver is a city of active people. Whether you want to hike a fourteener, explore the trails on a mountain bike in the summer, or hit the slopes in the winter, Colorado has something to inspire you to get off the couch.

Brand-new office. We recently finished building a brand-new office building in the LoHi neighborhood. We have a full coffee bar, a dedicated B-Cycle station, and great patios for when you need a breath of fresh Colorado air.

Downtown Denver


Colorado Mountains in the Summer


Great discipline

We pair all the time. Two heads are better than one, which is why pairing is a core part of our discipline. You’ll ramp up faster and spend less time dealing with roadblocks.

TDD. Good engineers write good code; great engineers write tests. We practice test-driven development as much as possible.

Refactoring. We think that there’s always room to do things better, which is why we encourage refactoring as a regular part of the process.

Regular retros. Our process is just as important as our code, which is why we have regular retros to check in often and make sure everything is running as smoothly as possible.

Pivotal Tracker Pivots Pairing


 

You got this far, so why not go ahead and apply?!

Does this sound too good to be true? Look, we’re not making this up; come see for yourself! If you think we’d be a good match, apply online now and let’s get to work.

The post Who has two thumbs, excels at pair programming, and wants to work with us in Denver? Is it you? appeared first on Pivotal Tracker.

Categories: Companies

Is “Protecting the Team” the Right Thing?

Illustrated Agile - Len Lagestee - Tue, 03/17/2015 - 23:30

If you were to ask a Scrum Master what they do, a common response is “we protect the team.” In the context of protecting the team from themselves or from an aggressive product owner, as Mike Cohn describes, I would agree. Protecting the team from complacency or overwork is a worthy endeavor.

For many Scrum Masters, protecting means shielding the team from outside distractions and interferences. These distractions and interferences come in different forms but most of them are from other humans. Here are three I have witnessed and experienced:

  • The “trespassers” have lost their voice of influence on a product or project. This may be a senior leader with a history of ownership on a product. As an organization grows, there is a need for them to relinquish control over their product but this is often a challenge for many senior leaders. They feel the need to strongly interject their opinions on the direction of a product vision or backlog. For the product owner, this leads to a lack of autonomy and a feeling of frustration. For the senior leader, this leads to intruding on product owner territory to get their ideas heard.
  • The “uninvited guests” have lost their assignment to direct the team. This is typically a manager with direct reports on the team. Prior to agile, they would be the ones who would assign work to the team and would always know what the team was doing. Status reports often originate from the uninvited guests (who are now looking from the outside in).
  • The “requestors” have lost their direct connection to the team. This is typically a business person who, in the past, had the ear of a developer and now must go through the product owner. When something needed to be fixed or tweaked, a quick call to the developer and in just a few minutes the changes were made. This behavior often continues even after a product owner has been assigned to the team.

Our natural response to these situations is to protect, to shield, and to make life easier for the team by limiting the number of “distractions.” But just how should a Scrum Master handle them?

As an example, when the “trespasser” attempts to influence a product backlog, is a Scrum Master expected to tell the leader to back off? I have found very few who will. Most recognize their performance review, salary, bonus, and reputation are tied to the perception the leader has of them and are not willing to take the risk.

Beyond the personal impact, being in a mode of protecting also:

  • Increases isolation. As we continue to deflect people away from the team without creating an avenue for communication and conversation, we are conditioning them to never return. While this may seem like a good thing, this is where silos are born.
  • Fosters distrust. When people are isolated it is natural for doubt and suspicion to begin. For leaders, this is typically the time they will feel the need to get involved.
  • Solves nothing. Shielding the team will buy some time…until the next time. There is a short-term alleviation of discomfort or inconvenience but the real issues triggering the need to protect won’t go away.

As an alternative to protecting the team, here are a few areas for the Scrum Master and team to focus on to begin transforming into a culture where protection is no longer necessary:

Become a radiating team. I mentioned this in my last blog post. By naturally radiating work progress, the team begins to feel open and welcoming. Nothing feels hidden or mysterious.

Create connection points and conversations. The sprint review is a great place to start. Make this session open to all and facilitate healthy dialog around what was reviewed and the direction of the product. Design other serendipitous occasions for people on the team to interact and engage with stakeholders and leaders.

Focus on co-creating opportunities. When the feeling or sense of protection emerges, use it to seek out ways to build things together. There are advantages to this:

  • Co-creation will illuminate lack of trust (and build trust) very quickly. For many organizations, a culture of distrust is just below the surface and is rarely addressed. By co-creating, we can begin to address this painful dysfunction and find ways to rebuild trust where needed.
  • Co-creation will amplify the strengths of each participant. When we spend time with each other, we learn how to leverage the best each has to offer.
  • Co-creation has transparency built-in. No need for status reports or additional meetings as vested parties have all contributed to the work. The Agile Leadership Engagement Grid walks through an approach for this type of transparency and connection at different levels in the enterprise.

SHARE YOUR THOUGHTS: Are there situations where you feel you must protect your team? Do you have any techniques to welcome interaction and co-creation? Please add your comments below.

Becoming a Catalyst - Scrum Master Edition

The post Is “Protecting the Team” the Right Thing? appeared first on Illustrated Agile.

Categories: Blogs

Stabilization Sprints and Velocity

Agile Learning Labs - Tue, 03/17/2015 - 22:42

Here is a question that just showed up in my in-box regarding how to calculate a scrum team’s velocity when they are doing stabilization sprints. This notion of stabilization sprints has become more popular lately, as they are included in SAFe (Scaled Agile Framework).

Question

We do a 2-week stabilization sprint every 4th sprint where we complete regression testing, etc. but don’t take any new stories. Is there a rule of thumb around including a stabilization sprint in the team’s velocity?

Answer

The purpose of tracking a scrum team’s velocity is to give stakeholders (and the team) predictability into the rate at which they will complete the planned deliverables (the stories). Velocity is the rate of delivery. The stabilization work doesn’t represent specific deliverables that the stakeholders have asked for; it is simply a cost that you are paying every 4th sprint, because you aren’t really done with the stories during the non-stabilization sprints.

You can reduce this cost by having a more robust definition of done. Look at each thing that gets done during stabilization and ask “How could we do that during each sprint, for each story, so that done really means done?” As you move more work out of stabilization and into your definition of done, your predictability gets better because there are fewer surprises to be discovered during stabilization. The amount of stabilization time that you need goes down, and you can measure the cost savings in terms of reduced time and effort (which is money). By the way, you can learn more about definition of done this Wednesday at the Scrum Professionals MeetUp.

Therefore, my recommendation is to not assign points to the stabilization work.
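The arithmetic behind this recommendation can be illustrated with a minimal sketch (the sprint data and function name below are hypothetical, purely for illustration):

```python
# Minimal sketch: how zero-point stabilization sprints affect a velocity average.
# The sprint data is made up; every 4th sprint is a stabilization sprint.

def average_velocity(sprint_points):
    """Average completed story points per sprint."""
    return sum(sprint_points) / len(sprint_points)

points_per_sprint = [21, 24, 19, 0, 23, 22, 20, 0]

# Counting the 0-point stabilization sprints drags the average down and
# blurs the rate at which planned stories are actually delivered.
print(average_velocity(points_per_sprint))  # 16.125

# Looking only at delivery sprints shows the team's real delivery rate --
# and makes the recurring cost of stabilization visible as the gap between
# the two numbers.
delivery_sprints = [p for p in points_per_sprint if p > 0]
print(average_velocity(delivery_sprints))  # 21.5
```

As the definition of done grows more robust and stabilization shrinks, the two numbers converge, which is one way to see the cost savings described above.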

Here are a couple of other posts related to velocity:

Cheers,

Chris

Categories: Companies

Agile and Scrum Trello Extensions

Scrum Expert - Tue, 03/17/2015 - 19:27
Trello is a free on-line project management tool that provides a flexible and visual way to organize anything. This approach is naturally close to the visual boards used in the Scrum or Kanban approaches. As the tool has an open architecture, some extensions have been developed for a better implementation of Agile project management in Trello. Update March 17 2015: added the Screenful for Trello extension. The visual representation and the card system used by Trello already make it possible to use it for Scrum projects that need a virtual board to display their ...
Categories: Communities

20 Common Logical Fallacies – Don’t Be a Victim!

Agile For All - Bob Hartman - Tue, 03/17/2015 - 16:15
The 20 Most Common Logical Fallacies
  1. Appeal to ignorance – Thinking a claim is true (or false) because it can’t be proven true (or false).
  2. Ad hominem – Making a personal attack against the person saying the argument, rather than directly addressing the issue.
  3. Strawman fallacy – Misrepresenting or exaggerating another person’s argument to make it easier to attack.
  4. Bandwagon fallacy – Thinking an argument must be true because it’s popular.
  5. Naturalistic fallacy – Believing something is good or beneficial just because it’s natural.
  6. Cherry picking – Only choosing a few examples that support your argument, rather than looking at the full picture.
  7. False dilemma – Thinking there are only two possibilities when there may be other alternatives you haven’t considered.
  8. Begging the question – Making an argument that something is true by repeating the same thing in different words.
  9. Appeal to tradition – Believing something is right just because it’s been around for a really long time.
  10. Appeal to emotions – Trying to persuade someone by manipulating their emotions – such as fear, anger, or ridicule – rather than making a rational case.
  11. Shifting the burden of proof – Thinking instead of proving your claim is true, the other person has to prove it’s false.
  12. Appeal to authority – Believing that just because an authority or “expert” believes something, it must be true.
  13. Red herring – When you change the subject to a topic that’s easier to attack.
  14. Slippery slope – Taking an argument to an exaggerated extreme. “If we let A happen, then Z will happen.”
  15. Correlation proves causation – Believing that just because two things happen at the same time, that one must have caused the other.
  16. Anecdotal evidence – Thinking that just because something applies to you, it must be true for most people.
  17. Equivocation – Using two different meanings of a word to prove your argument.
  18. Non sequitur – Implying a logical connection between two things that doesn’t exist. “It doesn’t follow…”
  19. Ecological fallacy – Making an assumption about a specific person based on general tendencies within a group they belong to.
  20. Fallacy fallacy – Thinking just because a claim follows a logical fallacy that it must be false.


Faulty thinking is part of life. We’re not perfect, nor do we think perfectly. It is, however, helpful to identify faulty thinking in our own mental processes. Sometimes, merely being aware of how we think can help us stay away from potential pitfalls in our logic.

It also helps to be aware when people use logical fallacies, especially to ‘rationalize’ their thinking. Don’t be afraid to call it out for what it is. Getting people together to collaborate can be a challenge in itself; candor, honesty, and arriving at a shared understanding are crucial for any decision-making process.

Be a head above the rest. Bring people together when making decisions; just make sure we aren’t dealing with dissonance in irrational ways… :)

[HT: TheMotionMachine]

The post 20 Common Logical Fallacies – Don’t Be a Victim! appeared first on Agile For All.

Categories: Blogs

One month of mini habits

Mark Needham - Tue, 03/17/2015 - 03:32

I recently read a book in the ‘getting things done’ genre written by Stephen Guise titled ‘Mini Habits‘ and although I generally don’t like those types of books I quite enjoyed this one and decided to give his system a try.

The underlying idea is that there are two parts of actually doing stuff:

  • Planning what to do
  • Doing it

We often get stuck in between the first and second steps because what we’ve planned to do is too big and overwhelming.

Guise’s approach for overcoming this inaction is to shrink the amount of work to do until it’s small enough that we don’t feel any resistance to getting started.

It should be something that you can do in 1 or 2 minutes – stupidly small – something that you can do even on your worst day when you have no time/energy.

I’m extremely good at procrastinating so I thought I’d give it a try and see if it helped. Guise suggests starting with one or two habits but I had four things that I want to do so I’ve ignored that advice for now.

My attempted habits are the following:

  • Read one page of a data science related paper/article a day
  • Read one page of a computer science related paper/article a day
  • Write one line of data science related code a day
  • Write 50 words on the blog a day

Sooooo… has it helped?

In terms of doing each of the habits I’ve been successful so far – today is the 35th day in a row that I’ve managed to do each of them. Having said that, there have been some times when I’ve got back home at 11pm and realised that I haven’t done 2 of the habits and need to quickly do the minimum to ‘tick them off’.

The habit I’ve enjoyed doing the most is writing one line of data science related code a day.

My initial intention was that this would involve only writing machine learning code, but at the moment I’ve made it a bit more generic so it can include things like the Twitter Graph or other bits and pieces that I want to get started on.

The main problem I’ve had with making progress on mini projects like that is that I imagine its end state and it feels too daunting to start on. Committing to just one line of code a day has been liberating in some way.

One tweak I have made to all the habits is to have some rough goal of where all the daily habits are leading as I noticed that the stuff I was doing each day was becoming very random. Michael pointed me at Amy Hoy’s ‘Guide to doing it backwards‘ which describes a neat technique for working back from a goal and determining the small steps required to achieve it.

Writing at least 50 words a day has been beneficial for getting blog posts written. Before the last month I found myself writing most of my posts at the end of the month, but I have a more regular cadence now, which feels better.

Computer science wise I’ve been picking up papers which have some sort of link to databases to try and learn more of the low level detail there. e.g. I’ve read the LRU-K cache paper which Neo4j 2.2’s page cache is based on and have been flicking through the original CRDTs paper over the last few days.

I also recently came across the Papers We Love repository so I’ll probably work through some of the distributed systems papers they’ve collated next.

Other observations

I’ve found that if I do stuff early in the morning it feels better, as I know it’s out of the way and doesn’t linger over me for the rest of the day.

I sometimes find myself wanting to just tick off the habits for the day even when it might be interesting to spend more time on one of the habits. I’m not sure what to make of this really – perhaps I should reduce the number of habits to the ones I’m really interested in?

With the writing it does sometimes feel like I’m just writing for the sake of it but it is a good habit to get into as it forces me to explain what I’m working on and get ideas from other people so I’m going to keep doing it.

I’ve enjoyed my experience with ‘mini habits’ so far although I think I’d be better off focusing on fewer habits so that there’s still enough time in the day to read/learn random spontaneous stuff that doesn’t fit into these habits.

Categories: Blogs

State of Scrum Survey

Notes from a Tool User - Mark Levison - Mon, 03/16/2015 - 21:45
State of Scrum Survey 2015

How are you using (or not using) Scrum in your organization and projects?

That’s what the Scrum Alliance, ProjectManagement.com and ProjectsAtWork want to find out with their annual “State of Scrum” survey.

Who is using Scrum, how are they using it, and why are they using it… and if they’re not, why not?

Please take a few moments to complete the survey. Two participants will win a $500 Amazon.com gift card. The results will be compiled and presented later this year, and all respondents who request the report will receive a copy.

Your answers will be strictly confidential.

 

Categories: Blogs

From Agile Hangover to Antifragile Organisations

Scrum Expert - Mon, 03/16/2015 - 19:37
Many organisations have been swept up in agile process adoption, with good reason! The Agile Party is coming to a close and many organisations are now beginning to look at where they are. They have come to the disheartening realisation that, rather than being in a new world of embracing change and competitiveness, they have a lot of new processes, not much to show for them, and people disillusioned enough to begin reverting to older, familiar ways… This is the unfortunate age of the ‘agile hangover’. In this talk Russ ...
Categories: Communities

A Retake on the Agile Manifesto

TV Agile - Mon, 03/16/2015 - 18:49
The Agile Manifesto was the spark that brought about a shift in how software was being developed and as a result a wave of new Agile Methodologies such as SCRUM, XP, and Continuous Delivery have been introduced as “better ways of developing software”. Many development organizations have adopted these agile methodologies to improve their communication, […]
Categories: Blogs
