Our team has been hard at work building the new content for the 4.0 release this summer. Below is a sneak preview of the latest rendering of the new Big Picture, followed by key highlights of potential new changes:
- Adding the immutable Lean-Agile principles on which SAFe is based; they will be a key element on the Big Picture
- Updating the House of Lean guidance and terminology. Adding a new pillar for Innovation to reflect its critical role in today’s business world.
- Adding a new Lean Systems Engineering (LSE) value stream to illustrate the integration with the new SAFe LSE framework and to show that a SAFe Portfolio can have Value Streams for both Software Systems and Cyber-Physical Systems
- Adding the “Customer” to the Big Picture. Customers are the reason why Value Streams exist and are the ultimate economic buyer of the subject solution
- Introducing the concept of the Enterprise Portfolio to govern multiple instances of SAFe
- Adding Software Capitalization guidance to the framework
- Adding an “ART Kanban” to make Feature WIP visible and to improve program execution, increase alignment and transparency
- Renaming the majority of icons from “Program” to “ART” (Agile Release Train) to improve consistency of terminology and emphasize the release train concept (e.g. ART Epics, ART Backlog, ART PI Objectives, etc.)
- Renaming Release Planning to PI Planning to further clarify the separation of concerns between Developing on Cadence and Releasing on Demand.
- Agile Teams in SAFe will have the choice to use ScrumXP and/or Kanban to manage the flow of work. Kanban is particularly useful for managing WIP when the work of the team does not have a predictable arrival rate (e.g. maintenance work, work of System Team and DevOps, etc.). You can learn more about using Kanban in SAFe now in the current Guidance article.
We are excited about the introduction of these new improvements to the Scaled Agile Framework. SAFe is an evolving work in process, capturing current best practices for implementing Lean-Agile practices at scale. Of course, that means you sometimes have to change things you’ve decided in the past. That’s called learning.
We are targeting release 4.0 of the framework around August, and we are looking forward to your feedback and participation in improving the framework. We’ll also be supporting our users of V3.0 for a year after this next release.
Our team would like to thank our Customers, SPCs/ SPCTs, SAFe Community, and SAFe partners who help us relentlessly improve the framework. Keep the feedback coming!
“If a thing is worth doing, it’s worth doing well”
As you may know, tinyPM integrates with SCM tools such as GitHub, Beanstalk, Bitbucket, and Stash. Thanks to this integration you can group all commits in a logical structure that corresponds to your backlog structure in tinyPM.
This is genuinely useful, because you can easily access not only a particular user story, but also its full development history.
And indeed, it’s all about communication. Almost all software projects are collaborative team projects. A well-written commit message is critical for communicating the context of a change to team members. Moreover, you can later check and understand that context without digging it up again and again. This saves time and resources.
Now, this is not just a developer’s or project manager’s dream of having everything in the right order and place. Let’s stop dreaming – it can all be done properly, here and now!
There is only one condition – the whole team needs to be willing to write great commit messages. And now it’s time to show you how to do that.
Here are 3 rules to follow:
1. KISS – Keep It Simple Stupid
A good commit message should be concise and consistent.
If the change is trivial and needs no explanation, a single line is sufficient as a commit message.
It should be written in the imperative mood, like “Fix a typo”, so that it completes the sentence:
If applied, this commit will fix a typo.
If a commit is more complicated, we need to write a subject line (in the imperative mood) and provide an explanation in the body.
Within this explanation, we should focus on describing why the change was made and how it benefits the project.
It’s very important that a commit refers to one logical change at a time. What does that mean? It’s not correct to write a message like: “fix the filtering bug, the searching bug, and add attachments to the user story”. Here we should create three separate commits:
fix filtering bug
fix searching bug
add attachments to user story
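To make this concrete, here is how those three separate commits might look on the command line. This is a throwaway demonstration repository with empty commits (the --allow-empty flag), used only to show the resulting history:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com
# One logical change per commit, each with an imperative subject line.
git commit -q --allow-empty -m "fix filtering bug"
git commit -q --allow-empty -m "fix searching bug"
git commit -q --allow-empty -m "add attachments to user story"
git log --format=%s
```

Anyone reading this log later can see at a glance which change did what, and revert or cherry-pick any one of them independently.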
2. Use three magic questions to provide sufficient context
Peter Hutterer wrote that a good commit message should answer three fundamental questions:
- Why is it necessary?
Concise description of the reason why a commit needs to be done. What does it fix? What feature does it add? How does it improve performance or reliability?
- How does it address the issue?
This should be a high-level description of the approach taken. For trivial patches this part can be omitted.
- What effects does the patch have?
(In addition to the obvious ones, this may include benchmarks, side effects, etc.)
These three questions will help you to provide all necessary information and context for future reference.
You can easily link commits to tasks or user stories in tinyPM. Simply include the user story id (e.g. #512) in the commit message and the commit will show up on that particular story card:
Add a list of recommended products #512
3. Do not use excuses!
We all know that sticking to rules can be a pain in the neck. However, it is often necessary in order to achieve certain goals. Our goal is to write great commit messages; therefore, we need to adhere to these rules.
And yes, there will be plenty of excuses: “but it works”, “we didn’t have time”, “I’m the only one working on this”, and a dozen others. Finding excuses is far easier than being consistent. So we need to keep the big picture in mind – think how much we stand to gain.
Just remember that good programmers can be recognized by their legacy. A good commit history helps others start work on a project readily, without struggling to figure it all out.
This should become a habit that constantly benefits the whole team in terms of productivity and efficiency, as well as a sense of satisfaction.
The last word belongs to Chris Beams: “(…) just think how much time the author is saving fellow and future committers by taking the time to provide this context here and now.”
Here’s the March 2015 edition of the LeanKit monthly newsletter. Make sure you catch the next issue in your inbox and subscribe today. Kanban for DevOps: 3 Reasons IT Ops Uses Lean Flow (part 2 of 3) A top frustration for operations teams is the context switching caused by conflicting priorities. In part two of this three-part […]
Why on earth do I need to spend so much of my time in a meeting? This is an absolutely sane question that most of the team members wind up asking at some point in time while I am coaching an organization towards more adaptive management techniques.
Regardless of the role, there are other things beyond meetings that we have traditionally declared to be a productive use of time. If you are a developer, then we declare productivity to be associated with time spent writing software. If you are a product manager, then we declare productivity to be associated with time spent defining the next version of a product or understanding the market’s demands. Whatever the role, it is rare for an organization or a profession to associate meeting time with high productivity.
From this perspective, it makes a ton of sense when people ask the question:
Why on earth do I need to spend so much of my time in a meeting?
Here’s my usual answer:
What defines a productive minute: is it one spent focusing on your craft, or one spent delivering value to the organization as quickly as possible?
I tend to think that a productive minute is one spent delivering value to the organization as quickly as possible. So while time spent practicing a craft is absolutely a critical part of getting value to the organization, it is wasted if the individual is not hyper-focused on the actual needs of the organization. And this is where meetings come into the picture.
Effective meetings will have a specific theme and will enable a team to establish high clarity around the needs of the organization and teach accountability. For most of the teams that I coach this involves a few specific themes:
(1) Daily Standup – This is a quick touchpoint that is oriented around maintaining accountability within a team as each member takes a minute to update the other team members about the progress made over the past 24 hours, progress that they expect to make over the next 24 hours, and any issues or concerns that they need help addressing.
(2) Tactical Meeting – This is an hour or more and has a very specific purpose, dealing with short term tactics such as creating clarity around near term market needs or ensuring that the team is successful in meeting their commitments.
(3) Strategic Meeting – This is usually a half day or more and is focused on creating clarity about how to move the organization forward with a focus on the longer term vision and strategies.
What’s your take – are meetings useful in your organization? Do your meetings have specific themes, or are they a mishmash of agenda topics?
With advancements coming soon to LeanKit analytics, we’re looking to learn more about your reporting interests. Do you want more Lean, Scrum, SAFe, or general project management reports? Do you want to know more about individuals, teams, projects, or the entire portfolio? Use the survey below to let us know which reports you use today — whether […]
We are excited to be sponsoring a $500 cash prize for the best Pivotal Tracker post submitted in AirPair’s $100K developer writing competition!
AirPair has released cool features that allow authors and readers to collaborate on posts, just like normal code via forks and pull requests! Over the next 10 weeks, you can win your share of $100,000 in prize money for the best tutorials, opinion pieces, and tales of using Pivotal Tracker in production.
Have you used Pivotal Tracker in a way you are particularly proud of? Have you learned something you feel others would benefit from? How have you integrated it with other APIs to get the job done? The average post published on AirPair in January was read 15,000 times, so it’s a great way to share the cool things you’ve made with fellow developers.
Click here to submit your posts before May 30.
As technologists we want to build software that is friendly, fast, beautiful, reliable, secure, and scalable. And we expect ourselves to deliver it on time and under budget, because our ultimate goal is to have lots of happy customers who can do what they want: cue Daft Punk’s Technologic!
But time and energy are finite, and we simply cannot deliver it all at once. We need to choose our priorities, and this choice is one we should make consciously.
The set of choices we face when weighing software development priorities against constraints is known as the tradeoff space.
How can you make wise tradeoffs for your product?
The choice is based on a balance between your technology stack and business model type.
“Move fast and break things!”
While this has become a popular motto, it doesn’t apply to every company.
For example, enterprise software companies building system-level software prioritize reliability, because customers depend on it. Each change needs to be rigorously tested, and often approved, before it can be released.
Meanwhile, consumer internet companies spend time and money on making their UX delightful so that people want to use them. Reliability is something they’re willing to sacrifice. Since many are web-based applications, they can iterate quickly and release changes frequently.
So yes, they can move fast and break things.
The tradeoff space may seem insurmountable, but you too can become confident about your decisions by learning from a true pro!
In the second episode of Femgineer TV, I’ve invited Jocelyn Goldfein, the Former Director of Engineering at Facebook, to talk about:
- What the tradeoff space is
- How to not get overwhelmed by the tradeoff space
- How to make decisions that will help you ship product that your customers will love and help you meet business goals
Jocelyn has led engineering teams at early- to growth-stage startups like VMware and enterprise companies like Trilogy, so she’s definitely had her fair share of dealing with constraints and making tradeoffs to ship product and meet business goals.
We also dig into the cost of a mistake, how to take risks, the BIGGEST mistake Jocelyn sees technical folks making over and over again, and how to avoid making it!
Watch the episode to learn how you can make smart tradeoffs when developing software products.
After you’ve watched the episode, take our challenge. Let us know in the blog comments below:
- What was the last tradeoff you had to make?
- What was the cost of the mistake?
- How did you or your company feel about taking the risk?
The 3 BEST responses will receive a special giveaway from our sponsor Pivotal Tracker and be showcased in Femgineer’s weekly newsletter!
Submit your responses in the blog comments below by March 19th at 11:59pm PST.
The next episode of Femgineer TV airs in April. I’ve invited Ryan Hoover and Erik Torenberg, the founders of Product Hunt, to talk about: How to Build a Community of Evangelists for Your Software Product. Subscribe to our YouTube channel to know when it’s out!
Microservices are the latest architectural style, promising to resolve all the issues we had with previous architectural styles. And just like the styles before it, it has its own challenges. The challenge discussed in this blog is how to realise coupling between microservices while keeping the services as autonomous as possible. Four options will be described, and a clear winner will be selected in the conclusion.
To me, microservices are autonomous services that take full responsibility for one business capability. Full responsibility includes presentation, API, data storage and business logic. Autonomous is the keyword for me: by making the services autonomous, they can be changed with no or minimal impact on the others. If services are autonomous, then operational issues in one service should have no impact on the functionality of other services. That all sounds like a good idea, but services will never be fully isolated islands. A service is virtually always dependent on data provided by another service. For example, imagine a shopping cart microservice as part of a web shop: some other service must put items in the shopping cart, and the shopping cart contents must be provided to yet other services to complete the order and get it shipped. The question now is how to realise these couplings while keeping maximum autonomy. The goal of this blog post is to explain which pattern should be followed to couple microservices while retaining maximum autonomy.
I'm going to structure the patterns along two dimensions: the interaction pattern and the information exchanged using this pattern.
Interaction pattern: Request-Reply vs. Publish-Subscribe.
- Request-Reply means that one service makes a specific request for information (or for some action to be taken). It then expects a response. The requesting service therefore needs to know what to ask and where to ask it. This could still be implemented asynchronously, and of course you could put some abstraction in place so that the requesting service does not have to know the physical address of the other service; the point remains that one service is explicitly asking for specific information (or an action to be taken) and functionally waiting for a response.
- Publish-Subscribe: with this pattern a service registers itself as being interested in certain information, or being able to handle certain requests. The relevant information or requests will then be delivered to it and it can decide what to do with it. In this post we'll assume that there is some kind of middleware in place to take care of delivery of the published messages to the subscribed services.
Information exchanged: Events vs. Queries/Commands
- Events are facts that cannot be argued about. For example, an order with number 123 is created. Events only state what has happened. They do not describe what should happen as a consequence of such an event.
- Queries/Commands: Both convey what should happen. Queries are a specific request for information, commands are a specific request to the receiving service to take some action.
Putting these two dimensions in a matrix results in four options for realising couplings between microservices. So what are the advantages and disadvantages of each option? And which one is best for reaching maximum autonomy?
In the description below we'll use two services to illustrate each pattern: the Order service, which is responsible for managing orders, and the Shipping service, which is responsible for shipping stuff, for example the items included in an order. Services like these could be part of a webshop, which could then also contain services like a shopping cart, a product (search) service, etc.

1. Request-Reply with Events
In this pattern one service asks a specific other service for the events that took place (since the last time it asked). This implies a strong dependency between the two services: the Shipping service must know which service to connect to for events related to orders. There is also a runtime dependency, since the Shipping service will only be able to ship new orders if the Order service is available.
Since the Shipping service only receives events, it has to decide by itself when an order may be shipped, based on the information in these events. The Order service does not have to know anything about shipping; it simply provides events stating what happened to orders and leaves the responsibility to act on these events fully to the services requesting them.

2. Request-Reply with Commands/Queries
In this pattern the Order service requests the Shipping service to ship an order. This implies strong coupling, since the Order service is explicitly requesting a specific service to take care of the shipping, and now the Order service must determine when an order is ready to be shipped. It is aware of the existence of a Shipping service and it even knows how to interact with it. If factors not related to the order itself should be taken into account before shipping (e.g. the credit status of the customer), then the Order service has to take those into account before requesting the shipment. Now the business process is baked into the architecture, and therefore the architecture cannot be changed easily.
Again there is a runtime dependency, since the Order service must ensure that the shipping request is successfully delivered to the Shipping service.

3. Publish-Subscribe with Events
In Publish-Subscribe with Events, the Shipping service registers itself as being interested in events related to orders. After registering, it will receive all events related to orders without being aware of their source; it is loosely coupled to the source of the order events. The Shipping service will need to retain a copy of the data received in the events so that it can conclude when an order is ready to be shipped. The Order service needs no knowledge about shipping. If multiple services provide order-related events containing data relevant to the Shipping service, this is not recognisable by the Shipping service. If one of the services providing order events is down, the Shipping service will not be aware of it; it just receives fewer events and will not be blocked.

4. Publish-Subscribe with Commands/Queries
In Publish-Subscribe with Commands/Queries, the Shipping service registers itself as a service that is able to ship stuff. It then receives all commands that want to get something shipped. The Shipping service does not have to be aware of the source of the shipping commands, and on the flip side the Order service is not aware of which service will take care of shipping. In that sense they are loosely coupled. However, the Order service is aware that orders must get shipped, since it is sending out a ship command; this does make the coupling stronger.

Conclusion
Now that we have described the four options, we come back to the original question: which of the four patterns provides maximum autonomy?
Both Request-Reply patterns imply a runtime coupling between two services, and that implies strong coupling. Both Commands/Queries patterns imply that one service is aware of what another service should do (in the examples above, the Order service is aware that another service takes care of shipping), and that also implies strong coupling, this time on a functional level. That leaves one option: 3. Publish-Subscribe with Events. In this case the services are not aware of each other's existence, from both a runtime and a functional perspective. To me this is the clear winner for achieving maximum autonomy between services.
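As a sketch of what this winning pattern looks like in code, here is a hypothetical in-process event bus in Python (standing in for real pub-sub middleware; all class and event names are illustrative). The Order and Shipping services share only the event name, not any knowledge of each other:

```python
from collections import defaultdict


class EventBus:
    """Minimal stand-in for publish-subscribe middleware."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every registered subscriber.
        for handler in self._subscribers[event_type]:
            handler(payload)


class ShippingService:
    """Keeps its own copy of order data and decides by itself when to ship."""

    def __init__(self, bus):
        self.shipped = []
        bus.subscribe("OrderPaid", self.on_order_paid)

    def on_order_paid(self, event):
        # The Shipping service concludes on its own that the order is ready.
        self.shipped.append(event["order_id"])


class OrderService:
    """Publishes facts about orders; knows nothing about shipping."""

    def __init__(self, bus):
        self.bus = bus

    def mark_paid(self, order_id):
        self.bus.publish("OrderPaid", {"order_id": order_id})


bus = EventBus()
shipping = ShippingService(bus)
orders = OrderService(bus)
orders.mark_paid(123)
print(shipping.shipped)  # [123]
```

Note that the Shipping service could be replaced, duplicated, or taken offline without any change to the Order service – the autonomy argued for above.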
The next question pops up immediately: should you always couple services using Publish-Subscribe with Events? If your only concern is maximum autonomy of services, the answer would be yes, but there are more factors to take into account. Always coupling using this pattern comes at a price: data is replicated, measures must be taken to deal with lost events, event-driven architectures add extra requirements on infrastructure, there might be extra latency, and more. In a next post I'll dive into these trade-offs and put things into perspective. For now, remember that Publish-Subscribe with Events is a good basis for achieving autonomy of services.
Every time I work with a client or teach a workshop, people want more ways to visualize their project portfolios. Here are some ideas:
Here is a kanban view of the project portfolio with a backlog:
And a kanban view of the project portfolio with an “Unstaffed Work” line, so it’s clear which work is not staffed:
If you haven’t read Visualizing All the Work in Your Project Portfolio, you should. It has some other options, too.
I have yet more options in Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects.
It’s the curse of doing large things – the constant questioning from other people and even yourself, wanting to know if you’re done yet. I hate it. It makes me mad. I want to scream, “NO! Can’t you see I’m still working?! Go away and I’ll tell you when I’m done!”
Now, compare that to the happiness of a recent conversation I had:
- Client: “Hey, looks like you’ve been making some great progress today! I see a lot of things checked off!”
- Me: “Heh – not really. Just another day of work. Except I broke the tickets down into smaller things.”
- Client: “Great! Keep up the great work – I’m glad to see you’re getting so much done!”
So, what’s the difference here? It’s not the client… I’ve had the same client for well over a year now, and I’ve had more than one instance of wanting to yell about when I’ll be done. The difference is in how I broke down the big things I was working on. I made myself look good by having many smaller things to do and showing that I was getting them done.

Check Check Check
When you look at a task board, issue list, ticket system or any other place where you keep a list of things to do, it can be overwhelming to see that One Giant Thing To Do. It’s a monumental task that scares you when you think about it, and makes you want to crawl under your desk and hide.
Like so many other tasks in our lives, though, it becomes much more manageable when we break that One Giant Thing To Do down into smaller things to do. Suddenly that giant thing seems like it may actually be possible, because you can see that you’ve made progress. You’ve moved tickets across your task board, checked them off, or done whatever it is you do to say these small things are done.

Happy++
As an added bonus to getting many small things done every day, you’ll find your own satisfaction increasing. When you can look back at your list of things to do and see that you got 15 things done today, instead of looking at that One Giant Thing To Do that has been on your list all month, you will be much happier.
Getting things done makes us, as people who do things, happy. It also makes our client / boss / team / customers / etc. happy. When the people for whom we are building things can see the progress we are making (even if they don’t understand that progress), they know that we are working and are going to get it done eventually.

Break It Down, Now
The next time you set out to conquer that One Giant Thing To Do, take a few moments and break it down into smaller things.
When you’ve got a rough idea of the smaller things, get started on one of them. Break it apart and break it down further when you see the need.
The perception of productivity will greatly improve your outlook on the One Giant Thing To Do.
The Android development tools project has seen big changes over the last year. The original Eclipse ADT development environment was superseded late last year by Android Studio — a new IDE based on IntelliJ. Under the hood, Android Studio also uses a new command-line build system based on Gradle, replacing the previous Ant-based system. I’ve been keen to find out how these changes impact the integration of Android test reports with continuous integration servers like Pulse.

Summary
- Android JUnit Report is redundant.
- Run on-device Android tests with: ./gradlew connectedAndroidTest
- Collect reports from: app/build/outputs/androidTest-results/connected/*.xml
The original Ant-based build system for Android didn’t produce XML test reports for instrumentation tests (i.e. those that run on-device), prompting me to create the Android JUnit Report project. Android JUnit Report produced XML output similar to the Ant JUnit task, making it compatible with most continuous integration servers. The good news is: Android JUnit Report is now redundant. The new Gradle-based build system produces sane XML test reports out of the box. In fact, they’re even more complete than those produced by Android JUnit Report, so should work with even more continuous integration servers.
The only downside is the documentation, which is a little confusing (there are still documents for the old system floating about) and not very detailed. With a bit of experimentation and poking around I found out how to run on-device (or emulator) tests and where the XML reports are stored. With a default project layout as created by Android Studio:
ASDemo.iml
app/
    app.iml
    build.gradle
    libs/
    proguard-rules.pro
    src/
        androidTest/
        main/
build.gradle
gradle
gradle.properties
gradlew
gradlew.bat
local.properties
settings.gradle
You get a built-in version of Gradle to use for building your project, launched via gradlew. To see available tasks, run:
$ ./gradlew tasks
(This will download a bunch of dependencies when first run.) Amongst plenty of output, take a look at the Verification Tasks section:
Verification tasks
------------------
check - Runs all checks.
connectedAndroidTest - Installs and runs the tests for Debug build on connected devices.
connectedCheck - Runs all device checks on currently connected devices.
deviceCheck - Runs all device checks using Device Providers and Test Servers.
lint - Runs lint on all variants.
lintDebug - Runs lint on the Debug build.
lintRelease - Runs lint on the Release build.
test - Run all unit tests.
testDebug - Run unit tests for the Debug build.
testRelease - Run unit tests for the Release build.
The main testing target test does not run on-device tests, only unit tests that run locally. For on-device tests you use the connectedAndroidTest task. Try it:
$ ./gradlew connectedAndroidTest
...
:app:compileDebugAndroidTestJava
:app:preDexDebugAndroidTest
:app:dexDebugAndroidTest
:app:processDebugAndroidTestJavaRes UP-TO-DATE
:app:packageDebugAndroidTest
:app:assembleDebugAndroidTest
:app:connectedAndroidTest
:app:connectedCheck

BUILD SUCCESSFUL

Total time: 33.372 secs
It’s not obvious, but this produces compatible XML reports under app/build/outputs/androidTest-results/connected/, with names based on the application module and device. In your continuous integration setup you can just collect all *.xml files in this directory for reporting.
Although the new build system has killed the need for my little Android JUnit Report project, this is a welcome development. Now all Android developers get better test reporting without an external dependency. Perhaps it will even encourage a few more people to use continuous integration servers like Pulse to keep close tabs on their tests!
Using the basic Dockerfile syntax it is quite easy to create a fully functional Docker image. But if you just keep adding commands to the Dockerfile, the resulting image can become unnecessarily big, which makes it harder to move around.
A few basic practices can reduce the size significantly.
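As a sketch of one such practice (assuming a Debian-based image; the package names are purely illustrative): chain related commands into a single RUN instruction and clean up package caches in that same instruction, so the intermediate files never end up baked into a layer of the image.

```dockerfile
# Hypothetical example: install and clean up in one RUN instruction.
# Splitting these into separate RUN lines would persist the apt cache
# in an earlier layer, inflating the final image.
FROM debian:stable-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```

Because each Dockerfile instruction creates its own layer, files deleted in a later instruction still occupy space in the earlier layer they were written to; deleting them within the same RUN is what actually keeps the image small.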
I’ve been running a few intro-to-Neo4j training sessions recently using Neo4j 2.2.0 RC1, and at some stage in every session somebody makes a typo when writing out one of the example queries.
For example, one of the queries that we write about half way through finds the actors and directors who have worked together and aggregates the movies they were in.
This is the correct query:
MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED]-(director) RETURN actor.name, director.name, COLLECT(movie.title) AS movies ORDER BY LENGTH(movies) DESC LIMIT 5
which should yield the following results:
==> +-----------------------------------------------------------------------------------------------------------------------+
==> | actor.name           | director.name    | movies                                                                       |
==> +-----------------------------------------------------------------------------------------------------------------------+
==> | "Hugo Weaving"       | "Andy Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Hugo Weaving"       | "Lana Wachowski" | ["Cloud Atlas","The Matrix Revolutions","The Matrix Reloaded","The Matrix"] |
==> | "Laurence Fishburne" | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Keanu Reeves"       | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> | "Carrie-Anne Moss"   | "Lana Wachowski" | ["The Matrix Revolutions","The Matrix Reloaded","The Matrix"]               |
==> +-----------------------------------------------------------------------------------------------------------------------+
However, a common typo is to write ‘DIRECTED_IN’ instead of ‘DIRECTED’ in which case we’ll see no results:
MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director) RETURN actor.name, director.name, COLLECT(movie.title) AS movies ORDER BY LENGTH(movies) DESC LIMIT 5

==> +-------------------------------------+
==> | actor.name | director.name | movies |
==> +-------------------------------------+
==> +-------------------------------------+
==> 0 row
It’s not immediately obvious why we aren’t seeing any results which can be quite frustrating.
However, in Neo4j 2.2 the ‘EXPLAIN’ keyword has been introduced and we can use this to see what the query planner thinks of the query we want to execute without actually executing it.
Instead, the planner uses the knowledge it has about our schema to come up with the plan it would run and an estimate of how much of the graph that plan would touch:
EXPLAIN MATCH (actor:Person)-[:ACTED_IN]->(movie)<-[:DIRECTED_IN]-(director) RETURN actor.name, director.name, COLLECT(movie.title) AS movies ORDER BY LENGTH(movies) DESC LIMIT 5
The first row of the query plan describes an all-nodes scan, which tells us that the query will start from ‘director’, but it’s the second row that’s interesting.
The estimated rows when expanding the ‘DIRECTED_IN’ relationship is 0 when we’d expect it to at least be a positive value if there were some instances of that relationship in the database.
If we compare this to the plan generated when using the proper ‘DIRECTED’ relationship we can see the difference:
Here we see an estimated 44 rows from expanding the ‘DIRECTED’ relationship so we know there are at least some nodes connected by that relationship type.
In summary, if you find your query not returning anything when you expect it to, prefix it with ‘EXPLAIN’ and make sure you’re not seeing the dreaded ‘0 estimated rows’.
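When a query unexpectedly returns nothing, it can also help to confirm which relationship types actually exist in the graph. A minimal sketch, using standard Cypher (note that it scans every relationship, so it’s a diagnostic query rather than something to run routinely on a large graph):

MATCH ()-[r]->() RETURN DISTINCT TYPE(r) AS relationshipType

If only ‘DIRECTED’ appears in the output, a typo such as ‘DIRECTED_IN’ in your query becomes obvious immediately.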
At Pivotal Tracker, we’re trying to make life better for developers all over the world, one project at a time. Our philosophy is that a good tool helps you do your job and gets out of your way, allowing you to focus on what’s important. We’re looking for a few great engineers to join our team to work on improving the greatest agile communication tool around. If this sounds like something you can get behind, read on to learn about life on the Tracker team.
Daily catered breakfasts. Start the day off right with a catered breakfast while you catch up with your team, then hit the ground running—together.
Ping-pong. When you need to get up and stretch your legs and reset your brain, grab a ping-pong paddle and show off your skills (bragging rights included).
Small team. The Pivotal Tracker team is lean and mean, which means you’ll have an immediate impact.
Collaboration is key. We build things as a team, so we make decisions as a team. We believe a highly collaborative approach is part of the DNA of success.
Fun Fridays. At the end of the week, get the weekend started a little early with some head-to-head gaming.
The City. Denver is one of the fastest-growing cities in the country for a reason. Try out one of the innovative, chef-owned restaurants that are popping up everywhere or catch a show in the second-largest performing arts center in the country.
Get outside. With more than 300 days of sunshine a year, Denver is a city of active people. Whether you want to hike a fourteener, explore the trails on a mountain bike in the summer, or hit the slopes in the winter, Colorado has something to inspire you to get off the couch.
Brand-new office. We recently finished building a brand-new office building in the LoHi neighborhood. We have a full coffee bar, a dedicated B-Cycle station, and great patios for when you need a breath of fresh Colorado air.
We pair—all the time. Two heads are better than one, which is why pairing is a core part of our discipline. You’ll ramp up faster and spend less time dealing with roadblocks.
TDD. Good engineers write good code; great engineers write tests. We practice test-driven development as much as possible.
Refactoring. We think that there’s always room to do things better, which is why we encourage refactoring as a regular part of the process.
Regular retros. Our process is just as important as our code, which is why we have regular retros to check in often and make sure everything is running as smoothly as possible.
You got this far, so why not go ahead and apply?!
Does this sound too good to be true? Look, we’re not making this up; come see for yourself! If you think we’d be a good match, apply online now and let’s get to work.
If you were to ask a Scrum Master what they do, a common response is “we protect the team.” In the context Mike Cohn describes, protecting the team from themselves or from an aggressive product owner, I would agree. Protecting the team from complacency or overwork is a worthy endeavor.
For many Scrum Masters, protecting means shielding the team from outside distractions and interferences. These distractions and interferences come in different forms but most of them are from other humans. Here are three I have witnessed and experienced:
- The “trespassers” have lost their voice of influence on a product or project. This may be a senior leader with a history of ownership of a product. As an organization grows, there is a need for them to relinquish control over their product, but this is often a challenge for many senior leaders. They feel the need to strongly interject their opinions on the direction of a product vision or backlog. For the product owner, this leads to a lack of autonomy and a feeling of frustration. For the senior leader, this leads to intruding on the product owner’s territory to get their ideas heard.
- The “uninvited guests” have lost their assignment to direct the team. This is typically a manager with direct reports on the team. Prior to agile, they would be the ones who would assign work to the team and would always know what the team was doing. Status reports often originate from the uninvited guests (who are now looking from the outside in).
- The “requestors” have lost their direct connection to the team. This is typically a business person who, in the past, had the ear of a developer and now must go through the product owner. When something needed to be fixed or tweaked, a quick call to the developer and in just a few minutes the changes were made. This behavior often continues even after a team has assigned a product owner.
Our natural response to these situations is to protect, to shield, and to make life easier for the team by limiting the number of “distractions.” But just how should a Scrum Master handle them?
As an example, when the “trespasser” attempts to influence a product backlog, is a Scrum Master expected to tell the leader to back off? I have found very few who will. Most recognize their performance review, salary, bonus, and reputation are tied to the perception the leader has of them and are not willing to take the risk.
Beyond the personal impact, being in a mode of protecting also:
- Increases isolation. As we continue to deflect people away from the team without creating an avenue for communication and conversation, we are conditioning them to never return. While this may seem like a good thing, this is where silos are born.
- Fosters distrust. When people are isolated it is natural for doubt and suspicion to begin. For leaders, this is typically the time they will feel the need to get involved.
- Solves nothing. Shielding the team will buy some time…until the next time. There is a short-term alleviation of discomfort or inconvenience but the real issues triggering the need to protect won’t go away.
As an alternative to protecting the team, here are a few areas for the Scrum Master and team to focus on to begin transforming into a culture where protection is no longer necessary:
Become a radiating team. I mentioned this in my last blog post. By naturally radiating work progress, the team begins to feel open and welcoming. Nothing feels hidden or mysterious.
Create connection points and conversations. The sprint review is a great place to start. Make this session open to all and facilitate healthy dialog around what was reviewed and the direction of the product. Design other serendipitous occasions for people on the team to interact and engage with stakeholders and leaders.
Focus on co-creating opportunities. When the feeling or sense of protection emerges, use it to seek out ways to build things together. There are advantages to this:
- Co-creation will illuminate lack of trust (and build trust) very quickly. For many organizations, a culture of distrust is just below the surface and is rarely addressed. By co-creating, we can begin to address this painful dysfunction and find ways to rebuild trust where needed.
- Co-creation will amplify the strengths of each participant. When we spend time with each other, we learn how to leverage the best each has to offer.
- Co-creation has transparency built-in. No need for status reports or additional meetings as vested parties have all contributed to the work. The Agile Leadership Engagement Grid walks through an approach for this type of transparency and connection at different levels in the enterprise.
SHARE YOUR THOUGHTS: Are there situations where you feel you must protect your team? Do you have any techniques to welcome interaction and co-creation? Please add your comments below.
Here is a question that just showed up in my inbox regarding how to calculate a scrum team’s velocity when they are doing stabilization sprints. This notion of stabilization sprints has become more popular lately, as they are included in SAFe (Scaled Agile Framework).
Question
We do a 2-week stabilization sprint every 4th sprint where we complete regression testing, etc., but don’t take any new stories. Is there a rule of thumb around including a stabilization sprint in the team’s velocity?
Answer
The purpose of tracking a scrum team’s velocity is to give stakeholders (and the team) predictability into the rate at which they will complete the planned deliverables (the stories). Velocity is the rate of delivery. The stabilization work doesn’t represent specific deliverables that the stakeholders have asked for; it is simply a cost that you are paying every 4th sprint, because you aren’t really done with the stories during the non-stabilization sprints.
You can reduce this cost by having a more robust definition of done. Look at each thing that gets done during stabilization and ask “How could we do that during each sprint, for each story, so that done really means done?” As you move more work out of stabilization and into your definition of done, your predictability gets better because there are fewer surprises to be discovered during stabilization. The amount of stabilization time you need goes down, and you can measure the cost savings in terms of reduced time and effort (which is money). By the way, you can learn more about the definition of done this Wednesday at the Scrum Professionals MeetUp.
Therefore, my recommendation is to not assign points to the stabilization work.
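The arithmetic behind this recommendation can be sketched in a few lines of Python. The sprint numbers and the 1-in-4 stabilization cadence below are hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical sprint history: story points completed per sprint.
# Every 4th sprint is a stabilization sprint that carries no points,
# following the recommendation above.
sprints = [
    {"points": 21, "stabilization": False},
    {"points": 18, "stabilization": False},
    {"points": 24, "stabilization": False},
    {"points": 0,  "stabilization": True},   # regression testing, no new stories
    {"points": 19, "stabilization": False},
    {"points": 23, "stabilization": False},
]

# Velocity: average points per delivery sprint (stabilization sprints
# are excluded because they carry no story points).
delivery = [s["points"] for s in sprints if not s["stabilization"]]
velocity = sum(delivery) / len(delivery)

# For release forecasting, spread the delivered points over ALL sprints,
# so the recurring cost of stabilization is visible in the rate.
effective_velocity = sum(s["points"] for s in sprints) / len(sprints)

print(velocity)            # 21.0 points per delivery sprint
print(effective_velocity)  # 17.5 points per calendar sprint
```

The gap between the two numbers is the measurable cost of stabilization; as more of that work moves into the definition of done, the two rates converge.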
Here are a couple of other posts related to velocity:
- Should Management Use Velocity as a Metric?
- A Scrum Master’s Perspective on Story Point Accounting
- Story Point Accounting Across Sprints
- Appeal to ignorance – Thinking a claim is true (or false) because it can’t be proven true (or false).
- Ad hominem – Making a personal attack against the person saying the argument, rather than directly addressing the issue.
- Strawman fallacy – Misrepresenting or exaggerating another person’s argument to make it easier to attack.
- Bandwagon fallacy – Thinking an argument must be true because it’s popular.
- Naturalistic fallacy – Believing something is good or beneficial just because it’s natural.
- Cherry picking – Only choosing a few examples that support your argument, rather than looking at the full picture.
- False dilemma – Thinking there are only two possibilities when there may be other alternatives you haven’t considered.
- Begging the question – Making an argument that something is true by repeating the same thing in different words.
- Appeal to tradition – Believing something is right just because it’s been done that way for a really long time.
- Appeal to emotions – Trying to persuade someone by manipulating their emotions – such as fear, anger, or ridicule – rather than making a rational case.
- Shifting the burden of proof – Thinking that, instead of proving your claim is true, the other person has to prove it’s false.
- Appeal to authority – Believing that just because an authority or “expert” believes something, it must be true.
- Red herring – When you change the subject to a topic that’s easier to attack.
- Slippery slope – Taking an argument to an exaggerated extreme. “If we let A happen, then Z will happen.”
- Correlation proves causation – Believing that just because two things happen at the same time, one must have caused the other.
- Anecdotal evidence – Thinking that just because something applies to you, it must be true for most people.
- Equivocation – Using two different meanings of a word to prove your argument.
- Non sequitur – Implying a logical connection between two things that doesn’t exist. “It doesn’t follow…”
- Ecological fallacy – Making an assumption about a specific person based on general tendencies within a group they belong to.
- Fallacy fallacy – Thinking just because a claim follows a logical fallacy that it must be false.
Faulty thinking is part of life. We’re not perfect, nor do we think perfectly. It is, however, helpful to identify faulty thinking in our own mental processes. Sometimes, merely being aware of how we think can help us stay away from potential pitfalls in our logic.
It also helps to be aware when people use logical fallacies, especially to ‘rationalize’ their thinking. Don’t be afraid to call it out for what it is. Getting people together to collaborate can be a challenge in itself; candor, honesty, and arriving at a shared understanding are crucial for any decision-making process.
Be a head above. Bring people together when making decisions; just make sure we aren’t dealing with dissonance in irrational ways…