
Companies

Xebia IT Architects Innovation Day

Xebia Blog - Sat, 08/23/2014 - 18:51

Friday August 22nd was Xebia’s first Innovation Day. We spent a full day experimenting with technology. I helped organize the day for XITA, Xebia’s IT Architects department (Hmm. Department doesn’t feel quite right to describe what we are, but anyway). Innovation days are intended to inspire as well as educate. We split up into small teams, each focusing on a particular technology. Below is a list of project teams:

• Docker-izing enterprise software
• Run a web application highly available across multiple CoreOS nodes using Kubernetes
• Application architecture (team 1)
• Application architecture (team 2)
• Replace Puppet with Salt
• Scale "infinitely" with Apache Mesos

In the coming weeks we will publish what we learned in separate blogs.

First Xebia Innovation Day

Categories: Companies

You shall not pass – Control your code quality gates with a wizard – Part III

Danube - Fri, 08/22/2014 - 13:25
You shall not pass – Control your code quality gates with a wizard – Part III

If you read the previous blog post in this series, you should already have a pretty good understanding of how to design your own quality gates with our wizard. When you finish reading this one, you can call yourself a wizard too. We will design a very powerful policy consisting of quite complex quality gates. All steps are first performed within the graphical quality gate wizard. For those of you who are interested in what is going on under the hood, we will also show the corresponding snippets of the XML document generated by the wizard. You can safely ignore those details if you do not intend to develop your own tooling around our quality gate enforcing backend. If you are toying with that idea though, we will also show you how to deploy quality gates specified in our declarative language without using our graphical wizard.

Your reward – The Power Example

[Screenshot: power example with six quality gates]

Before we reveal the last secrets of our wizard and the submit rule evaluation algorithm, you probably would like to know the reward for joining us. The policy we are going to design consists of the following steps:

1. At least one user has to give Code-Review +2; authors cannot approve their own commits (their votes will be ignored)

2. Code-Review -2 blocks submit

3. Verified -1 blocks submit

4. At least two CI users (belonging to Gerrit group CI-Role) have to give Verified +1 before a change can be submitted

5. Only team leads (a list of Gerrit users) can submit

6. If a file called COPYRIGHT is changed within a commit, a Gerrit group called Legal has to approve (Code-Review +2) the Gerrit change

The final policy can be downloaded from here. Please note that it will not work out of the box for you, as your technical group ids for the Legal and CI groups as well as the concrete user names for team leads will differ. We will guide you step by step to a result that fits your specific situation.

Starting with something known – Gerrit’s Default Submit Policy


Looking at steps 1, 2 and 3, you probably realized that they are quite similar to Gerrit’s Default Submit policy. Because of that, let’s start by loading the template Default Gerrit Submit Policy. Once you see the first tab of the editor that opens, adjust name and description as shown in the screenshot below.

[Screenshot]

If you now switch to the Source tab (the third one), you can see the XML the wizard generated for the default policy:

[Screenshot]

The XML based language you can see here is enforced by our Gerrit Quality Gate backend. We believe that this language is way easier to learn than writing custom Prolog snippets (the default way of customizing Gerrit’s submit behavior). Furthermore, it exposes some features of Gerrit (like user group info) which are not exposed as Prolog facts. Our Quality Gate backend is implemented as a Gerrit plugin that contributes a custom Prolog predicate which in turn parses the XML based language and instructs Gerrit’s Prolog engine accordingly. This amount of detail is probably only relevant to you if you intend to mix your own Prolog snippets with policies generated by our wizard.

The schema describing our language can be found here. Looking at the screenshot above, you can clearly see that the XML top element GerritWorkflow contains all settings of the first tab of our wizard. You have probably spotted the attributes for name, description, enableCodeReview and enableVerification. The latter two store the info whether to present users with the ability to vote on the Code-Review/Verified categories (given appropriate permissions).

The only child elements accepted by the GerritWorkflow element are SubmitRules. You can clearly see the three submit rules of the default policy, which we covered in detail in our second blog post. Let’s examine the first submit rule, named Code-Review+2-And-Verified-To-Submit. If all its voting conditions are satisfied, it will be evaluated to allow, making submit possible if no other rule gets evaluated to block. As this rule has not specified any value for its actionIfNotSatisfied attribute, it will evaluate to ignore if not all of its voting conditions are satisfied. Talking about voting conditions, you can see two VotingCondition child elements. The first one is satisfied if somebody gave Code-Review +2, the second one if somebody gave Verified +1. The second SubmitRule element maps directly to step 2 of our power example (Code-Review -2 blocks submit), the third one directly to step 3 (Verified -1 blocks submit).
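For orientation, here is a hand-written sketch of roughly what this default policy looks like in our XML language. The element and attribute names follow the descriptions above; the namespace prefix on the top element and the exact notation for vote values (for example whether +2 is written as "2" or "+2") are assumptions, so treat this as an illustration rather than the wizard's literal output.

<cn:GerritWorkflow name="..." description="..." enableCodeReview="true" enableVerification="true">
  <cn:SubmitRule actionIfSatisfied="allow" displayName="Code-Review+2-And-Verified-To-Submit">
    <cn:VotingCondition votingCategory="Code-Review" value="2"/>
    <cn:VotingCondition votingCategory="Verified" value="1"/>
  </cn:SubmitRule>
  <cn:SubmitRule actionIfSatisfied="block" displayName="Code-Review-Veto-Blocks-Submit">
    <cn:VotingCondition votingCategory="Code-Review" value="-2"/>
  </cn:SubmitRule>
  <cn:SubmitRule actionIfSatisfied="block" displayName="Verified-Veto-Blocks-Submit">
    <cn:VotingCondition votingCategory="Verified" value="-1"/>
  </cn:SubmitRule>
</cn:GerritWorkflow>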

Ignore author votes by introducing a voting filter


Let’s modify the first submit rule so that it matches the first step of our power example policy:

“ At least one user has to give Code-Review+2 , authors cannot approve their own commits (their votes will be ignored)”

For this, we first switch to the second tab of our wizard (Submit Rules) and double click on the first submit rule. Right after, we double click on the first voting condition (Code-Review) and check the Ignore author votes checkbox in the dialog that opens, see screenshot below.

[Screenshot]

Once we save this change (press Finish in the two dialogs) and switch back to the Source tab, we can see that the XML of the first submit rule has changed:

[Screenshot]

The first VotingCondition element now has a VoteAuthorFilter child element. This one has its ignoreAuthorVotes attribute set to true, which in turn makes sure that only votes of non-authors will be taken into consideration when this voting condition gets evaluated. You will also notice the ignoreNonAuthorVotes attribute. With that one, it would be possible to turn the condition around (if set to true) and ignore all but the author’s votes. If both attributes are set to true, all votes will be ignored. Voting conditions always apply to the latest change set of the Gerrit change in question.
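In XML terms, the modified Code-Review condition now reads roughly like the sketch below (again, the exact vote value notation is an assumption):

<cn:VotingCondition votingCategory="Code-Review" value="2">
  <cn:VoteAuthorFilter ignoreAuthorVotes="true"/>
</cn:VotingCondition>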

Adding a group filter to the verified voting condition


Now that we have realized step 1 of our power example, and steps 2 and 3 can simply be left unmodified from the default policy, let’s focus on step 4:

“At least two CI users (belonging to Gerrit group CI-Role) have to give Verified +1 before a change can be submitted”.

This can be achieved by modifying the second voting condition (Verified) of the first submit rule. This time, we do not ignore Verified votes from authors (we could, by just checking the same box again) but instead add a group and a count filter.

[Screenshot]

As shown in the screenshot above, enter 2 into the Vote Count Min field and add the Gerrit group of your choice that represents your CI users. The wizard allows you to select TeamForge groups, TeamForge project roles and internal Gerrit groups.

If we finish the dialogs and switch back to the Source tab, we can see that the second voting condition of our first submit rule has changed:

[Screenshot]

Two filters appeared, one VoteVoterFilter and one VoteCountingFilter. The first one makes sure that only votes cast by the CI_ROLE (we chose TeamForge project role role1086 here) will be recognized when evaluating the surrounding VotingCondition.

The second filter is a counting filter. Counting and summing filters are applied after all other filters within the same VotingCondition have already been applied. In our case, it will be applied after all votes which

a) do not fit into voting category Verified (votingCategory attribute of parent element)

b) do not have verdict +1 (value attribute of parent element)

c) have not been cast by a user who is part of the CI_ROLE (see paragraph above)

have been filtered out.

After that, our VoteCountingFilter will only match if at least two (minCount attribute) votes are left. If this is not the case, the surrounding VotingCondition will not be satisfied and, as a consequence, its surrounding SubmitRule will not be satisfied either.
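Put together, the Verified voting condition now carries two filter children, roughly like the following sketch. minCount is the attribute described above; the attribute used to reference the CI group on the VoteVoterFilter is an assumption (role1086 is the technical id of our TeamForge project role):

<cn:VotingCondition votingCategory="Verified" value="1">
  <!-- the attribute name used to reference the group is an assumption -->
  <cn:VoteVoterFilter group="role1086"/>
  <cn:VoteCountingFilter minCount="2"/>
</cn:VotingCondition>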

Introducing SubmitRule filters


So far, we have only talked about voting conditions and their child filter elements. Sometimes, you do not want an entire submit rule to be evaluated if a certain condition is not fulfilled. Our second blog post already used a submit rule filter for a rule that should only be evaluated if a commit was targeted for the experimental branch.

Step 5  of our power policy is another example:  “Only team leads (a list of Gerrit users) can submit”

We will add a filter to our first submit rule that will make sure that it only gets evaluated if a team lead looks at the Gerrit change. As we only have three submit rules so far and the first one is the only one which can potentially be evaluated to allow, it is sufficient to add this filter only to the first one. To do that, we switch back to the Submit Rules tab, double click on the first submit rule and click on the Next button in the dialog that opens. After that, you can see four tabs, grouping all available submit rule filters. You probably remember those tabs from the second blog post, where the values for those filters were automatically populated based on the characteristics of an existing Gerrit change (more precisely, its latest change set).

This time, we will manually enter the filter values we need. Let’s switch to the User tab and select the accounts of your team leads. In the screenshot below you can see that we chose the accounts of eszymanski and dsheta as team leads.

[Screenshot]

Once you select your team leads instead (our wizard makes it possible to interactively select any TeamForge user or internal Gerrit account), let’s click on Back and finally adjust the display name of our submit rule to its new meaning: Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit

If we finish the dialog and switch back to the Source tab, you can see that our first submit rule has not only changed its displayName but also got a new child element:

[Screenshot]

The UserFilter element makes sure that the surrounding submit rule will only be evaluated if at least one of its CurrentUser child elements matches the user currently looking at the Gerrit change.
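As a sketch, the first submit rule now starts like this (the attribute identifying each account on the CurrentUser elements is an assumption; the user names are the ones selected above):

<cn:SubmitRule actionIfSatisfied="allow" displayName="Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit">
  <cn:UserFilter>
    <!-- the attribute name identifying the account is an assumption -->
    <cn:CurrentUser username="eszymanski"/>
    <cn:CurrentUser username="dsheta"/>
  </cn:UserFilter>
  <!-- the two voting conditions from the previous steps follow here -->
</cn:SubmitRule>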

If there are multiple submit rule filters, all of them have to match for their surrounding submit rule to be evaluated. You may ask what happens if no submit rule can be evaluated because none of them has matching filters. In that case, submit will be blocked and a corresponding message displayed in Gerrit’s Web UI. The same will happen if you have not defined any submit rule at all. As always, you can test your submit rules directly in the wizard against existing changes before deploying.

Providing guidance to your users with display only rules


Before we design a submit rule for the final step (6), let’s try to remember the submit rule evaluation algorithm and what will happen if a non team lead looks at a Gerrit change with our current policy. Quoting from blog post two:


a) For every submit rule that can be evaluated, figure out whether its voting conditions are satisfied (if a submit rule does not have a voting condition, it is automatically satisfied)

b) If all voting conditions are satisfied for a submit rule, the rule gets evaluated to the action specified in the actionIfSatisfiedField (ignore if no value set), otherwise the rule gets evaluated to the action specified in actionIfNotSatisfied field

c) If any of the evaluated submit rules got evaluated to block, submit will be disabled and the display name of all blocking rules displayed in Gerrit’s UI as reason for this decision

d) If no evaluated submit rule got evaluated to block but at least one to allow, submit will be enabled

e) If all evaluated rules got evaluated to ignore, submit will be disabled and the display names of all potential submit rule candidates displayed

As our first submit rule (Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit) has a submit rule filter which will not match if you are not a team lead, this rule will not be evaluated. This leaves us with submit rules two (Code-Review-Veto-Blocks-Submit) and three (Verified-Veto-Blocks-Submit). Neither of those submit rules has a submit rule filter, so they will always be evaluated. Both rules have one voting condition, checking whether there is any Code-Review -2 or Verified -1 vote. If the corresponding voting condition is satisfied, the surrounding submit rule will be evaluated to block, blocking submit and showing its display name as the reason within Gerrit’s Web UI.

Let’s pretend nobody has vetoed our Gerrit change so far. In that case, all evaluated rules will be evaluated to ignore and the final step (e) of our algorithm will kick in. Submit will be disabled and the display names of all potential submit rule candidates, in other words all evaluated submit rules which can potentially be evaluated to allow, will be shown. In our case, there are no potential submit rule candidates though, as the only submit rule which can potentially evaluate to allow is submit rule one. This submit rule was not evaluated, as its submit rule filter did not match (no team lead was looking at the change). As a result, Gerrit can only show a very generic message why submit is not possible, leaving non team leads confused about what to wait for.

How to give guidance under those circumstances? Should we just modify our algorithm and also display the display names of submit rules that did not get evaluated? Probably not. Imagine you have a secret quality gate for a group called Apophenia who can bypass other quality gates if they commit to the enigma branch and the number of lines added by the commit is 23 (for anybody who does not know what I am talking about, I can really recommend this movie).

The corresponding submit rule would have submit rule filters making sure that the rule only gets evaluated for that particular branch, commit stats and user group. As long as those filters are not matched, the display name of the surrounding submit rule must not be revealed under any circumstances. We are sure you can imagine a more business-like scenario with similar characteristics.

Fortunately, there is a way to guide users under those circumstances: display only rules.

Display only rules are submit rules without any voting conditions and without any submit rule filters. Consequently, they are always evaluated and always satisfied. They do not have any value set for their actionIfSatisfied attribute (which therefore defaults to ignore). Hence, they will never influence whether submit is enabled or not (that’s why they are called display only after all). Their actionIfNotSatisfied attribute is set to allow, which makes them potential submit rule candidates. In other words, their display names will always be shown whenever no other submit rule allows or blocks submit, providing perfect guidance.

In our particular example, we will create a display only rule with display name Team-Lead-To-Submit which will give all non team leads guidance why they cannot submit although nobody vetoed the change.

At this point, we would like to demonstrate another cool feature of the Source tab. It is bidirectional, so you can also modify the XML and your changes will be reflected in the first and second tab of our wizard. Let’s paste our display only rule as one child element of the GerritWorkflow element:

<cn:SubmitRule actionIfNotSatisfied="allow" displayName="Team-Lead-To-Submit"/>

If you switch back to the Submit Rule tab, it should look like this:

[Screenshot]

You probably noticed that this is the first time we used the Not Satisfied Action field, admittedly for a quite exotic use case, namely display only rules. The final step in our power policy will hopefully demonstrate a more common use for this field.

Not Satisfied Action for Exception Driven Rules


Step 6 of our power policy is an example of what we call exception driven rule:

“If a file called COPYRIGHT is changed within a commit, a Gerrit group called Legal has to approve (Code-Review +2) the Gerrit change”

Why exception driven? Well, having somebody from Legal approving a change is not sufficient by itself to enable submit, so having a separate submit rule with actionIfSatisfied set to allow is not the answer. Should we then just add legal approval as voting condition to all submit rules which can potentially enable submit? This is probably not a good idea either. Not every commit has to be approved by legal, only the ones changing the COPYRIGHT file.

Hence the best idea is to keep the existing submit rules unmodified and add a new submit rule which will

I) if evaluated, check whether Legal has approved the change and, if not, block submit (exception driven)

II) only be evaluated if Legal actually has to approve the change (i.e. if the COPYRIGHT file changed)

Let’s tackle I) first by creating a new submit rule (push the Adding Rule Manually button) with display name Legal-To-Approve-Changes-In-Copyright-File and setting Not Satisfied Action to block.

[Screenshot]

If we kept our new submit rule like this, it would not block a single change, as it does not have any voting condition (and hence would always evaluate to satisfied). So let’s add a voting condition that requires a Gerrit group called Legal to give Code-Review +2. The screenshot below shows what this condition should look like. In our case, Legal is a TeamForge user group (group1008).

[Screenshot]

In the current state, all changes which do not satisfy our new voting condition would be blocked.

Implementing II) will make sure we only evaluate this submit rule (and its voting condition) if the corresponding commit changed the COPYRIGHT file. To do that, we have to click on Next, and switch to the Commit Detail tab which contains all submit rule filters which match characteristics of the commit associated with the evaluated change. The only field to fill in is the Commit delta file pattern. Its value has to be set to ^COPYRIGHT as shown in the screenshot below.

[Screenshot]

Why ^COPYRIGHT and not just COPYRIGHT? If a filter name does not end with Pattern, it only matches exact values. If a filter ends with Pattern though, it depends on the field value.

If the field value starts with ^, the field value is treated as a regular expression. ^COPYRIGHT will match any file change list that contains COPYRIGHT somewhere. If the field value does not start with ^, it is treated as an exact value. If we entered just COPYRIGHT, this would have only matched commits where only the COPYRIGHT (and no other file) got changed. Keep this logic in mind whenever you deal with pattern filters. Branch filters and commit message filters are other prominent examples where using a regular expression is probably better than an exact value.

If we finish the dialogs and switch to the Source tab, you can see the XML for our new submit rule:

[Screenshot]

The actionIfNotSatisfied attribute is set to block; we have one submit rule filter (CommitDetailFilter) and one voting condition with a filter (VoteVoterFilter).
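A rough sketch of the complete rule is shown below. The attribute names on the CommitDetailFilter and VoteVoterFilter elements are assumptions derived from the wizard's field labels; group1008 is the technical id of our Legal group:

<cn:SubmitRule actionIfNotSatisfied="block" displayName="Legal-To-Approve-Changes-In-Copyright-File">
  <!-- the attribute name for the delta file pattern is an assumption -->
  <cn:CommitDetailFilter deltaFilePattern="^COPYRIGHT"/>
  <cn:VotingCondition votingCategory="Code-Review" value="2">
    <!-- the attribute name for the group reference is an assumption -->
    <cn:VoteVoterFilter group="group1008"/>
  </cn:VotingCondition>
</cn:SubmitRule>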

Congratulations, you have successfully designed the power policy and can now test and deploy it!

[Screenshot: power example with six quality gates]

Learning more about the XML based quality gate language

Although you have seen quite a bit of our XML based language so far, we fully realize that we have not shown you every single feature. We do not believe this is necessary though, as our graphical wizard supports all features of the language. If you are unsure how a certain filter works, just create one example with the wizard, switch to the Source tab and find out how to do it properly. Our schema is another great resource as it is fully documented and will make sure that you do not come up with any invalid XML document. Last but not least, our wizard ships with many predefined templates. We tried to cover every single feature of the language within those templates.

For those of you who are familiar with Gerrit’s Prolog cookbook, we turned all Prolog examples into our declarative language and were able to cover the entire functionality demonstrated. The results can be found here.

As always, if you have any questions regarding the language, also feel free to drop a comment on this blog.

How to deploy quality gates without the graphical wizard

As explained before, our Quality Gate enforcing plugin ties into Gerrit’s Prolog based mechanism to customize its submit behavior. Gerrit expects the current submit rules in a Prolog file called rules.pl in a special ref called refs/meta/config. The deployment process for rules.pl is explained here.

Whenever our wizard generates a rules.pl file, it makes use of a custom Prolog predicate called cn:workflow/2 which is provided by our Quality Gate enforcing plugin. This predicate has two arguments. The first one takes the XML content as is, the second one will be bound to the body of Gerrit’s submit_rule/1 predicate. In a nutshell, the generated rules.pl looks like this:

submit_rule(Z) :- cn:workflow('<XML document describing your quality gate policy>', Z).

Our wizard does not use any other Prolog predicates. You can use our predicate as part of your own Prolog programs if you decide to come up with your own tooling and generate rules.pl by yourself. While passing the XML content, make sure it does not contain any characters which would break Prolog quoting (no ' characters and no newlines, or XML-encode them). Our graphical wizard takes care of this step.

Final words and Call for Participation

If you made it through the entire third blog post, you can proudly call yourself a wizard too.

Designing quality gates from scratch can be a complex matter. Fortunately, our wizard comes with many predefined templates you can just deploy. In addition, we turned every example from the Prolog cookbook into our format. If you are unsure how to match a certain state of a Gerrit change, just use the built-in functionality of our wizard to turn it into a submit rule and adapt it to your needs. Before you deploy, you can always simulate your quality gates within the wizard. It will follow the submit rule evaluation algorithm step by step and show the evaluation result for every rule. If you do not like our wizard and do not like Prolog either, feel free to use our XML based language independently. This blog post has demonstrated how to do that.

Talking about the XML based language, its specification is Open Source. We encourage you to build your own wizard or other frontends and will happily assist if you have any questions regarding its functionality. Gerrit’s functionality to customize submit behavior is unmatched in the industry. We hope that with our contributions we made it a little easier to tap into it.

Coming up with the wizard, the language and our backend was a team effort. About half a dozen people worked for two months to get to the current state. We would like to know from you whether it is worth investing further in this area. Want to have more examples? Better documentation? A tutorial video? A Web UI based wizard? Performance is not right? Cannot express the rules you would like to express? Want to use the feature with vanilla Gerrit?

Please, spread the word about this new feature and give us feedback!

The post You shall not pass – Control your code quality gates with a wizard – Part III appeared first on blogs.collab.net.

Categories: Companies

New Sprintly Feature: Change Item Type

sprint.ly - scrum software - Thu, 08/21/2014 - 20:49

One of our top requested Sprintly features is “how do I change the item type?” Ever file a defect in Sprintly and realize that it should have been a task? Today we’ve shipped this ever useful feature!

Place an item in edit mode via the gear icon, select the new item type and hit Update. In this example, I changed a defect into a task:

[Screenshot]

You won’t be able to change a Story into another item type at this point. Stories are unique in that they can have sub-items. Tasks, defects and tests cannot have sub-items.

We hope you enjoy this Sprintly product update and, as always, let us know how we can be of help.

Categories: Companies

You shall not pass – Control your code quality gates with a wizard – Part II

Danube - Thu, 08/21/2014 - 14:56
You shall not pass – Control your code quality gates with a wizard – Part II

In the previous blog post you learned how to select, test and deploy predefined quality gates with CollabNet’s code quality gate wizard for Gerrit. Those quality gates will make sure that all conditions regarding code quality and compliance are met before a commit can be merged into your master branch and trigger a pipeline that will eventually promote it into production.

In this blog post we will focus on how you can define quality gates by yourself, using a methodology very close to setting up email filter rules.

Underlying technical concepts of quality gates

Before we jump into the functionality of the wizard, let’s give a short overview of the underlying foundations (I am sure most of the technical folks will appreciate it). TeamForge is using Gerrit (current version 2.8, 2.9 coming soon) as its Git backend. If you are new to Gerrit, we recommend having a look at this brilliant series of free, recorded webinars. The third and final part of this series is the best introduction into Gerrit’s change based workflow I have seen so far.

If you prefer written explanations and pictures over webinars, I can recommend this blog post from my colleague Dharmesh.

When we are talking about commits, we are referring to commits which are controlled by Gerrit’s review workflow, in other words, Gerrit changes. TeamForge comes with multiple review policies for Gerrit repositories which automatically make sure that Gerrit access rights are set up in a way that you cannot directly push to real branches but always have to go through the Gerrit review process (which does not mean you have to involve any manual reviewers or build systems, see below). When we are talking about pushing commits into production, we are referring to Gerrit’s functionality to submit/merge changes to their anticipated target branch, which in turn triggers the CI systems that will be responsible for all further actions. Quality gates are a collection of submit rules which define under which conditions Gerrit allows a change to be submitted. In a vanilla Gerrit, the only way to customize those conditions is to write Prolog programs. CollabNet’s Code Quality Gate Wizard for Gerrit shields its users from the complexity of Gerrit’s Prolog based system (more details in the third part of our blog post series).

The following Prezi shows how our idea was presented at the latest Gerrit User Summit.

You shall always pass – Simple example to start with


Enough with the grey theory, haven’t we mentioned multiple times that setting up quality gates is as easy as setting up email filters? Let’s stick to our word. Inside the quality gate wizard, we have three tabs. The first one allows you to edit the title and description of your policy, test it against Gerrit changes (dry mode) and deploy it to Gerrit.

[Screenshot]

The picture above shows the first tab of the wizard with a very simple policy: Submit is always enabled, in other words “You shall always pass”. This policy makes it possible to push any commit into production; more precisely, any Gerrit change will be submittable immediately. In addition to the title, description, test and deploy elements, you will notice two check boxes: Enable code review and Enable verification. Those checkboxes define whether users looking at a commit will be presented with the ability to give Code-Review/Verified votes on the corresponding Gerrit change. This is independent of their actual permissions. In other words, if a user does not have permissions to cast a Code-Review or Verified vote, then this check box will not magically enable them to. As the Submit is always enabled policy allows commits to be merged unconditionally (i.e. as long as the user submitting the corresponding Gerrit change has Gerrit Submit permissions), there is no need to present users with Code-Review or Verify options, so those check boxes are not checked.

The second tab of the quality gate wizard shows the actual submit rules. Submit rules are the technical details behind every policy you can define in the quality gate wizard. Like an email filter, submit rules have actions. While an email filter allows you to specify whether to delete or move an email into a subfolder, submit rule actions define whether to allow a commit to be merged into production, whether to block merging or not to do anything (ignore).

[Screenshot]

The Submit is always enabled policy has only one submit rule, with its action set to allow if it is satisfied. As there is no condition associated with this rule, it will always be satisfied, so it will always result in allow and hence will always enable Gerrit’s submit action.

You shall never pass – Designing the opposite policy


Let’s have a look at the opposite policy – Submit is never enabled

[Screenshot]

In contrast, the satisfied action is set to block. If a submit rule gets evaluated and results in block, Gerrit’s submit action is disabled; in other words the Gerrit change cannot be submitted, so the corresponding commit cannot be merged into its target branch and hence this code cannot be pushed into production.
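In the XML language covered in the third part of this series, these two one-rule policies boil down to something like the following sketches (the display names are made up for illustration):

<cn:SubmitRule actionIfSatisfied="allow" displayName="Submit-Always-Enabled"/>

<cn:SubmitRule actionIfSatisfied="block" displayName="Submit-Never-Enabled"/>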

You shall pass as usual – Gerrit’s Default Submit Policy

Too simple examples? Let’s move on then and examine Gerrit’s factory settings. CollabNet’s Quality Gate Wizard includes a template called Default Gerrit Submit Policy which simulates Gerrit’s standard submit behavior.


This policy consists of three submit rules:

  1. If somebody gave Code-Review +2 and there is also at least one Verified +1 vote, submit is allowed

  2. Any Code-Review -2 vote blocks submit

  3. Any Verified -1 vote blocks submit

The screenshot below shows the conditions of the first rule in detail (you get to the voting condition dialog if you double click on a submit rule).

[Screenshot]

The algorithm deciding who shall pass

At this point, we have to elaborate a bit on what happens if multiple rules are satisfied. First of all, the order of rules does not play any role; it can be completely ignored. The algorithm used to decide whether submit is enabled or not looks as follows:


a) For every submit rule that can be evaluated, figure out whether its voting conditions are satisfied (if a submit rule does not have a voting condition, it is automatically satisfied)

b) If all voting conditions are satisfied for a submit rule, the rule gets evaluated to the action specified in the actionIfSatisfiedField (ignore if no value set), otherwise the rule gets evaluated to the action specified in actionIfNotSatisfied field

c) If any of the evaluated submit rules got evaluated to block, submit will be disabled and the display name of all blocking rules displayed in Gerrit’s UI as reason for this decision

d) If no evaluated submit rule got evaluated to block but at least one to allow, submit will be enabled

e) If all evaluated rules got evaluated to ignore, submit will be disabled and the display names of all potential submit rule candidates displayed (details below)

Potential candidates explained


Going back to Gerrit’s Default Submit Policy, this means that if rule two (Code-Review-Veto-Blocks-Submit) or three (Verified-Veto-Blocks-Submit) are satisfied, Gerrit will block submit and show their display names as reasons, no matter whether the voting conditions for rule one (Code-Review+2-And-Verified-To-Submit) are satisfied. Only if those two rules are evaluated to ignore (as their actionIfNotSatisfied field is implicitly set to ignore) and rule one is evaluated to allow, submit will be possible.

So what happens if there is no Code-Review or Verified veto but the change has not received both Code-Review +2 and Verified +1 votes yet? In that case we end up with the last step in our algorithm, where you probably stumbled over the term potential submit rule candidates. Potential candidates are all submit rules that got evaluated and have at least one of their actionIfSatisfied and actionIfNotSatisfied fields set to allow. What does this mean exactly? Well, let’s think about a user who is looking at a Gerrit change in that very state.

The screenshot below shows a change like that.

[Screenshot]

There are no hard blockers (rule two and three) but as long as rule one is not satisfied, submit is still not enabled. How would the user looking at the Gerrit change even know what to do in order to enable submit? This is where potential candidates come into play. These are all rules which could potentially evaluate to allow, hence making the change submittable. Their display names are now shown in Gerrit’s UI as a hint for the user what possibilities exist to merge the corresponding commit and push it to production. In our case, that is the display name of rule one (Code-Review+2-And-Verified-To-Submit, see last line of screenshot above).

Turning a Gerrit change into a submit rule

[Screenshot: change-based quality gate]

So far, the examples provided did not add any particular value to what is already working out of the box in a standard Gerrit environment. Let’s switch gears and add a more interesting submit rule: If the target branch of a Gerrit change is called refs/heads/experimental, we will not need a Code-Review+2 anymore. Instead, having at least two Code-Review +1’s will be sufficient, or more precisely, the sum of all Code-Review Votes should be at least 2 in order to allow submit. Verified votes are not needed in this case but the blocking behavior of rules two and three will still apply. Like an email filter rule wizard, our quality gate wizard supports turning existing Gerrit changes into submit rules. We will use that very feature now to turn a Gerrit change for the experimental branch which has two Code-Review +1 votes into a submit rule.

If you press the Add Change-Based Rule button, a wizard similar to the one below appears. We will select the very Gerrit change we were looking at in the screenshot above, as it already has the conditions we want to capture in a submit rule: two Code-Review +1 votes, one Verified +1 vote and target branch experimental.

[Screenshot]

We are only interested in copying the Voting details and Commit and change details. If you would like to design a submit rule that copies Commit stats (like lines modified or deleted within the commit) or User and group details (who voted, who owns the change, which groups those users belong to), you have to check additional check boxes in the wizard. Once you click Finish, a new dialog will open with the generated submit rule.

The generated submit rule will match the Gerrit change it was based upon as closely as possible. In practice, this is probably too narrow, as you will want to match similar (same branch) but different (other files changed, different commit message) Gerrit changes too. Consequently, we now have to remove some voting conditions and filters. First though, we will change the display name of the new rule to Experimental-Branch-Adds-Up-Votes. Then, we will remove all but the last (Code-Review) voting condition, as those are too narrow for our use case. Now, we double click on the last remaining voting condition, clear out all filters and only keep the minSum=2 setting:

[Screenshot]

like in the screenshot above. Once we click Next, we will see some further filters with values populated by our wizard based on the characteristics of our change. If any of those filters is not matched by a Gerrit change, the whole submit rule will not be evaluated, so we have to make sure to only keep the filters which are relevant for our use case. In our example, the only relevant filter is a change detail filter specified in the branch pattern field (see screenshot below). All other filters should be cleared as they are too narrow for our use case.

[Screenshot]

Finally, click on Finish and deploy the new policy into your repository. The end result should look like this:

[Screenshot]
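Expressed in the XML language introduced in the next part of this series, the new rule comes out roughly like the sketch below. The element and attribute names for the change detail filter and the summing filter are assumptions based on the wizard's field labels (branch pattern, minSum); only the branch pattern and minSum=2 are the settings we actually kept:

<cn:SubmitRule actionIfSatisfied="allow" displayName="Experimental-Branch-Adds-Up-Votes">
  <!-- element and attribute names of this filter are assumptions -->
  <cn:ChangeDetailFilter branchPattern="^refs/heads/experimental"/>
  <cn:VotingCondition votingCategory="Code-Review">
    <!-- element name of the summing filter is an assumption -->
    <cn:VoteSummingFilter minSum="2"/>
  </cn:VotingCondition>
</cn:SubmitRule>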

You can use the testing facilities shown in the first blog post if you want to simulate the impact of the policy first. If we now look at the Gerrit change again, it will actually be submittable:

[Screenshot]

Sharing your results

Like your new policy and want to share it with your organization? Good! There are two ways of sharing your results: saving the policy directly in the repository as an XML file, or saving it as an XML file on your hard disk. If you save it within a Git repository, the wizard will automatically create a commit and push it to a special Gerrit reference (refs/meta/config), so your ordinary code and branches inside that repository will not be affected. You may wonder what the difference is between deploying a policy to Gerrit and committing the XML file to refs/meta/config. The policy currently enforced by Gerrit is saved in a file called rules.pl, and the Deploy button in our wizard makes sure this file name is used whenever you deploy. If you want to store a policy within the repository but do not want to enforce it there, just enter any other filename; the wizard will automatically append an .xml suffix to it.

Whether you want to save an XML file to your local file system or store the file within the repository, it all starts with the File -> Save action for your editor (you can use the CTRL+S shortcut or the floppy icon too).

[Screenshot]

When opening the quality gate wizard, you can select to open an XML file from your file system, the currently enforced policy (rules.pl)  or any stored but not enforced policy inside the repository you are currently looking at.

One idea how to centrally manage your quality gate policies is to create a dedicated Git repository where you can save all your policies. This is possible because you can load policies from different repositories of the same TeamForge project as where you deploy them. Please tell us if you see the need for an even more centralized solution.

Summary

In this blog post we demonstrated how you can design simple quality gate policies by yourself. A policy consists of one or more submit rules. When submit rules are evaluated, their voting conditions will be checked. Similar to an email filter rule, you can define actions based on the result of this evaluation, namely allow, block or ignore. You also learned about the algorithm used to determine whether a Gerrit change is submittable based on those actions. Basically, block actions will block submit no matter what, if no block is there, allow actions will make changes submittable and if there are only ignore actions, Gerrit will guide the users by displaying the names of all evaluated submit rules which could potentially still result in an allow action.

If you have a Gerrit change with characteristics that should enable or disable commits to be pushed into production, you can turn it into a submit rule. The wizard will inspect the Gerrit change and set up the submit rule filters and voting conditions as precisely as possible (similar to an email filter rule wizard where you point to an existing email). You will have to clear the filters you are not interested in to make sure that your filter criteria are not too narrow for your use case. Only submit rules which match all specified filters will be evaluated by the algorithm that decides whether to allow or block the change from entering its target branch.

While going through the change based submit rule wizard, you have probably wondered about all the possible filters and conditions you can specify. We encourage you to go through the list of all predefined quality gate policies and learn how those filters are used in detail. The next blog post will also show some further filters in action.

Once you have come up with your own policies, you can share them with your organization by either exporting them into an XML file or saving them in a dedicated Git repository.

Coming up next …

There is one tab in the quality gate wizard we have left out so far: the last tab shows an XML representation of the language used to describe the policy you are currently looking at. In the third blog post we will exploit the full capabilities of that language, designing a really powerful policy, namely

  • Author Must Not Approve own changes (4 Eye Principle)

  • Legal has to approve changes in copyright file

  • Only verified from 2 CI users counts

  • Only team leads can submit

[Screenshot: quality gates enforced in part three of our blog post series]

The power policy is developed step by step, covering all details of our submit rule evaluation algorithm. Finally, we will also explain how to use this language independently from our graphical wizard for your own tooling. If you would like to become a quality gate wizard yourself, read on!

The post You shall not pass – Control your code quality gates with a wizard – Part II appeared first on blogs.collab.net.

Categories: Companies

Happy and Healthy!

Growing Agile - Thu, 08/21/2014 - 14:00

It’s that time of the year again, the time when summer is approaching and I’m tired of being lazy. We all have moments where we are inspired to change. The problem comes in maintaining that energy and inspiration for longer.

So, I am trying something different with my wife. We both want to stop smoking; the coughing in the morning sucks, the smell is horrid. This is something we have tried and failed at NUMEROUS times in the last 3 years. We also want to be healthier. We have, uh, grown a bit round with love.

So we’re trying #HappyAndHealthy .


STEP 1 : Set up your rules. We have 2 rules.

Rule 1 : No smoking cigarettes. The distinction is important. We both have Twisps (electronic cigarettes) now. I am on zero nicotine, but I still need my Twisp because when I am around smoking friends and red wine, the urge to smoke is huge. The Twisp is helping me not have that craving. My wife is still having a bit of nicotine, but a fraction of a cigarette and no tar. She is slowly weaning herself off the nicotine.

Rule 2 : Exercise. This is anything from a walk to a run to a cycle. We don’t have to do it everyday, but it needs to happen more often than not – so 50% or more.

STEP 2: Create a big visible chart.

We set up a piece of flip chart paper with blocks (10 * 10). We specifically didn’t make it a week, so that if we have a few bad days, we don’t automatically decide it’s a bad week and give up till next week. Stick your chart up somewhere where you will see it most of the time (not hidden away in a cupboard). Ours is by the front door.

[Photo]

STEP 3: Measure daily.

Every day we update the block with 2 dots – one if we exercised and one if we didn’t smoke. At the end of a row (10 days) we count up how many exercise dots we had – 5 or more is our aim.

STEP 4: High Five!

Every time you walk past the board, high five it and say out loud “Happy and Healthy”. This reinforces why you are doing this.


That’s nice – but does it work?

So far yes. We have just finished the first row. I managed 6 exercise sessions and my wife 5. We did not touch cigarettes. This by itself is great news for us as our two friends are back on cigarettes (the Twisp didn’t work for them). I am excited to be exercising and I want to go for a run in the morning now. Only time will tell if it lasts – I think it will.

 

 

 

Categories: Companies

Principle #5 of Capacity Planning: Tolerance for Incomplete Data

Rally Agile Blog - Thu, 08/21/2014 - 14:00

The first four Principles of Capacity Planning (covered in earlier posts in this series) start us on a planning journey to run a business more effectively.

This post addresses the value of tolerating incomplete data in portfolio planning -- a principle that applies to both demand and supply. Here are some specific examples for each.

Demand Tolerance: Detail Initiatives Only As You Get Close to Scheduling Them

When we plan out 12 to 18 months, we’ll make decisions on less-accurate data than we’ll have for, say, the upcoming quarter. In other words, we’ll be less certain as our planning horizon moves farther into the future.  

Experts and highly competent professionals strive for perfection. Big planning mistakes have ruined careers. Yet to be successful, leaders must make decisions using imperfect information. Strategic planners have to navigate uncertainty and risk as investment scenarios become plans of record. To have a reasonable chance of success, we must know something about the investments we select to execute on: the rest of the information is either knowable or unknowable. There is a huge cost associated with gathering this knowledge. On the other hand, tolerating incomplete data is no excuse for ignorance.  

A prominent former presidential cabinet member advised that you not take action if you have only enough information to give you less than a 40 percent chance of being right; but that you shouldn't wait until you have enough facts to be 100 percent sure, either, because by then it’s almost always too late. This instinct is right: excessive delays in the name of information-gathering lead to analysis paralysis. Procrastination in the name of reducing risk actually increases risk. For example: one Fortune 500 organization was spending so much time trying to have 100 percent of the decision-making information that its approval process took years -- longer than the time required to implement the approved projects and programs.

When applied to an annual cadence, expect to be on the low end of this information-gathering range (around 40 percent). At this point you’re planning at more of the initiative level, maybe with some features identified. With roughly right estimates, we set ourselves up for more accurate near-term plans. Within a quarter, I expect to be closer to 70 percent because I should have more complete information. As you embark on a continuous planning cadence, the ability to manage uncertainty becomes much more tolerable because you know you will have the opportunity to inspect and adapt at more frequent intervals. In today’s fast-paced world, the cost of delay can be so high you have no choice but to get comfortable with operating on good-enough data.

One of the proven benefits of Agile software development methods is that you can adapt to necessary changes in schedule and priorities, and avoid the misalignment of scheduling work far ahead of execution. In a continuous planning cadence, annual planning becomes part of long range business commitments, forecasting and budgeting, while scheduling becomes part of rolling wave prioritization and value delivery.

Ironically, traditional sources of guidance support the notion of tolerating incomplete data. According to PMBOK, “A Planning Package is a work breakdown structure component below the control account with known work content but without detailed schedule activities.” Each organization should determine its policies for when it’s feasible to refine details and schedule them. Our experience shows that this planning horizon for prioritization and value delivery in today’s fast-paced world is about one quarter (three months.) With these rhythms, we can get better at operating on “good enough” data.

Supply Tolerance: Fuss Only When You Must

When working with large customers, we’ve found that most managers have been burned by failing to pay attention to the scarce capacity of specialization areas. Examples include UX designers and DBAs, as well as expertise specific to a company technology environment, such as network or security engineers. The cost of not accounting for this scarcity of talent is an overly optimistic plan that does not match the reality of what can be delivered. This is one criticism of Agile methods to-date: they lack a good approach for handling exceptions to cross-functional teams.

The key to tolerating incomplete data is to plan at the delivery group level and, if necessary, the delivery team level. This respects the principle of the team as the resource unit and has the added advantage of simplifying the capacity planning exercise by a factor of 10 (roughly assuming 10 individuals per team.) Only when planning at the team level isn’t roughly right do we fuss more, usually because of the need to pay special attention to scarce expertise. Because managing expertise adds overhead to matching supply to demand, the goal is to fuss only where you must!

How do we fuss just enough? Let's take the example of a fictitious company that has both retail brick and mortar stores and a successful online presence. Initially, this organization had a platform delivery group that provides all the backend services. Every other delivery group was dependent on the platform group. With so many dependencies, managers were constantly scrambling to remove bottlenecks and resolve schedule conflicts. The solution was to distribute the platform delivery group onto several of the other delivery groups so they could be made “whole” (each group had what is needed to deliver value with minimal external dependencies.) By designating “platform” as an expertise, and just a little extra fussing to account for platform constraints, we can match supply and demand and have better results.  

Roles and Expertise

Expertise can be used to support flexible resourcing. Take, for example, an organization that has several delivery groups, with some aligned to specific products.  Most of the time, each delivery group is self-sufficient in delivering their allocated work.  When there’s an unusually high need for a given expertise, the delivery group can be augmented by providing additional capacity from other delivery groups for that expertise.  One customer told us, “We rob teams from Peter to pay Paul all the time in order to deliver maximum value.” Expertise should only be fussed with when we must.

What about roles? Modern capacity planning strives to use role as an attribute of a team. Applying roles to teams helps identify team competencies and provide convenient capacity planning building blocks. A team’s role can be thought of as expertise. Although we value teams over individuals, we recognize that individuals are the basis of great teams (it’s commonly cited that a good programmer can outperform a mediocre one by a factor of 10.) When the team is the fundamental planning currency, the need to fuss about the roles of individuals diminishes. Thus, resist the urge to track capacity of individual roles within each team. For planning purposes, this would be artificially precise (read more about the capacity planning principle of “roughly right.”)

This blog rounds out the five principles of modern capacity planning that should help you have a less dreadful annual planning season. If you'd rather listen to an overview of these principles, check out the "Business Agility and Annual Planning: Solving the Paradox" webinar.  

Most importantly, don’t waste any more time creating precisely wrong plans when you can leverage Rally’s expertise in portfolio capacity planning.

Brent Barton
Categories: Companies

Help! Too Many Incidents! - Capacity Assignment Policy In Agile Teams

Xebia Blog - Wed, 08/20/2014 - 23:26

As an Agile coach, scrum master, product owner, or team member you probably have been in the situation before in which more work is thrown at the team than the team has capacity to resolve.

In the case of work that is already known, this is basically a scheduling problem: determine the optimal order in which the team completes the work so as to maximise the business value and outcome. This typically applies to the case where a team is working to build or extend a new product.

The other interesting case is e.g. operational teams that work on items that arrive in an ad hoc way. Examples include production incidents. Work arrives ad hoc and the product owner needs to allocate a certain capacity of the team to certain types of incidents. E.g. should the team work on database related issues, or on front-end related issues?

If the team has more than enough capacity the answer is easy: solve them all! This blog will show how to determine what capacity of the team is best allocated to what type of incident.

What are we trying to solve?

Before going into details, let's define what problem we want to solve.

Assume that the team recognises various types of incidents, e.g. database related, GUI related, perhaps some more. Each type of incident will have an associated average resolution time. Also, each type will arrive at the team at a certain rate, the input rate. E.g. database related incidents arrive 3 times per month, whereas GUI related incidents occur 4 times per week. Finally, each incident type will have different operational costs assigned to it. The effect of database related incidents might be that 30 users are unable to work. GUI related incidents e.g. affect only part of the application affecting a few users.

At any time, the team has a backlog of incidents to resolve. With this backlog an operational cost is associated. It is this operational cost that we want to minimise.

What makes this problem interesting is that we want to minimise this cost under the constraint of a limited number of resources, or capacity. The product owner may wish to deliberately ignore GUI type incidents and let the team work on database related incidents. Or should 20% of the capacity be assigned to GUI related incidents and 80% of the available capacity to database related incidents?

Types of Work

For each type of work 'i' we define the average input rate, cost rate, resolution time, waiting time, time in the system, and production rate:

\lambda_i = \text{average input rate for type '$i$'},

C_i = \text{operational cost rate for type '$i$'},

x_i = \text{average resolution time for type '$i$'},

w_i = \text{average waiting time for type '$i$'},

s_i = \text{average time spent in the system for type '$i$'},

\mu_i = \text{average production rate for type '$i$'}

Some items get resolved and spend the time s_i = x_i + w_i in the system. Other items never get resolved and spend time s_i = w_i in the system.

In the previous blog Little's Law in 3D the average total operational cost is expressed as:

\text{Average operational cost for type '$i$'} = \frac{1}{2} \lambda_i C_i \overline{S_i(S_i+T)}

To get the total cost we need to sum this over all work item types 'i'.

System

Work items enter the system (the team) as soon as they are found or detected. From the moment they are found, these items contribute to the total operational cost; this stops as soon as they are resolved. For some items the product owner decides that the team will start working on them. At the point the team starts working on an item its waiting time w_i is known, and on average the team spends a time x_i on it before it is resolved.

As the team has limited resources, they cannot work on all the items. Over time the average time spent in the system will increase. As shown in the previous blog Why Little's Law Works...Always Little's Law still applies when we consider a finite time interval.

This process is depicted below:

[Figure: new doc 13_2]

\overline{M} = \text{fixed team capacity},

\overline{M_i} = \text{team capacity allocated to working on problems of type '$i$'},

\overline{N} = \text{total number of items in the system}

The total number of items allowed in the 'green' area is restricted by the team's capacity. The team may set a WiP limit to enforce this. In contrast the number of items in the 'orange' area is not constrained: incidents flow into the system as they are found and leave the system only after they have been resolved.

Without going into the details, the total operational cost can be rewritten in terms of x_i and w_i:

(1)  \text{Average operational cost for type '$i$'} = \frac{1}{2} \lambda_i C_i \overline{w_i(w_i+T)} + \mu_i C_i \overline{x_i} \,\, \overline{w_i} + \frac{1}{2} \mu_i C_i \overline{x_i(x_i+T)}

What are we trying to solve? Again.

Now that I have described the system and defined exactly what I mean by the variables, I will refine what exactly we will be solving.

Find the values M_i that minimise the total cost, i.e. the sum of (1) over all work item types, under the constraint that the team has a fixed and limited capacity.

Important note

The system we are considering is not stable. Therefore we need to be careful when applying and using Little's Law. To circumvent necessary conditions for Little's Law to hold, I will consider the average total operational cost over a finite time interval. This means that we will minimise the average of the cost over the time interval from start to a certain time. As the accumulated cost increases over time the average is not the same as the cost at the end of the time interval.

Note: For our optimisation problem to make sense the system needs to be unstable. For a stable system it follows from Little's Law that the average input rate for type 'i' is equal to the average production rate for type 'i'. In that case there is nothing to optimise, since we cannot choose them to be different. The ability to choose them differently is the essence of our optimisation problem.

Little's Law

At this point Little's Law provides a few relations between the variables M, M_i, N, w_i, x_i, \mu_i, \lambda_i. These relations we can use to find what values of M_i will minimise the average total operational cost.

As described in the previous blog Little's Law in 3D Little's Law gives relations for the system as a whole, per work item type and for each subsystem. These relations are:

\overline{N_i} = \lambda_i \,\, \overline{s_i}

\overline{N_i} - \overline{M_i} = \lambda_i \,\, \overline{w_i}

\overline{M_i} = \mu_i \,\, \overline{x_i}

M_1 + M_2 + ... = M

The latter relation is not derived from Little's Law but merely states that total capacity of the team is fixed.

Note that Little's Law also has given us relation (1) above.

Result

Again, without going into the very interesting details of the calculation I will just state the result and show how to use it to calculate the capacities to allocate to certain work item types.

First, for each work item type determine the product of the average input rate (\lambda_i) and the average resolution time (x_i). The interpretation of this is the average number of new incidents arriving while the team works on resolving an item. Put the result in a row vector and name it 'V':

(2)  V = (\lambda_1 x_1, \lambda_2 x_2, ...)

Next, add all the components of this vector and denote the result by ||V||.

Second, multiply the result of the previous step for each item by the quotient of the average resolution time (x_i) and the cost rate (C_i). Put the result in a row vector and name it 'W':

(3)  W = (\lambda_1 x_1 \frac{x_1}{C_1}, \lambda_2 x_2 \frac{x_2}{C_2}, ...)

Again, add all components of this row vector and call this ||W||.

Then, the capacity to allocate to items of type 'k' is proportional to:

(4)  \frac{M_k}{M} \sim W_k - \frac{1}{M} (W_k ||V|| - V_k ||W||)

Here, V_k denotes the k-th component of the row vector 'V'. So, V_1 is equal to \lambda_1 x_1. Likewise for W_k.

Finally, because these should add up to 1, each of (4) is divided by the sum of all of them.

Example

If this seems complicated, let's do a real calculation and see how the formulas of the previous section are applied.

Two types of incidents

As a first example consider a team that collects data on all incidents and types of work. The data collected over time includes the resolution time, the date the incident occurred and the date the issue was resolved. The product owner assigns a business value to each incident, which corresponds to the cost rate of the incident and in this case is measured in the number of (business) users affected. Any other means of assigning a cost rate will do as well.

The team consists of 6 team members, so the team's capacity M is equal to 12, where each member is allowed to work on a maximum of 2 incidents.

From their data they discover that they have 2 main types of incidents. See the so-called Cycle Time Histogram below.

[Figure: new doc 13_9 (cycle time histogram)]

The picture above shows two types of incidents, having typical average resolution times of around 2 days and 2 weeks. Analysis shows that these are related to the GUI and database components respectively. From their data the team determines that they have an average input rate of 6 per week and 2 per month respectively. The average cost rate for each type is 10 per day and 200 per day respectively.

That is, the database related issues have: \lambda = 2 \text{ per month} = 2/20 = 1/10 \text{ per day}, C = 200 \text{ per day}, and resolution time x = 2 \text{ weeks} = 10 \text{ days}. The GUI related issues have: \lambda = 6 \text{ per week} = 6/5 \text{ per day}, C = 10 \text{ per day}, and resolution time x = 2 \text{ days}.

The row vector 'V' becomes (the product of \lambda and x):

V = (1/10 * 10, 6/5 * 2) = (1, 12/5),   ||V|| = 1 + 12/5 = 17/5

The row vector 'W' becomes:

W = (1/10 * 10 * 10 / 200, 6/5 * 2 * 2 / 10) = (1/20, 12/25),   ||W|| = 1/20 + 12/25 = 53/100

Putting this together we obtain the result that a percentage of the team's capacity should be allocated to resolve database related issues that is equal to:

M_\text{database}/M \sim 1/20 - 1/12 * (1/20 * 17/5 - 1 * 53/100) = 1/20 + 1/12 * 36/100 = 1/20 + 3/100 = 8/100 = 40/500

and a percentage should be allocated to work on GUI related items that is

M_\text{GUI}/M \sim 12/25 - 1/12 * (12/25 * 17/5 - 12/5 * 53/100) = 12/25 - 1/12 * 9/25 = 12/25 - 3/100 = 45/100 = 225/500

Summing these two we get 265/500 in total. This means that we allocate 40/265 ≈ 15% and 225/265 ≈ 85% of the team's capacity to database and GUI work items respectively.

Kanban teams may define a class of service to each of these incident types and put a WiP limit on the database related incident lane of 2 cards and a WiP limit of 10 to the number of cards in the GUI related lane.

Scrum teams may allocate part of the team's velocity to user stories related to database and GUI related items based on the percentages calculated above.
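To make the recipe concrete, below is a minimal Python sketch (not part of the original article; the function and variable names are mine) that applies relation (4) to the numbers of this example:

    # Sketch only: allocate team capacity over work item types using relation (4).
    def allocate(input_rates, resolution_times, cost_rates, team_capacity):
        # V_i = lambda_i * x_i and W_i = V_i * x_i / C_i, as defined in (2) and (3)
        V = [lam * x for lam, x in zip(input_rates, resolution_times)]
        W = [v * x / c for v, x, c in zip(V, resolution_times, cost_rates)]
        norm_V, norm_W = sum(V), sum(W)
        # relation (4), before normalisation
        raw = [w - (w * norm_V - v * norm_W) / team_capacity for v, w in zip(V, W)]
        total = sum(raw)
        return [r / total for r in raw]   # fractions of team capacity per type

    # Database: lambda = 1/10 per day, x = 10 days, C = 200 per day
    # GUI:      lambda = 6/5 per day,  x = 2 days,  C = 10 per day
    shares = allocate([0.1, 1.2], [10, 2], [200, 10], team_capacity=12)
    print([round(s, 2) for s in shares])   # approximately [0.15, 0.85]

Rounding these fractions of a capacity of 12 gives the WiP limits of 2 and 10 mentioned above.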

Conclusion

Starting with the expression for the average total operational cost I have shown that this leads to an interesting optimisation problem in which we want to determine the optimal allocation of a team's capacity to different work item types in such a way that it will on average minimise the average total operational cost present in the system.

The division of the team's capacity over the various work item types is determined by the work item types' average input rate, resolution time, and cost rate and is proportional to

(4)  \frac{M_k}{M} \sim W_k - \frac{1}{M} (W_k ||V|| - V_k ||W||)

The data needed to perform this calculation is easily gathered by teams. Teams may use a cycle time histogram to find appropriate work item types. See this article on control charts for more information.

 

Categories: Companies

BE Agile before you Become Agile

Xebia Blog - Wed, 08/20/2014 - 21:49

People dislike change. It disrupts our routines and we need to invest to adapt. We only go along if we understand why change is needed and how we benefit from it.
The key to intrinsic motivation is to experience the benefits of the change yourself, rather than having someone else explain it to you.

Agility is almost a synonym for change. It is critical to let people experience the benefits of Agility before asking them to buy into this new way of working. This post explains how to create a great Agile experience in a fun, simple, cost-efficient and highly effective way. BEing agile, before BEcoming agile!

The concept of a “Company Innovation Day”

Have you seen this clip about Dan Pink’s Drive? According to him, the key factors for more motivation and better performance are: autonomy, mastery and purpose.
If you have some scrum experience this might sound familiar, right? That is because these 3 things really tie in nicely with agile and scrum, for example:

Autonomy = being able to self-direct;
• Let the team plan their own work
• Let the team decide how to best solve problems

Mastery = learning, applying and mastering new skills and abilities, a.k.a. "get better at stuff";
• Retrospect and improve
• Learn, apply and master new skills to achieve goals as a team.

Purpose = understanding necessity and being as effective as possible;
• Write user stories that add value
• Define sprint goals that tie in to product- and business goals.

In the clip, the company "Atlassian" is mentioned. This is the company that makes "JIRA", one of the most popular Agile support tools. Atlassian tries to facilitate autonomy, mastery and purpose by organizing one day per quarter of “management free” innovation. They call it a “ship it day”.

Now this is cool! According to Dan, their people had fun (most important), fixed a whole array of bugs and delivered new product ideas as well. They have to ship all this in one day, again showing similarities with the time boxed scrum approach. When I first saw this, I realized that this kind of fast delivery of value is pretty much something you would like to achieve with Agile Scrum too! Doing Scrum right would feel like a continuous series of ship it days.

My own experience with innovation days

Recently I organized an innovation day with a client (for tips on how to organize yours, click here). We invited the whole department to volunteer. If you didn’t feel like it, you could just skip it and focus on sprint work. Next we promoted the day and this resulted in a growing list of ideas coming in.
Except for the framing of the day, the formation of ideas and teams was totally self-organized and also result driven, as we asked up front for the expected results. Ultimately we had 20 initiatives to be completed in one day.
On the day itself, almost everyone joined in and people worked hard to achieve results at the end of the day.
The day ended with presenting the results and having pizzas. A few ideas just missed the deadline, but most were finished, including usable and fresh new stuff with direct business value. When looking at the photos of that day it struck me that 9 out of 10 photos showed smiling faces. Sweet!

The first innovation day was concluded with an evaluation. In my opinion evaluation is essential, because this is the perfect moment to discuss deeper lessons and insights. Questions like “how can we create the innovation day energy levels during sprints?” and “how can we utilize our self-organizing abilities more?” are invaluable, as they could lead to new insights, inspiration and experiments for day-to-day operations.

The value of an innovation day as a starting point for Agile

All in all, I think an innovation day is the perfect way to get people experiencing the power of Agile.
Doing the innovation day on “day one” offers huge benefits when added to standard stuff like training and games. This is because the context is real. You have a real goal, a real timebox and you need to self-organize to achieve the desired result.
People doing the work get to experience their potential and the power of doing stuff within a simplified context. Managers get to experience unleashing the human potential when they focus only on the context and environment for that day.
I can only imagine the amazement and renewed joy when people coming from a strong waterfall setting experience these possibilities. All that good stuff from just a one-day investment!

Conclusion

It would be great if you would start out an Agile change initiative with an innovation day. Get people enthusiastic and inspired (i.e. motivated for change) first and then tell them why it works and how we are going to apply the same principles in day-to-day operations. This will result in less friction and resistance and give people a better sense of where they are heading.

Do you want to start doing innovation days or do you want to share your experience, feel free to leave a comment below.

Categories: Companies

Introducing Annotations

Pivotal Tracker Blog - Wed, 08/20/2014 - 20:51

Say farewell to time spent writing a long comment describing where something is wrong. Gone are the days of seeing a comment and wondering, “What?! I don’t see it.” Here(!) are the days of taking a screenshot, dropping a pin on it and appending a note to succinctly and specifically say what you want to say.

We’re pretty excited about annotations. We believe annotations should cut down on miscommunications quite a bit. Whether you’re a designer pointing out the wrong padding, or a PM noting that an interaction is incorrect, or you’re a developer asking for specifications on a UI – annotations will help you big time.

Annotations are currently on iPhone and iPad, so they are particularly terrific for iOS development; any image in your Camera Roll can be used for annotations. If you see something wrong in your latest build, merely take a screenshot, drop a pin on the problem area and write about how to correct the issue. You can even leave multiple pins; pins are numbered as you drop them. As you drop those pins and leave notes, a Markdown numbered list is created with those notes. It’s easy.

Check out the video to see a quick demonstration of how annotations work. Big thanks to fellow Pivot Drew McKinney for letting us use his app, Listacular, in our example.

Pivotal Tracker – Annotations from Pivotal Tracker on Vimeo.

The post Introducing Annotations appeared first on Pivotal Tracker.

Categories: Companies

You shall not pass – Control your code quality gates with a wizard – Part I

Danube - Wed, 08/20/2014 - 14:50
You shall not pass – Control your code quality gates with a wizard. Now as easy as designing an email filter.

Every project has different policies defining when code can be pushed into production. CollabNet’s code quality gate wizard for Gerrit comes with a bunch of predefined policies and lets you graphically design your own quality gates as easy as defining email filter rules.

Four-eye peer review, legal has to approve copyright file changes, senior staff has to approve the work of juniors, democratic feature voting? – Regardless of what your code quality gates look like, chances are very high you can now enforce it without having to write a single line of code.

What are Quality Gates – And Why should I care?

[Figure: concept 22]

Quality Gates applied before commits get directly pushed into production

The days where version control was just a passive component where you store your source code are long gone. In the world of continuous integration and delivery, version control is an active component, typically the first step in a (semi) automated pipeline straight through production. This pipeline is typically automatically triggered whenever a commit gets merged into master (or any other branch used for production). If a commit that does not meet audit compliance or production quality gets merged accidentally, this can have immediate effect on your business. In the worst case, you face data loss, customers cannot interact with your business anymore or you are getting sued for having introduced a security hole or serious malfunction.

Code quality gates define the conditions to be met before a commit can be merged into master, i.e. when code is ready to be pushed to production. Typically those conditions are a mixture of automated checks, like passing unit and integration tests and code quality and guideline checkers, as well as human checks like peer review and approval from legal and product management.

Having those rules automatically enforced is a big win for every team as it will make sure you always have the quality level and compliance conformance you need in production.

With CollabNet’s new quality gate wizard for Gerrit – TeamForge’s Git backend – you can now select from a number of predefined policies (best practices quality gates) which will be automatically enforced once deployed. In addition, you can design your own quality gates without having to write a single line of code. The way it works is very similar to email filter rules: You define the characteristics of a commit and related context (like associated peer reviews and feedback from code quality tools, system and integration tests) and decide whether under those conditions the commit can go in or not. You can even point to already existing commits and their context to automatically create quality gates and simulate them within the wizard.

This blog post series consists of three blogs. In the first one (the one you are reading right now), you will learn how to install the quality gate wizard and how to deploy the out-of-the-box policies (collections of best practice quality gates) that come with the wizard.

In the second blog post, you will learn how to design your own quality gates based on the email filter metaphor. Furthermore, you will get an answer on how to define and distribute your own best practice policies for your organization.

The third blog post gets pretty technical and will dive into the more advanced concepts of the wizard, like defining filters on commit characteristics, counting peer review and CI votes. It will also explain the specifics of the language that is generated by the wizard to implement the quality gates.

With that said, let’s jump right in.

Make sure your Git/Gerrit Backend supports Quality Gates

If you are using TeamForge with our Git/Gerrit integration version 8.2.0 or higher, the quality gate backend is already installed. Otherwise, you would have to upgrade to this version which is supported by both TeamForge 7.1 as well as TeamForge 7.2. More details can be found on http://help.collab.net

Installing the code quality gate wizard

Designing code quality gates is a feature for power users. For that reason, we decided to implement the first version of the wizard inside our CollabNet Desktop for Eclipse and GitEye. If it turns out that you really love this feature and need a Web UI for it, we can make that happen too. As usual, just drop a comment in this blog post for any kind of feedback.

You can install any of the tools mentioned; my colleague Steve wrote two blog posts on how to install GitEye and how to set it up with TeamForge and Gerrit. If you already have GitEye or any other Eclipse-based application installed and want to add the Quality Gate Wizard, point Eclipse to our update site http://downloads.open.collab.net/eclipse/update-site/gerrit-workflow and install all plugins available there.

Opening the quality gate wizard and selecting a predefined policy

Once you have installed GitEye or CollabNet’s Desktop for Eclipse and configured your TeamForge site, let’s navigate to the Git repository where you want to deploy some quality gates. Right click on the repository of your choice (in our case TeamForge-Git-Integration) and select the option Define Gerrit Review Rules …

[Screenshot]

A screen similar to the one depicted below will open. Within that screen, the option Load from template is already pre-selected. It contains a number of predefined policies (collection of best practice quality gates). The one we are using is called Relaxed 4 Eye Principle and 1+1.

[Screenshot]

You can skip over the details of this policy now, but if you are interested, here are the quality gates enforced:

  • Every commit has to be verified by a continuous integration (CI) system like Jenkins. The job of this system is to ensure that the code compiles, unit, integration and system tests are running through fine and all coding guidelines and code quality metrics are satisfied.

  • Every commit has to be peer reviewed by at least one person other than the author of the commit (4 Eye-Principle)

  • If a peer reviewer vetoes the commit, it cannot go in

  • If at least one reviewer strongly approves the commit (Code-Review +2) or at least two reviewers agree that the commit has reasonable quality (sum of Code-Review votes >=2), the commit can be merged if all conditions above are satisfied

We chose this policy as an example as this is the one we are internally following while developing our TeamForge-Git-Integration.
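To make the logic of these gates explicit, here is a small Python sketch of how the conditions combine. It is purely illustrative: the wizard generates its own rule language for Gerrit, and the function below, its signature and the vote encoding are my own simplification.

    # Illustrative only: the "Relaxed 4 Eye Principle and 1+1" conditions in plain Python.
    def can_submit(author, verified_votes, code_review_votes):
        """verified_votes: list of Verified votes; code_review_votes: reviewer -> vote (-2..+2)."""
        ci_verified = any(v > 0 for v in verified_votes)              # CI says the commit builds and tests pass
        non_author = {r: v for r, v in code_review_votes.items() if r != author}
        reviewed_by_other = len(non_author) > 0                       # 4-eye principle
        vetoed = any(v == -2 for v in code_review_votes.values())     # a veto (Code-Review -2) blocks submit
        strong_ok = any(v == 2 for v in code_review_votes.values())   # one strong approval (Code-Review +2) ...
        summed_ok = sum(code_review_votes.values()) >= 2              # ... or Code-Review votes summing to >= 2
        return ci_verified and reviewed_by_other and not vetoed and (strong_ok or summed_ok)

    # The author's own +2 is not enough: no non-author review yet.
    print(can_submit("alice", verified_votes=[1], code_review_votes={"alice": 2}))   # False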

Testing the policy in the wizard

Once you click Finish in the wizard, an editor will open within Eclipse. We will cover most of its functionality in subsequent blog posts. For now, all we need to know are two buttons: Test Against Gerrit Change and Deploy to Gerrit. With the first one, you can test your quality gates against any commit not yet merged into a branch (to be more technically precise, any Gerrit change). The screenshot below shows how the current selection of quality gates would react to a particular commit. In the case below, the continuous integration system which tried to build the commit, ran unit and integration tests and checked code quality metrics, voted against the commit, so it cannot be pushed into production in its current form (red traffic lights). The yellow traffic lights indicate that no quality gate vetoed against the particular commits but there are still elements missing in order to let it pass (CI feedback or peer review feedback from a non-author). One commit (associated with Gerrit change 1985) has a green traffic light and could be pushed into production if needed.

[Screenshot]

Deploying the policy 

Once you are satisfied with your code quality gates, you can deploy them, i.e. make sure they are enforced for any commit in the repository in question. To do that, just hit the Deploy to Gerrit button (you need SCM admin permissions in TeamForge to make this work). A wizard will open that lets you enter your credentials for the Git repository in question and lets you specify a message that goes with your quality gate deployment (behind the scenes, quality gates are versioned in Gerrit as Git commits, so you can see any change to your policies and even revert back if needed).

[Screenshot]

Checking the result in Gerrit

If you now log into Gerrit’s Web UI (or use the Gerrit Mylyn Plugin), you can see the quality gates in action. In the screenshot below, you can see that

  • the commit in question has been already verified (green verified checkbox)

  • the commit in question has been already strongly recommended by a reviewer (green checkbox in Code-Review)

[Screenshot]

However, as the commit in question has been authored by the reviewer himself (see owner and reviewer fields), it cannot go into production yet. At the bottom of the screenshot, you see a message indicating that a Non-Author has to do Code-Review.

Summary

In this blog post you learned how to select, test and deploy predefined quality gates with CollabNet’s code quality gate wizard for Gerrit. Those quality gates will make sure that all conditions regarding code quality and compliance are met before a commit can be merged into your master branch and trigger a pipeline that will eventually promote it into production.

In the next blog posts we will focus on how you can define quality gates by yourself, using a methodology very close to setting up email filter rules.

The post You shall not pass – Control your code quality gates with a wizard – Part I appeared first on blogs.collab.net.

Categories: Companies

The Impact of Agile and Lean Startup on Project Portfolio Management

Agile Management Blog - VersionOne - Wed, 08/20/2014 - 13:55

With the large number of organizations now adopting agile methods, the existing body of literature has paid significant attention to the function of project management, business analysis, and more recently program management.  This is understandable as individuals filling these roles are ubiquitous and critical to the operation of their respective organizations.

Many organizations have an additional formalized function, project portfolio management (PPM), that is also critical to the organization but gets little attention — especially in light of the considerable desire being shown to scaling agile to the enterprise level.  The focus, objectives, and responsibilities of agile PPM must fundamentally shift when transitioning to an agile model, structure, and culture.  The reason for this is simple—the same agile principles that are being applied to individual projects can also be leveraged to manage the portfolio.

Below are two ways that agile PPM differs from traditional PPM:

Traditional PPM:  Optimize portfolio resources (individuals) by skill set
Agile PPM:  Maximize value delivery to customers by team capability

Traditional projects, while still delivered by teams, are much more focused on optimizing skill set across a portfolio.  One reason for this is because most traditional organizations are structured and organized by functional specialty.  That is, the organization’s structure is very hierarchical and often has individuals within a particular functional specialty (business analysis, quality assurance, project management, etc.) reporting to the same manager.

Another reason is that projects move through the process by passing through one of several phase gates such as requirements, design, test, etc.  When this is the case, project execution may be throttled by a particular skill set at each gate.  For example, if you have five business analysts, you will be limited to the number of projects that can be active.  However, most organizations ignore this fact and still have far too many projects active at any time; this only adds needless risk.  The sad truth is that most organizations really have no idea of their true project capacity.

In agile organizations, the team (not the individual) is the unit of capacity measure.  Therefore, if you have three teams that are capable of delivering an initiative or feature, you are limited by the number of teams.  So, how many projects of each type can you have active at any one time?  I don’t know; each situation will vary by organization, team, and context.  However, to get started, try setting the limit to be equal to the number of teams with the capability of delivering that type of solution.  If it doesn’t help, experiment.

For example, if you have five products that need mobile solutions, but only have three teams capable of doing the work, only start the three that will deliver the highest customer value.  Of course, that assumes that the teams are not already working on other items.

Traditional PPM:  Maximize Revenue and Evaluate Project Health
Agile PPM:  Govern Empirically through Validated Learning

One of the primary goals of traditional PPM is maximizing revenue… that is, how much money a particular project or product can add to the “bottom line” of a company’s balance sheet. In today’s economy, characterized by pervasive, disruptive technology and consumers who demand choice and flexibility, focusing on revenue alone misses the point.

Revenue is the metric of wildly satisfied customers.

Stated another way, many would say that the sole objective of PPM is to maximize shareholder value.  This is done through increasing revenue, but it misses the point.  Because customers have flexibility and plentiful choices, the focus must be on maximizing customer value.  By focusing on customer value, if shareholder value doesn’t increase, it may be because you’re building the wrong thing.  Wouldn’t it be appealing to find that out sooner rather than later?

Further, traditional PPM typically measures the health of the agile portfolio by evaluating the health of its component projects.  This is great—in theory.  But one of the big problems with this approach is the way in which health is typically measured.  It’s most commonly done through subjective mechanisms like project status reports, achieved milestones, and progress stoplight indicators.  None of these approaches offer an objective mechanism of determining if the project is actually building the right thing.  Personally, I’ve managed projects that have delivered the wrong solution on time and within budget.  The kind of objectivity that’s required is customer validation.

A more agile PPM approach would be to introduce some mechanism of validated learning to help us make more sound and responsible decisions for our customers about what projects or products to continue funding.  Validated learning is a key aspect of the Lean Startup approach made popular by Eric Ries’ book of the same name.  Agile projects aim to build small increments of a product.  This means we are dealing with smaller return-on-investment (ROI) horizons.

Through agile PPM it’s possible to incrementally fund two projects to experiment with two different solutions to a (perceived) customer problem.  This is known as A/B testing, a.k.a., “split testing.”  Because agile methods allow us to get solutions into the hands of customers more quickly, we can evaluate the results of our experiments and shift funding to investments that are more promising and pertinent.  Because the funding is done incrementally, we need not fund an entire project for an extended period before finding out whether our assumptions were incorrect.

Summary

While these are only two of many considerations when adopting agile PPM, each has the potential to make an immediate and lasting impact on your organization and its customers, thereby, positively impacting your shareholders as well.  In my opinion, the sooner organizations can sow the seeds of customer satisfaction through validated learning, engagement, and collaboration, the sooner they will reap the rewards of increased shareholder value.

What are your thoughts?  How can you begin to apply these concepts within your own unique context?

Categories: Companies

Using Card Types to Manage Your Work More Effectively

  In LeanKit, teams can customize their card types to match the nature of their work. Card types for a software development team might include user stories, defects, features, and improvements. For an IT Operations team, they could be desktop support, server support, maintenance, and implementation. LeanKit lets you define your own card types as colors or icons, […]

The post Using Card Types to Manage Your Work More Effectively appeared first on Blog | LeanKit.

Categories: Companies

Enterprise Lean Startup Webinar Series: Metrics and Analytics to Support Innovation and Learning

BigVisible Solutions :: An Agile Company - Tue, 08/19/2014 - 20:13

Join us for the next installment of our Enterprise Lean Startup Webinar series: “Metrics and Analytics to Support Innovation and Learning”. Evan Campbell will introduce several of the measurement frameworks commonly used to define the critical levers of your business success, and methods for validating changes to your product through observed changes in usage. […]

The post Enterprise Lean Startup Webinar Series: Metrics and Analytics to Support Innovation and Learning appeared first on BigVisible Solutions.

Categories: Companies

Agile 2014 – speaking and attending; a summary

Xebia Blog - Tue, 08/19/2014 - 18:14

So Agile 2014 is over again… and what an interesting conference it was.

What did I find most rewarding? Meeting so many agile people! My first observation was that attendees ranged from experts like us agile consultants to starting agile coaches, ScrumMasters and other people just getting acquainted with our cool agile world. Another trend I noticed was the scaled agile movement. Everybody seems to be involved in that somehow. Some more successful than others; some more true to agile than others.

What I missed this year was the movement of scrum or agile outside IT, although my talk about scrum for marketing got a lot of positive responses. Everybody I talked to was interested in hearing more about it.

There was a talk, maybe even two, about hardware agile, but I did not find a lot of buzz around it. Maybe next year? I do feel that there is potential here. I believe Fullstack product development should be the future. Marketing and IT teams? Hardware and software teams? Splitting these still sounds like design and developer teams to me.

But what a great conference it was. I met a lot of awesome people. Some just entering the agile world; some authors of books I had read which got me further in the agile movement. I talked to the guys from Spotify. The company which is unique in its agile adoption / maturity. And they don’t even think that they are there yet. But then again will somebody ever truly BE agile ..?

I met the guys from scrum.inc who developed a great new scaled framework. Awesome ideas on that subject and awesome potential to treat it as a community created open framework; keep your eyes open for that!

I attended some nice talks too; also some horrible ones. Or actually one, which should never have been presented in a 90-minute slot at a conference like this. But let’s get back to the nice stories. Lyssa Adkins had a ‘talk’ about conflicts. The fun thing was that she actually facilitated the debate about scaled agile on stage. The session could have been better but the idea and potential of the subject is great.

Best session? Well, probably the Spotify guys. Still the greatest story out there of an agile company. The key take-away of that session for me is: agile is not an end-state, but a journey. And if you take it as seriously as Spotify you might be able to make the working world a lot better. Looking at Xebia we might not even be considered to be trying agile compared to them. And that is meant in a humble way while looking up to these guys! - I know we are one of the frontrunners of agile in the Netherlands. The greatest question in this session: ‘Where is the PMO in your model….’

Well you clearly understand this …

Another inspiring session was the keynote from the CFO of Statoil about beyond budgeting. This was a good story which should become bigger in the near future, as this is one of the main questions I get when implementing agile in a company: “how do we plan, estimate and budget projects when we go and do agile?” Beyond budgeting at least gets us a little closer.
Long story short: I had a blast in Orlando. I learnt new things and met a lot of cool people. My main take-away: our community is growing, which teaches us that we are not there yet by a long shot. An awesome future is ahead! See you next year!

Categories: Companies

Booster gets a little identity facelift

Pivotal Tracker Blog - Mon, 08/18/2014 - 23:24

[Image: Booster logo]
PivotalBooster, an awesome and FREE third-party OS X app for Pivotal Tracker from the talented people at Railsware, has now been rebranded as Booster, plain and simple.

If you’re unfamiliar with Booster, check out our earlier post or simply visit their site.

The post Booster gets a little identity facelift appeared first on Pivotal Tracker.

Categories: Companies

Little's Law in 3D

Xebia Blog - Sun, 08/17/2014 - 17:21

The much used relation between average cycle time, average total work and input rate (or throughput) is known as Little's Law. It is often used to argue that it is a good thing to work on fewer items at the same time (as a team or as an individual) and thus lower the average cycle time. In this blog I will discuss the less known generalisation of Little's Law, giving an almost unlimited number of additional relations. The only limit is your imagination.

I will show relations for the average 'Total Operational Cost in the system' and for the average 'Just-in-Timeness'.

First I will describe some rather straightforward generalisations and in the third part some more complex variations on Little's Law.

Little's Law Variations

As I showed in the previous blogs (Applying Little's Law in Agile Games and Why Little's Law Works...Always) Little's Law in fact states that measuring the total area from left-to-right equals summing it from top-to-bottom.

Once we realise this, it is easy to see some straightforward generalisations which are well-known. I'll mention them here briefly without going into too much detail.

Subsystem

[Figure: new doc 8_1]

Suppose a system that consists of 1 or more subsystems, e.g. in a kanban system consisting of 3 columns we can identify the subsystems corresponding to:

  1. first column (e.g. 'New') in 'red',
  2. second column (e.g. 'Doing') in 'yellow',
  3. third column (e.g. 'Done') in 'green'

See the figure on the right.

By colouring the subsystems different from each other we see immediately that Little's Law applies to the system as a whole as well as to every subsystem ('red' and 'yellow' area).

Note: for the average input rate consider only the rows that have the corresponding color, i.e. for the input rate of the column 'Doing' consider only the rows that have a yellow color; in this case the average input rate equals 8/3 items per round (entering the 'Doing' column). Likewise for the 'New' column.

Work Item Type

[Figure: new doc 9_1]

Until now I assumed only one type of work item. In practice teams deal with more than one work item type. Examples include class of service lanes, user stories, and production incidents. Again, by colouring the various work item types differently we see that Little's Law applies to each individual work item type.

In the example on the right, we have coloured user stories ('yellow') and production incidents ('red'). Again, Little's Law applies to both the red and yellow areas separately.

Doing the math we see that for 'user stories' (yellow area):

  • Average number in the system (N) = (6+5+4)/3 = 5 user stories,
  • Average input rate (\lambda) = 6/3 = 2 user stories per round,
  • Average waiting time (W) = (3+3+3+3+2+1)/6 = 15/6 = 5/2 rounds.

As expected, the average number in the system equals the average input rate times the average waiting time.
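As a quick numeric check, here is a small Python sketch (the numbers are the ones from the yellow area above; the variable names are mine):

    # Little's Law check for the user stories (yellow area): N = lambda * W.
    stories_per_round = [6, 5, 4]            # user stories in the system in rounds 1..3
    waiting_times = [3, 3, 3, 3, 2, 1]       # rounds spent in the system per user story
    rounds = 3
    arrivals = 6                             # user stories that entered during the 3 rounds

    N = sum(stories_per_round) / len(stories_per_round)   # 5 user stories on average
    lam = arrivals / rounds                               # 2 user stories per round
    W = sum(waiting_times) / len(waiting_times)           # 5/2 rounds on average
    assert N == lam * W                                   # 5 == 2 * 5/2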

The same calculation can be made for the production incidents which I leave as an exercise to the reader.

Expedite Items

[Figure: new doc 10_1]

Finally, consider items that enter and spend time in an 'expedite' lane. In Kanban an expedite lane is used for items that need special priority. Usually the policy for handling such items is that (a) there can be at most 1 such item in the system at any time, (b) the team stops working on anything but this item so that it is completed as fast as possible, (c) it has priority over anything else, and (d) it may violate any WiP limits.

Colouring any work items blue that spend time in the expedite lane we can apply Little's Law to the expedite lane as well.

An example of the colouring is shown in the figure on the right. I leave the calculation to the reader.

3D


We can even further extend Little's Law. Until now I have considered only 'flat' areas.

The extension is that we can give each cell a certain height. See the figure to the right. A variation on Little's Law follows once we realise that measuring the volume from left-to-right is the same as calculating it from top-to-bottom. Instead of measuring areas, we measure volumes.

The only catch here is that in order to write down Little's Law we need to give a sensible interpretation to the 'horizontal' sum of the numbers and a sensible interpretation to the 'vertical' sum of the numbers. In case of a height of '1' these are just 'Waiting Time' (W) and 'Number of items in the system' (N) respectively.

A more detailed, precise, and mathematical formulation can be found in the paper by Little himself: see section 3.2 in [Lit11].
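As a small illustration of the 'volume' argument, the Python sketch below builds a grid of made-up cell heights (rows are work items, columns are rounds) and checks that summing left-to-right gives the same total as summing top-to-bottom:

    # Illustrative only: per-cell heights summed per work item (rows) and per round (columns).
    heights = [
        [1, 2, 3],   # work item 1, present in rounds 1..3
        [1, 2, 0],   # work item 2, left the system after round 2
        [0, 1, 2],   # work item 3, entered in round 2
    ]

    row_total = sum(sum(row) for row in heights)         # left-to-right, per work item
    col_total = sum(sum(col) for col in zip(*heights))   # top-to-bottom, per round
    assert row_total == col_total                        # both equal 12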

Some Applications of 3D-Little's Law

Value

As a warming-up exercise consider as the height the (business) value of an item. Call this value 'V'. Every work item will have its own specific value.
[Figure: new doc 12_1]

 

 

\overline{\mathrm{Value}} = \lambda \overline{V W}

The interpretation of this relation is that the 'average (business) value of unfinished work in the system at any time' is equal to the average input rate multiplied by the 'average of the product of cycle time and value'.

Teams may want to minimise this while at the same time maximising the value output rate.

Total Operational Cost

As the next example let's take as the height for the cells a sequence of numbers 1, 2, 3, .... An example is shown in the figures below. What are the interpretations in this case?

Suppose we have a work item that has an operational cost of 1 per day. Then the sequence 1, 2, 3, ... gives the total cost to date. At day 3, the total cost is 3 times 1 which is the third number in the sequence.

[Figure: new doc 12_2]

The 'vertical' sum is just the 'Total Cost of unfinished work in the system'.

For the interpretation of the 'horizontal' sum we need to add the numbers. For a work item that is in the system for 'n' days, the total is 1+2+3+...+n which equals 1/2 n (n+1). For 3 days this gives 1+2+3 = 1/2 * 3 * 4 = 6. Thus, the interpretation of the 'horizontal' sum is 1/2 W (W+1) in which 'W' represents the waiting time of the item.

Putting this together gives an additional Little's Law of the form:

\overline{\mathrm{Cost}} = \frac{1}{2} \lambda C \overline{W(W + 1)}

where 'C' is the operational cost rate of a work item and \lambda is the (average) input rate. If instead of rounds in a game, the 'Total Cost in the system' is measured at a time interval 'T' the formula slightly changes into

\overline{\mathrm{Cost}} = \frac{1}{2} \lambda C \overline{W\left(W + T\right)}

Teams may want to minimise this, which gives an interesting optimisation problem if different work item types have different associated operational cost rates. How should the capacity of the team be divided over the work items? This is a topic for another blog.
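As a quick illustration, here is a Python sketch of evaluating this relation for one work item type; the cycle times and rates below are made-up numbers, not taken from the text:

    # Sketch: average total operational cost via 1/2 * lambda * C * avg(W * (W + T)).
    cycle_times = [3.0, 5.0, 2.0, 6.0, 4.0]   # W per work item, in days (hypothetical)
    input_rate = 1.2                          # lambda: items arriving per day
    cost_rate = 10.0                          # C: operational cost per item per day
    T = 1.0                                   # length of the measurement interval, in days

    avg_w_w_plus_t = sum(w * (w + T) for w in cycle_times) / len(cycle_times)
    avg_cost = 0.5 * input_rate * cost_rate * avg_w_w_plus_t
    print(avg_cost)   # approximately 132 for these numbers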

Just-in-Time

For a slightly more odd relation consider items that have a deadline associated with them. Denote the date and time of the deadline by 'D'. As the height choose the number of time units before or after the deadline the item is completed. Further, call 'T' the time at which the team takes up the work on the item. Then the team finishes work on this item at time T + W, where 'W' represents the cycle time of the work item.

[Figure: new doc 12_4]

In the picture on the left a work item is shown that is finished 2 days before the deadline. Notice that the height decreases as the deadline is approached. Since it is finished 2 time units before the deadline, the just-in-timeness is 2 at the completion time.

[Figure: new doc 12_3]

The picture on the left shows a work item completed one time unit after the deadline, which has an associated just-in-timeness of 1.

 

\overline{\mathrm{Just-in-Time}} = \frac{1}{2} \lambda \overline{|T+W-D|(|T+W-D| + 1)}

This example may sound very exotic and not very useful. Still, a team might want to look at what the best time is to start working on an item so as to minimise the above variable.

Conclusion

From our 'playing around' with the size of areas and volumes and realising that counting it in different ways (left-to-right and top-to-bottom) should give the same result I have been able to derive a new set of relations.

In this blog I have rederived well-known variations on Little's Law regarding subsystems and work items types. In addition I have derived new relations for the 'Average Total Operational Cost', 'Average Value', and 'Average Just-in-Timeness'.

Together with the familiar Little's Law these give rise to interesting optimisation problems and may lead to practical guidelines for teams to create even more value.

I'm curious to hear about the variations that you can come up with! Let me know by posting them here.

References

[Lit11] John D.C. Little, "Little’s Law as Viewed on Its 50th Anniversary", 2011, Operations Research, Vol. 59 , No 3, pp. 536-549, https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf

 

 

Categories: Companies

The Product Wall Release Workshop

BigVisible Solutions :: An Agile Company - Fri, 08/15/2014 - 17:01

Multi-team Release Planning, as it is often executed, fails to bring alignment beyond one-time, inter-team coordination. When the “Chief Product Owner” arrives with just descriptions of features, the teams don’t learn the connection between features and value. The Product Wall Release Workshop brings together all the elements of business needs, user experience, value proposition, dependency […]

The post The Product Wall Release Workshop appeared first on BigVisible Solutions.

Categories: Companies

Learn a new domain every year

TargetProcess - Edge of Chaos Blog - Thu, 08/14/2014 - 23:25

How does diversity help us in problem solving, creativity and overall intelligence? It helps a lot. Diverse groups of people can produce better results and radiate more creativity. But what about your own, personal diversity? Is it a good idea to accumulate knowledge from a wide range of disciplines? Does knowledge of music theory help you write better code? Does knowledge from biology make you a better user experience designer? I believe yes, and here is why.

[Image: untitled_3_by_mpcine-d5simvn]

source: Escher Butterfly Wallpaper by MPCine

Douglas Hofstadter and Emmanuel Sander wrote a very controversial book, Surfaces and Essences. It is not an easy read, but it is time well spent. The authors unfold the thinking process from language up to high-level constructs. They show how analogy-making helps us think, generate new ideas and fuel our creativity, including scientific insights.

This book deeply resonated with me. In general I agree that analogy-making is at the core of our creativity. I even tried to apply knowledge from the Running domain to the Software Development domain and generated some quite interesting ideas like Interval development. Sure, these ideas can’t be proved easily, since an analogy doesn’t mean the idea is great. But still, it is relatively easy to take knowledge from one domain and apply it to another domain.

How can it help me?

All that brought me to the idea of increasing my personal diversity and expanding my knowledge beyond typical areas like systems thinking, software architecture, group dynamics, innovation models, user experience and other stuff every CEO learns. I have read books and taken courses on quite diverse topics already, but I did that in a chaotic way.

Suddenly it became obvious to me how all these new domains can help me to be more creative and solve problems better.

What domains should I explore?

I think you should try anything you always wanted to learn, but didn’t have time to. It is quite hard to predict what analogies can be generated from unknown domains. For example, you always wanted to know how people paint, how art evolved and how Michelangelo painted a fresco of The Last Judgement on the altar wall of the Sistine Chapel. Dig into the art domain and learn as much as you can in a single year. Will it help you to be a better software developer? Why not? If you try to paint something you can train patience and learn how to sketch (everybody should sketch, you know). Michelangelo’s approaches may give you some ideas how to structure your work. As I said, it is hard to predict exact ideas that you’ll generate in the end, but I promise you will generate some.

I personally want to study biology, music theory, architecture, education, medicine, go and swimming. If a simple running domain gave me new insights, I believe larger and more complex domains will bring even more value.

Why one year?

A year is a good timeframe to focus on something. It will be your new hobby for a full year. You can read 20+ books, take 1-3 online courses, maybe take offline courses, try to apply your new knowledge constantly. Small domains demand less time, but larger domains are hard to grasp in 2-3 months.

I don’t believe in quick solutions. You can read a book or two about a subject and have some fresh air in your head, but it is not enough to just scratch the surface. In 10 years you will have a decent knowledge in 10 domains. That sounds cool to me.

Did you try that?

Nope. I started to dig into music theory recently. So far I’m just sharing the idea, in the hope that you’ll like it and give it a try.

And maybe, just maybe, you’ll even find your new passion. Who knows?

Categories: Companies

What You Need to Know About Taskboards vs. Drill-Through Boards

Have you ever wondered when to use a taskboard or a drill-through board in LeanKit? Taskboards and drill-through boards are both designed to assist with visualizing the breakdown of work, yet they have distinctly different uses. In an interview with a panel of our product experts, we learned that each option offers its own unique advantages. What’s the main distinction […]

The post What You Need to Know About Taskboards vs. Drill-Through Boards appeared first on Blog | LeanKit.

Categories: Companies

Agile2014 Gratitude

Applied Frameworks - Tue, 08/12/2014 - 03:38

I really enjoyed meeting new people and seeing so many old friends at Agile2014 in Orlando. Thank you to everyone who attended my session, asked questions and provided feedback, which encouraged me and gave me ideas for future events.

Here is the feedback for "Teaching Agile to Management":

"Your session's recorded attendance was 80 attendees (at start), 76 (in the middle) and 76 (at the end). 37 attendees left feedback.

"The feedback questions are based on a 5 rating scale, with 5 being the highest score. Your average ratings are shown below:

  • Session Meets Expectations: 4.22
  • Recommend To Colleague: 4.22
  • Presentation Skills: 4.49
  • Command Of Topic: 4.73
  • Description Matches Content: 4.22
  • Overall Rating: 4.24"

The slide deck is available for download here. The Word file for the "Role-ing Doughnut Game" is also available. I print the file on Avery labels (10 to a sheet). I measure and cut 8 cards per sheet out of card stock sheets to mount the labels. The poster for the game is also available for download. I order 3' x 4' posters from FedEx Office.

Please share your experiences in the comments and feel free to send any questions our way.

Categories: Companies