
Feed aggregator

5 Reasons why you should test your code

Xebia Blog - Mon, 07/27/2015 - 10:37

It is just like in mathematics class: when I had to prove Thales’ theorem, I wrote “Can’t you see that B has a right angle?! Q.E.D.” – but the teacher still gave me an F.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. Ok, let’s fix it, keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks while you are looking the other way.
If you had tests with 100% code coverage, a red error would have appeared in the console or – even better – a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.


100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs make a fair number of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified – mainly conditions flipped, for loops made shorter or longer, and so on – and the test suite is run a second time. If a test fails after a modification, there is an assertion covering that case, which is good. If all tests still pass, the mutant survived: your suite executes the code but asserts too little.
However, 100% coverage does feel really good if you are an OCD-person.
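
To make this concrete, here is a toy sketch of the idea in Python (real mutation-testing tools automate the mutating and re-running; the function and test cases are invented for illustration):

# Run the suite against the original code and against a "mutant" with a
# flipped condition. If the suite also passes for the mutant, the mutant
# survived and the assertions are too weak.

def is_adult(age):
    return age >= 18          # original condition

def is_adult_mutant(age):
    return age > 18           # mutant: ">=" flipped to ">"

def suite_passes(fn):
    checks = [(17, False), (21, True)]  # note: no assertion for the boundary, age 18
    return all(fn(age) == expected for age, expected in checks)

assert suite_passes(is_adult)        # the real implementation passes
if suite_passes(is_adult_mutant):    # ...but so does the mutant
    print("Mutant survived: the suite needs an assertion for age 18")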

The better your test coverage and assertion density, the higher the probability of catching regression bugs. Especially as an application grows, you may encounter a lot of regression bugs during development – and catching them there, rather than in production, is exactly what you want.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006, and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may change this line – not because he is not funny, but because he simply does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal two years later.

Still, every application contains bugs you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, e.g. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces it, then fix the bug. Now the bug can never sneak back in unnoticed. You win.
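
For the broken users//edit link above, that workflow could look like this minimal Python sketch (the function name and the chosen fix are invented for illustration):

# Step 1: write the test. Against the old code it failed, because a missing
# id silently produced "users//edit". Step 2: the fix below makes it pass,
# and from now on the test guards against the bug returning.

def edit_user_url(user_id):
    if user_id is None:
        raise ValueError("user_id is required")  # the fix: fail loudly
    return "users/%d/edit" % user_id

def test_edit_url_requires_user_id():
    try:
        edit_user_url(None)
    except ValueError:
        pass
    else:
        raise AssertionError("a link was generated without a user id")
    assert edit_user_url(24) == "users/24/edit"

test_edit_url_requires_user_id()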

2. Improve the implementation via new insights

“Premature optimization is the root of all evil” is something you hear a lot. But that does not mean you should implement your solution quick and dirty, without code reuse.

Good software craftsmanship is not only about solving a problem effectively; it is also about maintainability, durability, performance and architecture. Tests can help you with this: they force you to slow down and think.

If you start writing your tests and you have trouble doing so, this may be an indication that your implementation can be improved. Furthermore, your tests make you think about input and output, corner cases and dependencies. So you think you understand all aspects of that super method you wrote that can handle everything? Write tests for it, and better code is guaranteed.

Test Driven Development even helps you optimize your code before you write it, but that is another discussion.

3. It saves time, really

The number one excuse not to write tests is that you do not have time for it, or that your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate-elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you would probably say yes. A total codebase refactoring is a ‘no go’ because you cannot oversee which parts of your application will fail. If you accept the refactoring challenge anyway, it will at least give you a lot of headaches and cost you a lot of time – time you could have used for writing the tests. But you had no time for writing tests, right? So your crappy implementation stays.

Dilbert bugfix

A bug can always be introduced, even in well-refactored code. How many times have you said to yourself, after a day of hard work, that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code well, 90% of the bugs you introduce are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but once you get the hang of it, writing tests becomes second nature. It is important that you write code for the long term. As an application grows, it really pays off to have tests. It saves you time, and developing becomes more fun because you are not blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to – not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, and some read the tests. What I like about tests – for example when you are using a framework like Jasmine – is that they give a structured overview of all of a method’s features. A separate documentation file can be as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation, they forget to update it when a method signature changes, and eventually they stop writing docs altogether.

Developers do not like to write tests either, but tests at least serve more purposes than docs. If you use the test suite as documentation, your documentation is always up to date, with no extra effort!
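
For example, a spec written with pytest reads like a feature list. (The article mentions Jasmine, which is JavaScript; this Python sketch with an invented Calculator class shows the same idea.)

import pytest

class Calculator:
    def divide(self, a, b):
        return a / b

class TestCalculatorDivide:
    # Each test name documents one behaviour, so the suite reads like a spec.

    def test_returns_the_quotient_of_two_numbers(self):
        assert Calculator().divide(10, 4) == 2.5

    def test_raises_a_clear_error_when_dividing_by_zero(self):
        with pytest.raises(ZeroDivisionError):
            Calculator().divide(1, 0)

When a method’s behaviour changes, these ‘docs’ fail loudly instead of silently going stale.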

5. It is fun

Nowadays there are no separate testers and developers: the developers are the testers. People who write good tests are also the best programmers. After all, your test is also a program, so if you like programming, you should like writing tests.
The reason writing tests may feel unproductive is that it gives you the idea that you are not producing something new.


Is the build red? Fix it immediately!

However, in a modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp, and they may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production happens automatically when the tests pass and everything else is OK. With tests you have more confidence that your code is production ready.

A lot of measurements can be generated as well, like coverage and mutation scores, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, fixing it is the first priority, to keep the codebase in good shape. It takes some discipline, but once you get used to it, you have more fun developing new features and making cool stuff.

Categories: Companies

Story Telling with Story Mapping

Once upon a time, a customer had a great buying experience on a website. The customer loved how, from the moment they arrived on the site to the moment they checked out a product, the process was intuitive and easy to use. The process and design of the customer experience was not by accident. In fact, it was done very methodically, using story mapping.

What is Story Mapping

Story Mapping is a practice that provides a framework for grouping user stories based on how a customer or user would use the system as a whole. Established by Jeff Patton, it helps the team understand the customer experience by imagining what the customer process might be. This prompts the team to think through what the customer finds valuable.

Benefits of Story Mapping

Story Mapping is a way to bridge the gap between an idea and the incremental work ahead. It's a great way to decompose an idea into a number of unique user stories. What are some additional benefits of story mapping?
  • It moves away from thinking of functionality first and toward the customer experience first. 
  • It provides the big picture and end-to-end view of the work ahead
  • It's a decomposition tool from idea to multiple user stories. 
  • It asks the team to identify the highest value work from a customer perspective and where you may want the most customer feedback.
  • It advocates cutting only one increment of work at a time, recognizing that feedback from the current increment will help shape subsequent increments. 
Getting started with Story Mapping
How might you get started in establishing a story map? It starts by having wall space available on which to lay out the customer experience. Next, educate the team on the story mapping process (see below). It's best to keep to a Scrum team size (e.g., 7 +/- 2), where everyone participates in the process. Then, as a team, follow these high-level steps:
  • Create the “backbone” of the story map.  These are the big tasks that the users engage with. Capture the end-to-end customer experience.  Start by asking “what do users do?”  You may use a quiet brainstorming approach to get a number of thoughts on the wall quickly
  • Then start adding steps that happen within each backbone.  
  • From there explore activities or options within each step.  Ask, what are the specific things a customer would do here?  Are there alternative things they could do?  These activities may be epics and even user stories. 
  • Create the “walking skeleton”. This is where you slice a set of activities or options that gives you the minimum end-to-end value of the customer experience. Only cut enough work that can be completed within one to three sprints and that represents customer value. 
As you view the wall, the horizontal axis defines the flow along which you place the backbone and steps. The vertical axis under each contains the activities or options, represented by epics and user stories for that particular area. Use short verb/noun phrases to capture the backbones, steps, and activities (e.g. capture my address, view my order status, receive invoice). 

Next time your work appears to represent a customer experience, consider story mapping as a way to embrace the customer perspective. Story mapping provides a valuable tool for the team to understand the big picture while decomposing the experience into more bite-sized work, which allows optionality in cutting an increment of work. It's another tool in your requirements decomposition toolkit.
Categories: Blogs

Neo4j: From JSON to CSV to LOAD CSV via jq

Mark Needham - Sun, 07/26/2015 - 01:05


In my last blog post I showed how to import a Chicago crime categories & sub categories JSON document using Neo4j’s cypher query language via the py2neo driver. While this is a good approach for people with a developer background, many of the users I encounter aren’t developers and favour using Cypher via the Neo4j browser.

If we’re going to do this we’ll need to transform our JSON document into a CSV file so that we can use the LOAD CSV command on it. Michael pointed me to the jq tool which comes in very handy.

To recap, this is a part of the JSON file:

{
    "categories": [
        {
            "name": "Index Crime",
            "sub_categories": [
                {
                    "code": "01A",
                    "description": "Homicide 1st & 2nd Degree"
                },
            ]
        },
        {
            "name": "Non-Index Crime",
            "sub_categories": [
                {
                    "code": "01B",
                    "description": "Involuntary Manslaughter"
                },
            ]
        },
        {
            "name": "Violent Crime",
            "sub_categories": [
                {
                    "code": "01A",
                    "description": "Homicide 1st & 2nd Degree"
                },
            ]
        }
    ]
}

We want to get one row for each sub category which contains three columns – category name, sub category code, sub category description.

First we need to pull out the categories:

$ jq ".categories[]" categories.json
 
{
  "name": "Index Crime",
  "sub_categories": [
    {
      "code": "01A",
      "description": "Homicide 1st & 2nd Degree"
    },
  ]
}
{
  "name": "Non-Index Crime",
  "sub_categories": [
    {
      "code": "01B",
      "description": "Involuntary Manslaughter"
    },
  ]
}
{
  "name": "Violent Crime",
  "sub_categories": [
    {
      "code": "01A",
      "description": "Homicide 1st & 2nd Degree"
    },
  ]
}

Next we want to create a row for each sub category with the category alongside it. We can use the pipe operator to combine the two selectors:

$ jq ".categories[] | {name: .name, sub_category: .sub_categories[]}" categories.json
 
{
  "name": "Index Crime",
  "sub_category": {
    "code": "01A",
    "description": "Homicide 1st & 2nd Degree"
  }
}
...
{
  "name": "Non-Index Crime",
  "sub_category": {
    "code": "01B",
    "description": "Involuntary Manslaughter"
  }
}
...
{
  "name": "Violent Crime",
  "sub_category": {
    "code": "01A",
    "description": "Homicide 1st & 2nd Degree"
  }
}

Now we want to un-nest the sub category:

$ jq ".categories[] | {name: .name, sub_category: .sub_categories[]} | [.name, .sub_category.code, .sub_category.description]" categories.json
 
[
  "Index Crime",
  "01A",
  "Homicide 1st & 2nd Degree"
]
 
[
  "Non-Index Crime",
  "01B",
  "Involuntary Manslaughter"
]
 
[
  "Violent Crime",
  "01A",
  "Homicide 1st & 2nd Degree"
]

And finally let’s use the @csv filter to generate CSV lines:

$ jq ".categories[] | {name: .name, sub_category: .sub_categories[]} | [.name, .sub_category.code, .sub_category.description] | @csv" categories.json
"\"Index Crime\",\"01A\",\"Homicide 1st & 2nd Degree\""
"\"Index Crime\",\"02\",\"Criminal Sexual Assault\""
"\"Index Crime\",\"03\",\"Robbery\""
"\"Index Crime\",\"04A\",\"Aggravated Assault\""
"\"Index Crime\",\"04B\",\"Aggravated Battery\""
"\"Index Crime\",\"05\",\"Burglary\""
"\"Index Crime\",\"06\",\"Larceny\""
"\"Index Crime\",\"07\",\"Motor Vehicle Theft\""
"\"Index Crime\",\"09\",\"Arson\""
"\"Non-Index Crime\",\"01B\",\"Involuntary Manslaughter\""
"\"Non-Index Crime\",\"08A\",\"Simple Assault\""
"\"Non-Index Crime\",\"08B\",\"Simple Battery\""
"\"Non-Index Crime\",\"10\",\"Forgery & Counterfeiting\""
"\"Non-Index Crime\",\"11\",\"Fraud\""
"\"Non-Index Crime\",\"12\",\"Embezzlement\""
"\"Non-Index Crime\",\"13\",\"Stolen Property\""
"\"Non-Index Crime\",\"14\",\"Vandalism\""
"\"Non-Index Crime\",\"15\",\"Weapons Violation\""
"\"Non-Index Crime\",\"16\",\"Prostitution\""
"\"Non-Index Crime\",\"17\",\"Criminal Sexual Abuse\""
"\"Non-Index Crime\",\"18\",\"Drug Abuse\""
"\"Non-Index Crime\",\"19\",\"Gambling\""
"\"Non-Index Crime\",\"20\",\"Offenses Against Family\""
"\"Non-Index Crime\",\"22\",\"Liquor License\""
"\"Non-Index Crime\",\"24\",\"Disorderly Conduct\""
"\"Non-Index Crime\",\"26\",\"Misc Non-Index Offense\""
"\"Violent Crime\",\"01A\",\"Homicide 1st & 2nd Degree\""
"\"Violent Crime\",\"02\",\"Criminal Sexual Assault\""
"\"Violent Crime\",\"03\",\"Robbery\""
"\"Violent Crime\",\"04A\",\"Aggravated Assault\""
"\"Violent Crime\",\"04B\",\"Aggravated Battery\""

The only annoying thing about this output is that all the double quotes are escaped. We can sort that out by passing the ‘-r’ flag when we call jq:

$ jq -r ".categories[] | {name: .name, sub_category: .sub_categories[]} | [.name, .sub_category.code, .sub_category.description] | @csv" categories.json
"Index Crime","01A","Homicide 1st & 2nd Degree"
"Index Crime","02","Criminal Sexual Assault"
"Index Crime","03","Robbery"
"Index Crime","04A","Aggravated Assault"
"Index Crime","04B","Aggravated Battery"
"Index Crime","05","Burglary"
"Index Crime","06","Larceny"
"Index Crime","07","Motor Vehicle Theft"
"Index Crime","09","Arson"
"Non-Index Crime","01B","Involuntary Manslaughter"
"Non-Index Crime","08A","Simple Assault"
"Non-Index Crime","08B","Simple Battery"
"Non-Index Crime","10","Forgery & Counterfeiting"
"Non-Index Crime","11","Fraud"
"Non-Index Crime","12","Embezzlement"
"Non-Index Crime","13","Stolen Property"
"Non-Index Crime","14","Vandalism"
"Non-Index Crime","15","Weapons Violation"
"Non-Index Crime","16","Prostitution"
"Non-Index Crime","17","Criminal Sexual Abuse"
"Non-Index Crime","18","Drug Abuse"
"Non-Index Crime","19","Gambling"
"Non-Index Crime","20","Offenses Against Family"
"Non-Index Crime","22","Liquor License"
"Non-Index Crime","24","Disorderly Conduct"
"Non-Index Crime","26","Misc Non-Index Offense"
"Violent Crime","01A","Homicide 1st & 2nd Degree"
"Violent Crime","02","Criminal Sexual Assault"
"Violent Crime","03","Robbery"
"Violent Crime","04A","Aggravated Assault"
"Violent Crime","04B","Aggravated Battery"

Excellent. The only thing left is to write a header and then direct the output into a CSV file and get it into Neo4j:

$ echo "category,sub_category_code,sub_category_description" > categories.csv
$ jq -r ".categories[] |
         {name: .name, sub_category: .sub_categories[]} |
         [.name, .sub_category.code, .sub_category.description] |
         @csv " categories.json >> categories.csv
$ head -n10 categories.csv
category,sub_category_code,sub_category_description
"Index Crime","01A","Homicide 1st & 2nd Degree"
"Index Crime","02","Criminal Sexual Assault"
"Index Crime","03","Robbery"
"Index Crime","04A","Aggravated Assault"
"Index Crime","04B","Aggravated Battery"
"Index Crime","05","Burglary"
"Index Crime","06","Larceny"
"Index Crime","07","Motor Vehicle Theft"
"Index Crime","09","Arson"
Then we can load the CSV file with Cypher:

LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-spark-chicago/categories.csv" AS row
MERGE (c:CrimeCategory {name: row.category})
MERGE (sc:SubCategory {code: row.sub_category_code})
ON CREATE SET sc.description = row.sub_category_description
MERGE (c)-[:CHILD]->(sc)

And that’s it!

Categories: Blogs

Do you rotate Scrum masters in teams?

Ben Linders - Sat, 07/25/2015 - 17:54
Do you have the same person acting as a Scrum master for every iteration in your team(s)? Or do different team members take the role in turns? I'd like to hear how you do it, what works for you, and why. Continue reading →
Categories: Blogs

See you at Agile 2015!

Agile Product Owner - Sat, 07/25/2015 - 00:48

Hi Folks,

The Scaled Agile team has been busy getting ready for Agile2015.

In order to support our growing community, we are a conference sponsor and will have our own booth for the first time, which we think of as “SPC and Partner Central.” Please stop by the booth and say hello.

Drew and I will both be speaking at Agile2015, as follows:

  • I’ll also be participating in the “Stalwarts” session on Tuesday, Aug 3 at 2pm – Stalwarts 411 and why?

It’s sure to be a fun-filled and action-packed week. We all look forward to sharing experiences, seeing old friends, and making new ones.

SAFe travels, see you at Agile 2015!

Categories: Blogs

Scrum Immersion workshop at GameStop - Case Study

Agile Complexification Inverter - Fri, 07/24/2015 - 23:27
Here's an overview of a Scrum Immersion workshop done at GameStop this month, as a case study example.

Normally these workshops start with the leadership (the stakeholders or shareholders) which have a vision for a product (or project). This time we skipped this activity.

The purpose of the workshop is to ensure alignment between the leadership team and the Agile Coaches with regard to the upcoming scrum workshop for the team(s), to set expectations for a transition from current (ad-hoc) practices to Scrum, and to explain and educate on the role of the Product Owner.

Expected Outcomes:
  • Create a transition plan/schedule
  • Set realistic expectations for transition and next release
  • Overview of Scrum & leadership in an Agile environment
  • Identify a Scrum Product Owner – review role expectations
  • Alignment on Project/Program purpose or vision
  • Release goal (within context of Project/Program & Scrum transition)

Once we have alignment on the Product Owner role and the Project Vision we typically do a second workshop for the PO to elaborate the Product Vision into a Backlog. This time we skipped this activity.

The purpose of the workshop is to educate the Product Owner (one person) and prepare a product backlog for the scrum immersion workshop. Also include the various consultants, SME, BA, developers, etc. in the backlog grooming process.

Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Overview of Scrum & Product Owner role (and how the team supports this role)
  • Set PO role responsibilities and expectations
  • Alignment of Release goal (within context of Project/Program & Scrum transition)
  • Product Backlog ordered (prioritized) for the first 2 sprints
  • Agreement to Scrum cadence for planning meetings and grooming backlog and sprint review meetings

Once we have a PO engaged and we have a Product Backlog it is time to launch the team with a workshop - this activity typically requires from 2 to 5 days. This is the activity we did at GameStop this week.
The primary purpose of the workshop is to teach just enough of the Scrum process framework and the Agile mindset to get the team functioning as a Scrum team and working on the product backlog immediately after the workshop ends (begin Sprint One).

Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Basic mechanics of Scrum process framework
  • Understanding of additional engineering practices required to be an effective Scrum team
  • A groomed / refined product backlog for 1 – 3 iterations
  • A backlog that is estimated for 1 – 3 iterations
  • A Release plan, and expectations of its fidelity – plans to re-plan
  • Ability to start the very next day with Sprint Planning

Images from the workshop

The team brainstormed and then prioritized the objectives and activities of the workshop.


Purpose and Objectives of the Workshop

The team then prioritized the Meta backlog (a list of both work items and learning items and activities) for the workshop.

Meta Backlog of workshop teams - ordered by participants
Possible PBI for Next Meta Sprint
Possible PBI for Later Sprints
Possible PBI for Some Day
Possible PBI for Another Month or Never
A few examples of work products (outcomes) from the workshop.

Affinity grouping of Persona for the user role in stories
Project Success Sliders activity
Team Roster (# of teams person is on)
A few team members working hard
Three stories written during elaboration activity
A few stories after Affinity Estimation
Release Planning: using the concept of deriving duration based upon the estimated effort. We made some assumptions about the desired business outcome; that was to finish the complete product backlog by a fixed date.

The 1st iteration of a Release Plan

That didn't feel good to the team, so we tried a different approach: fix the scope and cost, but have a variable timeframe.

The 2nd iteration of a Release Plan

That didn't feel good to the PO, so we tried again. This time we fixed the cost and time, but varied the features, and broke the product backlog into milestones of releasable, valuable software.

The 3rd iteration of a Release Plan

This initial release plan felt better to both the team and the PO, so we start here. Ready for sprint planning tomorrow.



Categories: Blogs

Android: Custom ViewMatchers in Espresso

Xebia Blog - Fri, 07/24/2015 - 17:03

Somehow it seems that testing is still treated like an afterthought in mobile development. The introduction of the Espresso test framework in the Android Testing Support Library improved the situation a little bit, but the documentation is limited and it can be hard to debug problems. And you will run into problems, because testing is hard to learn when there are so few examples to learn from.

Anyway, I recently created my first custom ViewMatcher for Espresso and I figured I would like to share it here. I was building a simple form with some EditText views as input fields, and these fields should display an error message when the user entered an invalid input.


Android TextView with error message

In order to test this, my Espresso test enters an invalid value in one of the fields, presses "submit" and checks that the field is actually displaying an error message.

@Test
public void check() {
  Espresso
      .onView(ViewMatchers.withId(R.id.email))
      .perform(ViewActions.typeText("foo"));
  Espresso
      .onView(ViewMatchers.withId(R.id.submit))
      .perform(ViewActions.click());
  Espresso
      .onView(ViewMatchers.withId(R.id.email))
      .check(ViewAssertions.matches(
          ErrorTextMatchers.withErrorText(Matchers.containsString("email address is invalid"))));
}

The real magic happens inside the ErrorTextMatchers helper class:

public final class ErrorTextMatchers {

  /**
   * Returns a matcher that matches {@link TextView}s based on the error text property value.
   *
   * @param stringMatcher {@link Matcher} of {@link String} with text to match
   */
  @NonNull
  public static Matcher<View> withErrorText(final Matcher<String> stringMatcher) {

    return new BoundedMatcher<View, TextView>(TextView.class) {

      @Override
      public void describeTo(final Description description) {
        description.appendText("with error text: ");
        stringMatcher.describeTo(description);
      }

      @Override
      public boolean matchesSafely(final TextView textView) {
        // getError() returns null when no error is set; guard against it so a
        // missing error message shows up as a failed match instead of a crash.
        final CharSequence error = textView.getError();
        return error != null && stringMatcher.matches(error.toString());
      }
    };
  }
} 

The main details of the implementation are as follows. We make sure that the matcher only matches TextView instances (and subclasses) by returning a BoundedMatcher from withErrorText(). This makes it very easy to implement the matching logic itself in matchesSafely(): simply take the result of the TextView’s getError() method and feed it to the wrapped string Matcher. Finally, we have a simple implementation of the describeTo() method, which is only used to generate debug output on the console.

In conclusion, it turns out to be pretty straightforward to create your own custom ViewMatcher. Who knew? Perhaps there is still hope for testing mobile apps...

You can find an example project with the ErrorTextMatchers on GitHub: github.com/smuldr/espresso-errortext-matcher.

Categories: Companies

SonarLint brings SonarQube rules to Visual Studio

Sonar - Fri, 07/24/2015 - 14:33

We are happy to announce the release of SonarLint for Visual Studio version 1.0. SonarLint is a Visual Studio 2015 extension that provides on-the-fly feedback to developers on any new bug or quality issue injected into C# code. The extension is based on and benefits from the .NET Compiler Platform (“Roslyn”) and its code analysis API to provide a fully-integrated user experience in Visual Studio 2015.

Features

There are lots of great rules in the tool. We won’t list all 76 of the rules we’ve implemented so far, but here are a couple that show what you can expect from the product:

  • Defensive programming is a good practice, but in some cases you shouldn’t simply check whether an argument is null or not. For example, value types (such as structs) can never be null, and as a consequence comparing a non-restricted generic type parameter to null might not make sense (S2955), because the comparison will always return false for structs.
  • Did you know that a static field of a generic class is not shared among instances of different close-constructed types (S2743)? So how many instances of a static field such as DefaultInnerComparer do you think will be created? It is static, so you might have guessed one, but actually there will be an instance for each combination of type parameters used to instantiate the class.
  • We’ve been using SonarLint internally for a while now, and are running it against a few open source libraries too. We’ve already found bugs in both Roslyn and NuGet with the “Identical expressions used on both sides of a binary operator” rule (S1764).
  • Also, some cases of null pointer dereferencing can be detected (S1697).

This is just a small selection of the implemented rules. To find out more, go and check out the product.

How to get it?

SonarLint is packaged in two ways:

  • Visual Studio Extension
  • Nuget package

To install the Visual Studio extension, download the VSIX file from the Visual Studio Gallery. Optionally, you can download the complete source code and build the extension yourself. Oh, and you might have already realized: this product is open source (under the LGPLv3 license), so you can contribute if you’d like.

By the way, the SonarQube C# plugin internally uses the same code analyzers, so if you are already using the SonarQube platform for C# projects, from now on you can also get the issues directly in the IDE.

What’s next?

In the following months we’ll increase the number of supported rules, and as with all our SonarQube plugins, we are moving towards bug-detection rules. Alongside this effort, we’re continuously adding code fixes to the Visual Studio extension, so that issues identified by the tool can be fixed automatically inside Visual Studio. In the longer run, we aim to bring the same analyzers we’ve implemented for C# to VB.Net as well. Updates are coming frequently, so keep an eye on the Visual Studio Extension Update window.

SonarLint for Visual Studio is one piece of the puzzle we’ve been working on since the beginning of the year: providing the .Net community with a tight, easy and native integration of the SonarQube ecosystem into the Microsoft ALM suite. This 1.0 release of SonarLint is a good opportunity to once again warmly thank Jean-Marc Prieur, Duncan Pocklington and Bogdan Gavril from Microsoft. They have contributed heavily, on a daily basis, to the effort to make SonarQube a central piece of any .Net development environment.

For more information on the product go to http://vs.sonarlint.org or follow us on Twitter.

Categories: Open Source

Personal message from Net Objectives' new COO, Marc Danziger

NetObjectives - Fri, 07/24/2015 - 08:26
Two months ago, I was contentedly leading my life as a singleton consultant down in Los Angeles. I’d managed to get to work for some pretty cool companies – Amgen, CarsDirect.com, the Academy of Motion Pictures Arts and Sciences (the Oscars folks), NBC, International Lease Finance Corp. (AIG), Lynda.com and Toyota. But, as the cliché goes, something was missing. I was finding that I could get a...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Lean Climbing

I’m on my way to Colorado for a weekend of outdoor adventures, climbing and mountain biking. As I was looking over the route for the climb (Kelso’s Ridge up Torrey’s Peak and a possible double with Gray’s Peak) I remembered an article I recently read on applying the principles of lean climbing to business (here).
I’ve done a couple 14ers before, so this will be my third and maybe fourth, all with my son. We plan on being lean on this climb. Enough water and food, but not much more. Our timebox is getting down the mountain before the afternoon thunderstorms. 
My current client seems to get lean. As we were working through the user stories for the project, they made the decision to prioritize and pull out the lower priority stories without any prompting from me. With so many of my clients, there are many discussions before they think about taking any stories out of the backlog for the current release. 

I think if I could take my clients on a climb, with each user story represented by a small rock in their backpack, they would soon realize the benefit of keeping it lean. There’s a cost with each user story, and being able to defer some stories means you get up the mountain sooner. 
Categories: Blogs

Good Software

Doc On Dev - Michael Norton - Thu, 07/23/2015 - 17:01
I saw a tweet this morning that caught me off-guard.
“If your software makes money it is good software by definition. Nothing else matters. #Agile ^ @SkankworksAgile pic.twitter.com/KwRFl9imjT” — AgileFortune (@AgileFortune), July 23, 2015

It doesn't strike me as consistent with the type of thing AgileFortune usually tweets. My initial reaction was to reply via Twitter, but I didn't feel I could express my thoughts well in 140 characters or less.

What is "good"?And by extension what is "good" software? Good has many meanings. To be morally excellent or virtuous, to be satisfactory in quality, quantity, or degree, to be proper or fit, well-behaved, beneficent, or honorable. These are all definitions of good.

Morals and Software?

One might argue that moral excellence, beneficence, or honorableness are not relevant to measures of "good" in software. I disagree. They are as relevant to software as they are to medical care, banking, manufacturing, and any other human endeavor. Dishonorable businesses that are morally questionable and do harm to others are not "good" businesses. Sex trafficking is not "good" business. Exploitation of others is not "good" business. Profits alone do not make a business "good". Businesses do not exist to make money, or rather they should not. Businesses should exist to provide an offering of value to others. Money is a means of measuring the value provided to others. With that money, we can continue to provide and improve our offering, should we choose.

Money is to a business as food is to a human. We do not live to eat; we eat to live. We do not run a business to make money, we make money to run a business. Greed and gluttony are entirely different matters; neither of which is healthy or "good".

Other aspect of "goodness"Let's set aside the moral and social aspects for a moment. This leaves us with satisfactory in quality, quantity, or degree, proper or fit, and well-behaved. Distilling it down to these elements, one could assert that if software makes money it is "good", for it behaves well enough for people to use it for some given purpose that results in profit to the software creator, be this in subscriptions, transaction fees, exchange for goods or service, or ad revenue.

Reliable? Must be good enough. It makes money. And nothing else matters.
Maintainable? What difference does that make? It makes money. And nothing else matters.
Secure? Bah! It makes money. And nothing else matters.
Scalable? Who cares? It makes money. And nothing else matters.

Let's look at this measure of success in another endeavor. An electrician whose installations fail on occasion (reliability), who makes junctions inaccessible and does not follow industry standards (maintainability), and who fails to adhere to safety standards (security), but gets paid for the job, has provided a "good" service. Because nothing other than money matters.

Good and Money are not the same

As a software developer – a professional software developer – your measure of "good" needs to include factors far beyond revenue generation. Open Source software is often quite good and makes no money. Commercial software is often poor and makes a lot of money. "Good" and "Money" are not the same. Software that makes money is not automatically good.


Categories: Blogs

Let’s Learn to Do Our Jobs Better, Not a Framework/Method Better

NetObjectives - Thu, 07/23/2015 - 16:11
The software industry seems so caught up in certifications and learning the latest framework, method or approach. The focus is on the wrong thing. It should be on learning how to learn, learning how to do our job better, learning how to collaborate better.  This has sometimes been talked about with the shu-ha-ri metaphor: Shu – initially obey traditional wisdom – that is, just do the techniques...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Why Timeboxing Sucks For Your New Agile Teams

Leading Agile - Mike Cottmeyer - Thu, 07/23/2015 - 15:42

Ever heard these from teams before regarding timeboxing?

  • “What we do won’t fit into a 2 week window”
  • “The nature of our work is too creative for a timebox”
  • <<<Fill in your variation here>>>

If you work with agile teams, some variation of these statements will be familiar. I consistently hear new teams say this over and over when I form new sets of teams.

Here’s the deal: the problem is not the timebox. In fact, the timebox is exposing problems you need to solve!

Let’s check out some examples of the team’s timeboxing complaints above.

“What we do won’t fit into a 2 week window”

An example team that could have this problem is one that is siloed by discipline. I am currently working with a set of agile data teams that fit this mold. The team members are very much specialists.

They have:

  • Data Architecture via Modeling and Data Flows
  • ETL
  • Data Analysis
  • Coding in Scala and Java
  • Data Analytics
  • Data Warehousing
  • Stored Procedures
  • Testing and Automation
  • Reporting
  • Deployment
  • And probably a few more things that I don’t even know about

In total these capabilities make up a cross-functional team. Inside that team, however, they typically do these tasks in sequence, waiting for each other to finish. It’s no wonder they can’t fit anything into a sprint!

To fix this

This team is finding ways to work in parallel. This feels a lot like design by contract for each of these sequential items.

To make this even faster

The team will generalize a bit more to swarm around work that can’t be broken into the design by contract method.

Swarming requires that team members learn how to maintain their specialty while learning how to help out with another capability the team needs. That doesn’t mean being a specialist in both – keep your sanity. So: “If I know ETL, can I learn to help with Data Analysis or Stored Procedures?”

Let’s take a look at another common one.

“The nature of our work is too creative for the time box”

Regarding creatives… there is a widespread belief that almost every discipline in product development is creative, at least in the eyes of its own practitioners.

We all see ourselves that way. There is a natural tendency to want to go somewhere, be creative, and come back with our creation. However, the importance of feedback is never more apparent than in truly creative environments.

Let’s consider a company creating its brand identity. The brand is going to have a significant impact on whether customers choose the company or a competitor. Even in this environment, we don’t want a team to go away and come back with some gold-plated logo that misses the mark after 3 months.

To fix this

Instead we want to see steady progress toward success. By keeping the actual creative property transparent while collaborating around it, we can maintain visibility of the value that is being created and decide when enough is good enough.

Getting small

Multiple creative deliverables will be taken into the timebox, because no one likes to commit to one thing and have it miss after two weeks. That’s too much risk, so the work is broken down into smaller deliverables. This benefits the “creative” and the customer.

To wrap up

Timeboxing is not the problem. Timeboxing exposes problems and encourages innovation to find new solutions. From sprints to daily commitments and on to release planning, you will find timeboxes everywhere in agile systems. If you use something like the pomodoro technique, there is one in almost every part of your day. It is critical for teams to realize that the timebox is a benefit, not a problem.

The post Why Timeboxing Sucks For Your New Agile Teams appeared first on LeadingAgile.

Categories: Blogs

7 Tips for Valuing Features in a Backlog

Johanna Rothman - Thu, 07/23/2015 - 15:37

Many product owners have a tough problem. They need so many of the potential features in the roadmap that they feel as if everything is #1 priority. They realize they can’t actually have everything as #1, and it’s quite difficult for them to rank the features.

This is the same problem as ranking for the project portfolio. You can apply similar thinking.

Once you have a roadmap, use these tips to help you rank the features in the backlog:

  1. Should you do this feature at all? I normally ask this question about small features, not epics. However, you can start with the epic (or theme) and apply this question there. Especially if you ask, “Should we do this epic for this release?”
  2. Use Business Value Points to see the relative importance of a feature. Assign each feature/story a unique value. If you do this with the team, you can explain why you rank this feature in this way. The discussion is what’s most valuable about this.

  3. Use Cost of Delay to understand the delay that not having this feature would incur for the release.

  4. Who has Waste from not having this feature? Who cannot do their work, or has a workaround because this feature is not done yet?

  5. Who is waiting for this feature? Is it a specific customer, or all customers, or someone else?

  6. Pair-wise and other comparison methods work. You can use single or double elimination as a way to say, “Let’s do this one now and that feature later.”

  7. What is the risk of doing this feature or not doing this feature?

Don Reinertsen advocates doing the Weighted Shortest Job First. That requires knowing the cost of delay for the work and its estimated duration. If you keep your stories small, you might have a good estimate. If not, you might not know what the weighted shortest job is.

And, if you keep your stories small, you can just use the cost of delay.
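
As a rough sketch of the arithmetic (the feature names and numbers below are invented): WSJF scores each item as cost of delay divided by duration, so a cheap-to-build, costly-to-delay feature rises to the top.

# Weighted Shortest Job First: score = cost of delay / estimated duration.
features = [
    {"name": "export to PDF", "cost_of_delay": 10, "duration": 5},
    {"name": "login via SSO", "cost_of_delay": 8, "duration": 2},
    {"name": "saved filters", "cost_of_delay": 3, "duration": 3},
]

for f in sorted(features, key=lambda f: f["cost_of_delay"] / f["duration"], reverse=True):
    print(f["name"], f["cost_of_delay"] / f["duration"])

# login via SSO 4.0   <- highest score: do this first
# export to PDF 2.0
# saved filters 1.0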

Jason Yip wrote Problems I have with SAFe-style WSJF, which is a great primer on Weighted Shortest Job First.

I’ll be helping product owners work through how to value their backlogs in Product Owner Training for Your Agency, starting in August. When you are not inside the organization, but supplying services to the organization, these decisions can be even more difficult to make. Want to join us?

Categories: Blogs

Monthly Retrospective Plan

Growing Agile - Thu, 07/23/2015 - 14:33
We hold a retrospective each month, to see how we are d […]
Categories: Companies

DevOps Culture and the Informed Workspace

Agile Management Blog - VersionOne - Thu, 07/23/2015 - 14:30

While the DevOps culture has been heavily focused on what tools to use, little thought has been given to what type of workspace is needed. Ever since the early days of agile, the importance of an informative workspace has been known. Many of the practices around working together, pair programming and the Onsite Customer from Extreme Programming were there to enable the rapid flow and visualization of how the team is doing. Other aspects, such as the Big Visual Chart, were included to keep the information flowing. We have made great progress in this category, but we still have more to do.

Now, fast forward to the DevOps culture. Much like the original agile movement, DevOps is a relatively unique change in the world of work that involves both cultural and technical shifts. It’s just not enough to have cool new tools like Puppet and Chef, or any of the other tools that make continuous delivery “a thing.” We need to think about how we are planning our stories. We need to include acceptance criteria that go beyond just “is it done?” all the way to “is it staying done?”

We often run into this as agile consultants. I have often gone to work with a client whose number one concern throughout the engagement is: “OK, and what do we do on day one of the sprint when we don’t have you here coaching us?” Now, they are fine on their own, but part of the plan is to know what to do after the training wheels come off. Let’s look at that same idea in terms of the DevOps culture. Stories have a very limited life: once the product owner has accepted a story, we tear it up and throw it away, metaphorically speaking. But that isn’t the end of the story. The software now needs to live and breathe in the big wide world. How do we do that? DevOps is of course the answer, but what exactly does that mean?

Workspaces in the DevOps Culture

As mentioned earlier in this article, the idea of an informed workspace is a valuable tool in our belt for moving deeper and wider into the DevOps culture. Think back to one of the biggest cultural changes called for in the early adoption of agile: bringing QA into the room. We aren’t treating QA as a separate team, but as part of the team. All of a sudden we are paying close attention to the number of tests that are passing, so now part of our Big Visual Chart is focused on the pass/fail rate of our tests, not just the status of the story itself. This shift took a lot of effort, and I think that if we are honest with ourselves it’s not done yet. But that is an article for another time. We want to take a deeper look at the keys to successfully effecting a further transformation to the DevOps culture. What aspects will really help us do more than just have lots of automated builds that we call done, with no thought to what happens after?

The first step is to think about the cultural changes required. What will we need to change in our thinking to make DevOps more than just another buzzword at our shop? The first, and hardest, change is to stop thinking in terms of the “DevOps team”. The whole team is part of Dev and Ops. There just is no wall to throw a “finished” product over anymore. It’s all about creating great and long-lasting software. There are many steps that we have already taken to get there, but this is one of the biggest. So let’s take a look at the different activities that really make a DevOps culture thrive.

DevOps Activities

Of course, the first thing one thinks about when discussing DevOps is the activities that support continuous delivery. This means an even higher need for all tests to be automated. Unit tests are merely the start, followed closely by automated acceptance tests. Having these tests run continuously throughout the day is basically the cost of entry into the DevOps culture. Without a strong continuous integration server running tests all day, every day, we just can’t be sure that what we are releasing is of a high enough quality to stay healthy in the real world. After that, the art of continuous deployment becomes an additional challenge. Orchestration tools are vital to make sure the right bits get bundled with the other right bits, and then get put where they belong. And then, since we are all part of “keeping the lights on,” we need monitoring tools to help us visualize whether our software really is behaving properly. So yes, there is a definite technical aspect to DevOps.

That’s a lot of moving parts! We need to keep track of where we are. This leads to one of the cool parts of a true DevOps implementation: all those cool monitors that the Ops guys get, with the fancy graphs and uptime charts? They come into the room with the Ops people, and we need to add to them. We are going to track story progress from idea to implementation, and then into the wild. My acceptance criteria are going to include things like “must have Nagios hooks” and “will use less than x% of CPU”. And now we have to live up to it. This means it is more important than ever to be able to visualize the entire flow. Our Big Visual Charts need to show us not just how the current iteration is going; they must show us the state of the build server, the state of the various builds, and where they are in any of the extended processes, such as UAT. And, in the unlikely event of a failure anywhere along the line, or in post-production, we can follow a clear chain of events back to find the problem quickly.

Conclusion

So now we see that, while DevOps is primarily a people problem, there are a lot of technical aspects that enable a strong DevOps culture. The key to success is the union of the people and technical aspects, which in a way makes DevOps a cyborg. To balance these two aspects, and to keep ourselves from burning countless hours and countless brain cells chasing down all of the moving parts, we need to focus on information. The more information we have at our fingertips, the more effective we will be. Each team will identify which information is the most meaningful to them, and how best to interpret it. You can bet that this will be in live charts rather than stale reports. Orchestrating the entire flow of a story’s life, from inception to realization to retirement, will be much easier if we can visualize each step along the way. If this means our team room might start looking like the command center in WarGames, what’s so bad about that?

About the Author

Steve Ropa
CSM, CSPO, Innovation Games Facilitator, SA
Agile Coach and Product Consultant, VersionOne

Steve has more than 25 years of experience in software development and 15 years of experience working with agile methods. Steve is passionate about bridging the gap between the business and technology and nurturing the change in the nature of development. As an agile coach and VersionOne product trainer, Steve has supported clients across multiple industry verticals including: telecommunications, network security, entertainment and education. A frequent presenter at agile events, he is also a member of Agile Alliance and Scrum Alliance.

Categories: Companies

When, Why and How To Estimate

NetObjectives - Thu, 07/23/2015 - 13:09
There are three disparate views on estimation. One is that estimation is harmful because management misuses the estimates; another is that estimation is a form of waste because it does not provide value directly; and a third is that estimation is critical in order to do planning. Let’s discuss each of these views and see what we really need to do regarding estimation. The origins of most of...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Neo4j: Loading JSON documents with Cypher

Mark Needham - Thu, 07/23/2015 - 08:15

One of the most commonly asked questions I get asked is how to load JSON documents into Neo4j and although Cypher doesn’t have a ‘LOAD JSON’ command we can still get JSON data into the graph.

Michael shows how to do this from various languages in this blog post and I recently wanted to load a JSON document that I generated from Chicago crime types.

This is a snippet of the JSON document:

{
    "categories": [
        {
            "name": "Index Crime", 
            "sub_categories": [
                {
                    "code": "01A", 
                    "description": "Homicide 1st & 2nd Degree"
                }
            ]
        }, 
        {
            "name": "Non-Index Crime", 
            "sub_categories": [
                {
                    "code": "01B", 
                    "description": "Involuntary Manslaughter"
                }
            ]
        }, 
        {
            "name": "Violent Crime", 
            "sub_categories": [
                {
                    "code": "01A", 
                    "description": "Homicide 1st & 2nd Degree"
                }
            ]
        }
    ]
}

We want to create the following graph structure from this document:


We can then connect the crimes to the appropriate sub category and write aggregation queries that drill down from the category.

To do this we’re going to have to pass the JSON document to Neo4j via its HTTP API rather than through the browser. Luckily there are drivers available for {insert your favourite language here} so we should still be good.

Python is my current go-to language, so I’m going to use py2neo to load the data in.

Let’s start by writing a simple query which passes our JSON document in and gets it straight back. Note that I’ve updated my Neo4j password to be ‘foobar’ – replace that with your equivalent if you’re following along:

import json
from py2neo import Graph, authenticate
 
# replace 'foobar' with your password
authenticate("localhost:7474", "neo4j", "foobar")
graph = Graph()
 
with open('categories.json') as data_file:
    json = json.load(data_file)
 
query = """
RETURN {json}
"""
 
# Send Cypher query.
print graph.cypher.execute(query, json = json)
$ python import_categories.py
   | document
---+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 1 | {u'categories': [{u'name': u'Index Crime', u'sub_categories': [{u'code': u'01A', u'description': u'Homicide 1st & 2nd Degree'}, {u'code': u'02', u'description': u'Criminal Sexual Assault'}, {u'code': u'03', u'description': u'Robbery'}, {u'code': u'04A', u'description': u'Aggravated Assault'}, {u'code': u'04B', u'description': u'Aggravated Battery'}, {u'code': u'05', u'description': u'Burglary'}, {u'code': u'06', u'description': u'Larceny'}, {u'code': u'07', u'description': u'Motor Vehicle Theft'}, {u'code': u'09', u'description': u'Arson'}]}, {u'name': u'Non-Index Crime', u'sub_categories': [{u'code': u'01B', u'description': u'Involuntary Manslaughter'}, {u'code': u'08A', u'description': u'Simple Assault'}, {u'code': u'08B', u'description': u'Simple Battery'}, {u'code': u'10', u'description': u'Forgery & Counterfeiting'}, {u'code': u'11', u'description': u'Fraud'}, {u'code': u'12', u'description': u'Embezzlement'}, {u'code': u'13', u'description': u'Stolen Property'}, {u'code': u'14', u'description': u'Vandalism'}, {u'code': u'15', u'description': u'Weapons Violation'}, {u'code': u'16', u'description': u'Prostitution'}, {u'code': u'17', u'description': u'Criminal Sexual Abuse'}, {u'code': u'18', u'description': u'Drug Abuse'}, {u'code': u'19', u'description': u'Gambling'}, {u'code': u'20', u'description': u'Offenses Against Family'}, {u'code': u'22', u'description': u'Liquor License'}, {u'code': u'24', u'description': u'Disorderly Conduct'}, {u'code': u'26', u'description': u'Misc Non-Index Offense'}]}, {u'name': u'Violent Crime', u'sub_categories': [{u'code': u'01A', u'description': u'Homicide 1st & 2nd Degree'}, {u'code': u'02', u'description': u'Criminal Sexual Assault'}, {u'code': u'03', u'description': u'Robbery'}, {u'code': u'04A', u'description': u'Aggravated Assault'}, {u'code': u'04B', u'description': u'Aggravated Battery'}]}]}

It’s a bit ugly but we can see that everything’s there! Our next step is to extract each category into its own row. We can do this by accessing the ‘categories’ key in our JSON document and then applying the UNWIND clause, which expands a collection into a sequence of rows:

query = """
WITH {json} AS document
UNWIND document.categories AS category
RETURN category.name
"""
$ python import_categories.py
   | category.name
---+-----------------
 1 | Index Crime
 2 | Non-Index Crime
 3 | Violent Crime
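
The snippets above leave out the plumbing. Here’s a minimal sketch of what import_categories.py could look like – assuming the py2neo 2.x driver and a categories.json file containing the document shown above; the real script may well differ:

import json

from py2neo import Graph

# sketch only: assumes py2neo 2.x and a local Neo4j server
# (Graph() defaults to http://localhost:7474/db/data/)
graph = Graph()

# load the crime categories document we grabbed earlier (assumed file name)
with open("categories.json") as f:
    doc = json.load(f)

query = """
WITH {json} AS document
UNWIND document.categories AS category
RETURN category.name
"""

# the document is passed in as the {json} query parameter
for row in graph.cypher.execute(query, json=doc):
    print row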

Now we can create a node for each of those categories. We’ll use the MERGE clause so that we can run this script multiple times without ending up with duplicate categories:

query = """
WITH {json} AS document
UNWIND document.categories AS category
MERGE (:CrimeCategory {name: category.name}) 
"""

Let’s quickly check those categories were correctly imported:

MATCH (category:CrimeCategory)
RETURN category

[Graph visualisation: the three CrimeCategory nodes]

Looking good so far – now for the sub categories. We’ll nest a second UNWIND under the first to expand each category’s sub_categories collection as well:

query = """
WITH {json} AS document
UNWIND document.categories AS category
UNWIND category.sub_categories AS subCategory
RETURN category.name, subCategory.code, subCategory.description
"""
$ python import_categories.py
    | category.name   | subCategory.code | subCategory.description
----+-----------------+------------------+---------------------------
  1 | Index Crime     | 01A              | Homicide 1st & 2nd Degree
  2 | Index Crime     | 02               | Criminal Sexual Assault
  3 | Index Crime     | 03               | Robbery
  4 | Index Crime     | 04A              | Aggravated Assault
  5 | Index Crime     | 04B              | Aggravated Battery
  6 | Index Crime     | 05               | Burglary
  7 | Index Crime     | 06               | Larceny
  8 | Index Crime     | 07               | Motor Vehicle Theft
  9 | Index Crime     | 09               | Arson
 10 | Non-Index Crime | 01B              | Involuntary Manslaughter
 11 | Non-Index Crime | 08A              | Simple Assault
 12 | Non-Index Crime | 08B              | Simple Battery
 13 | Non-Index Crime | 10               | Forgery & Counterfeiting
 14 | Non-Index Crime | 11               | Fraud
 15 | Non-Index Crime | 12               | Embezzlement
 16 | Non-Index Crime | 13               | Stolen Property
 17 | Non-Index Crime | 14               | Vandalism
 18 | Non-Index Crime | 15               | Weapons Violation
 19 | Non-Index Crime | 16               | Prostitution
 20 | Non-Index Crime | 17               | Criminal Sexual Abuse
 21 | Non-Index Crime | 18               | Drug Abuse
 22 | Non-Index Crime | 19               | Gambling
 23 | Non-Index Crime | 20               | Offenses Against Family
 24 | Non-Index Crime | 22               | Liquor License
 25 | Non-Index Crime | 24               | Disorderly Conduct
 26 | Non-Index Crime | 26               | Misc Non-Index Offense
 27 | Violent Crime   | 01A              | Homicide 1st & 2nd Degree
 28 | Violent Crime   | 02               | Criminal Sexual Assault
 29 | Violent Crime   | 03               | Robbery
 30 | Violent Crime   | 04A              | Aggravated Assault
 31 | Violent Crime   | 04B              | Aggravated Battery

Let’s give sub categories the MERGE treatment too:

query = """
WITH {json} AS document
UNWIND document.categories AS category
UNWIND category.sub_categories AS subCategory
MERGE (c:CrimeCategory {name: category.name})
MERGE (sc:SubCategory {code: subCategory.code})
ON CREATE SET sc.description = subCategory.description
MERGE (c)-[:CHILD]->(sc)
"""

And finally let’s write a query to check what we’ve imported:

MATCH (category:CrimeCategory)-[:CHILD]->(subCategory)
RETURN *
[Graph visualisation: each crime category linked to its sub categories]

Something I hadn’t realised before running this query is that some sub categories sit under multiple categories – quite an interesting insight.
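
If you want to see those shared sub categories explicitly, a query along these lines should list them – a sketch that collects the parent category names per sub category and keeps the ones with more than one:

MATCH (category:CrimeCategory)-[:CHILD]->(subCategory:SubCategory)
WITH subCategory, collect(category.name) AS categories
WHERE length(categories) > 1
RETURN subCategory.code, subCategory.description, categories

The final Python script is available on github – any questions let me know.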

Categories: Blogs

The Best Productivity Book for Free

J.D. Meier's Blog - Wed, 07/22/2015 - 18:04

image"At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world." -- Satya Nadella

We take productivity seriously at Microsoft. Ask any Softie. I never have a lack of things to do, or too much time in my day, and I can't ever make "too much" impact.

To be super productive, I've had to learn hard-core prioritization techniques, extreme energy management, stakeholder management, time management, and a wealth of productivity hacks to produce better, faster results.

We don’t learn these skills in school.  But if we’re lucky, we learn from the right mentors and the people all around us how to bring out our best when we need it the most.

Download the 30 Days of Getting Results Free eBook

You can save years of pain for free:

30 Days of Getting Results Free eBook

There’s always a gap between books you read and what you do in the real world. I wanted to bridge this gap. I wanted 30 Days of Getting Results to be raw and real to help you learn what it really takes to master productivity and time management so you can survive and thrive with the best in the world.

It’s not pretty.  It’s super effective.

30 Days of Getting Results is a 30 Day Personal Productivity Improvement Sprint

I wrote 30 Days of Getting Results using a 30 Day Sprint. Each day for that 30 Day Sprint, I wrote down the best information I learned from the school of hard knocks about productivity, time management, work-life balance, and more.

For each day, I share a lesson, a story, and an exercise.

I wanted to make it easy to practice productivity habits.

Agile Results is a Fire Starter for Personal Productivity

The thing that’s really different about Agile Results as a time management system is that it’s focused on meaningful results.  Time is treated as a first-class citizen so that you hit your meaningful windows of opportunity, and get fresh starts each day, each week, each month, each year.  As a metaphor, you get to be the author of your life and write your story forward.

For years, I’ve received emails from people around the world about how 30 Days of Getting Results was a breath of fresh air for them.

It helped them find their focus, get more productive, enjoy what they do, renew their energy, and spend more time in their strengths and their passions, while pursuing their purpose.

It’s helped doctors, teachers, students, lawyers, developers, grandmothers, and more.

Learn a New Language, Change Careers, or Start a Business

You can use Agile Results to learn better, faster, and deeper because it helps you think better, feel better, and take better action.

You can use Agile Results to help you learn a new language, build new skills, learn an instrument, or whatever your heart desires.

I used the system to accidentally write a book in a month.

I didn’t set out to write a book. I set out to share the world’s best insight and action for productivity and time management. I wrote for 20 minutes each day, during that month, to share the best lessons and the best insights I could with one purpose:

Help everyone thrive in work and life.

Over the coming months, I had more and more people ask for a book version. As much as they liked the easy-to-flip-through Web pages, they wanted to consume it as an eBook. So I turned 30 Days of Getting Results into a free eBook and made it available.

Here's the funny part:

I forgot I had done that.

The Accidental Free Productivity Book that Might Just Change Your Life

One day, I was having a conversation with one of my readers, and he said that I should sell 30 Days of Getting Results as a $30 workbook. He liked it much more than the book, Getting Results the Agile Way. He found it to be more actionable and easier to get started with, and he liked that I used the system as a way to teach the system.

He said I should make the effort to put it together as a PDF and sell it as a workbook. People would want to pay for it, he said, because it’s high-value, real-world training, and it was better than any live training he had ever taken (and he had taken a lot).

I got excited by the idea, and it made perfect sense. After all, wouldn’t people want to learn something that could impact every single day of their lives, and help them achieve more in work and life and help them adapt and compete more effectively in our ever-changing world?

I went to put it together, and I had already done it.

Set Your Productivity on Fire

When you’re super productive, it’s easy to forget some of the things you create because they so naturally flow from spending the right time, on the right things, with the right energy. You’ll naturally leave a trail of results from experimenting and learning.

Whether you want to be super productive, or do less, but accomplish more, check out the ultimate free productivity guide:

30 Days of Getting Results Free eBook

Share it with friends, family, colleagues, and whoever else you want to have an unfair advantage in our hyper-competitive world.

Lifting others up lifts you up in the process.

If you have a personal story of how 30 Days of Getting Results has helped you in some way, feel free to share it with me.  It’s always fun to hear how people are using Agile Results to take on new challenges, re-invent their productivity, and operate at a higher level.

Or simply get started again … like a fresh start, for the first time, full of new zest to be your best.

Categories: Blogs

Planbox and BrainBank Merge

Scrum Expert - Wed, 07/22/2015 - 17:35
Planbox, a cloud-based agile project management solution vendor, has merged with BrainBank Software, the pioneering platform for cloud-based collaborative innovation management. The combined company will be called Planbox. BrainBank and its Idealink product will be marketed as Planbox Innovate and the existing solution as Planbox Agile. Ludwig Melik, currently CEO of BrainBank, will be the CEO of the combined company, which will continue the development and support of both Planbox and BrainBank products – all existing agreements and support arrangements remain unchanged. “The combined offering will allow us to benefit ...
Categories: Communities
