Open Source

Call for Papers is Open for Geneva SonarQube Conference

Sonar - Fri, 07/31/2015 - 16:05

A few weeks ago, I announced a free SonarQube User Conference in Geneva on the 23rd and 24th of September. More than 100 people have already registered.

We want this 2-day conference to be as valuable as possible for participants, and thus want to have a variety of speakers. That’s why we are opening a call for papers, and invite anyone who wants to talk about SonarQube to submit a presentation. The CFP will close on Friday the 14th of August.

To submit a talk, simply send an email to with the title, abstract and duration of the presentation. We are eagerly awaiting your proposal!

Categories: Open Source

Version R6#14.2

IceScrum - Wed, 07/29/2015 - 20:08
Will be available soon.
Categories: Open Source

New support to start with iceScrum

IceScrum - Mon, 07/27/2015 - 11:57
Hello everybody! In this blog post we present two new features we have made available to help you start using iceScrum the right way and understand it better. Starting from scratch with iceScrum can be difficult if you are not familiar with the Scrum methodology. That is why we added…
Categories: Open Source

SonarLint brings SonarQube rules to Visual Studio

Sonar - Fri, 07/24/2015 - 14:33

We are happy to announce the release of SonarLint for Visual Studio version 1.0. SonarLint is a Visual Studio 2015 extension that provides on-the-fly feedback to developers on any new bug or quality issue injected into C# code. The extension is based on and benefits from the .NET Compiler Platform (“Roslyn”) and its code analysis API to provide a fully-integrated user experience in Visual Studio 2015.


There are lots of great rules in the tool. We won’t list all 76 of the rules we’ve implemented so far, but here are a few that show what you can expect from the product:

  • Defensive programming is a good practice, but in some cases you shouldn’t simply check whether an argument is null. Value types (such as structs) can never be null, so comparing an unconstrained generic type parameter to null might not make sense (S2955): when the type argument is a struct, the comparison will always return false.
  • Did you know that a static field of a generic class is not shared among instances of different closed constructed types (S2743)? Because such a field is static you might expect a single instance (such as the DefaultInnerComparer field in our example), but there is actually one instance for each type parameter used to instantiate the class.
  • We’ve been using SonarLint internally for a while now, and are running it against a few open source libraries too. We’ve already found bugs in both Roslyn and NuGet with the “Identical expressions used on both sides of a binary operator” rule (S1764).
  • Some cases of null pointer dereferencing can be detected as well (S1697).

This is just a small selection of the implemented rules. To find out more, go and check out the product.

How to get it?

SonarLint is packaged in two ways:

  • Visual Studio Extension
  • NuGet package

To install the Visual Studio extension, download the VSIX file from the Visual Studio Gallery. Optionally, you can download the complete source code and build the extension yourself. Oh, and you might have already realized: this product is open source (under the LGPLv3 license), so you can contribute if you’d like.

By the way, internally the SonarQube C# plugin also uses the same code analyzers, so if you are already using the SonarQube platform for C# projects, from now on you can also get the issues directly in the IDE.

What’s next?

In the following months we’ll increase the number of supported rules, and as with all our SonarQube plugins, we are moving towards bug-detection rules. Alongside this effort, we’re continuously adding code fixes to the Visual Studio extension, so that issues identified by the tool can be automatically fixed inside Visual Studio. In the longer run, we aim to bring the same analyzers we’ve implemented for C# to VB.NET as well. Updates come frequently, so keep an eye on the Visual Studio Extension Update window.

SonarLint for Visual Studio is one piece of a puzzle we’ve been working on since the beginning of the year: providing the .NET community with a tight, easy, and native integration of the SonarQube ecosystem into the Microsoft ALM suite. This 1.0 release of SonarLint is a good opportunity to once again warmly thank Jean-Marc Prieur, Duncan Pocklington and Bogdan Gavril from Microsoft. They have been contributing to this effort daily to make SonarQube a central piece of any .NET development environment.

For more information on the product go to or follow us on Twitter.

Categories: Open Source

Geneva SonarQube Conference – Sept. 23 & 24.

Sonar - Fri, 07/17/2015 - 10:13

We are very happy to announce the first SonarQube European User Conference! This two-day free event will take place in Geneva on September 23rd and 24th, right in the heart of the city at Côté Cour Côté Jardin – Rue de la Chapelle 8, 1207 Geneva.

Following the success of the conferences in the US and Paris earlier this year, the Geneva conference targets a broader and more international audience. It will provide a forum for users to meet, network, and share experiences and best practices around the SonarQube platform. Attendees will also be able to learn directly from the SonarSource Team about the platform roadmap and vision, as well as what was accomplished with 4.x and what is being accomplished with the 5.x series.

The program will highlight some of the exciting topics we’ve talked about these past weeks, like the Water Leak paradigm, the new SonarQube GitHub Plugin, the collaboration with Microsoft to integrate with the .NET ALM Suite, and the coverage of the security and vulnerability markets. We plan to make the conference as exciting as these recent topics and even more so!

There will also be more informal sessions, such as presentations and feedback from customers and partners on large scale implementations of SonarQube, and hands-on workshops on technical aspects such as platform monitoring, custom rules development…

We hope you will be able to make it, and are looking forward to meeting you there! To register, please click here.

Categories: Open Source

SonarQube Swift Plugin Offers Mature Functionality for Young Language

Sonar - Fri, 07/10/2015 - 10:13

The Swift programming language is only a year old, but the SonarQube plugin for code written in this “green” language has been out for six months and already offers a mature set of features.

The SonarQube Swift plugin is really easy to use. All you need to do is specify the name of the project and the folder containing the source files. As analysis output, you get a wealth of metrics (lines of code, complexity, etc.), code duplication detection, and of course the most important and interesting thing: the issues raised by your code.
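As a sketch of how little configuration is needed, a minimal sonar-project.properties might look like the following. The key, name, and source folder are placeholder values, and the property names are the standard SonarQube analysis parameters of the time:

```properties
# Hypothetical minimal configuration for a Swift project analysis
sonar.projectKey=org.example:my-swift-app
sonar.projectName=My Swift App
sonar.projectVersion=1.0
sonar.sources=Sources
sonar.language=swift
```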

The Swift language introduces a lot of new features, some of which developers have been awaiting for a long time (e.g. easy-to-use optional values), while others are a bit more controversial (operator overloading, custom operators). Love the new features or hate them, no developer is indifferent.

Because the Swift language is so new, our team has made an effort to create a pile of useful rules to help developers take proper advantage of its unique features. For instance, for those who are used to ending each switch case with a break, we have this rule: “break” should be the only statement in a “case”.
For those who are addicted to custom operators, we have rules limiting the risks of using this feature.

And of course, the Swift plugin provides standard types of rules: naming-convention rules for all possible categories, a bunch of rules detecting overly complex code, and other super useful bug-detection rules.

The Swift language is developing rapidly, with new versions regularly bringing new features and syntax. At the recent WWDC 2015, Swift 2.0 was announced; it introduces error-handling mechanisms, defer statements, guard statements and a lot of other stuff. All of this is already supported by version 1.4 of the Swift plugin!

So if you are interested in developing high-quality Swift code quickly, take a look at Nemo to see what the SonarQube Swift plugin offers, and then try it out for yourself.

Categories: Open Source

GitHub pull request analysis helps fix the leak

Sonar - Wed, 07/08/2015 - 12:15

If you follow SonarSource, you are probably aware of a simple yet powerful paradigm that we use internally: the water leak concept. This is how we’ve been working on a daily basis at SonarSource for a couple of years now, using various features of SonarQube such as “New Issues” notifications, the “Since previous version” differential period, and quality gates. These features allow us to make sure that no technical debt is introduced in new code. More recently, we have developed a brand new plugin to go even further in this direction: the SonarQube GitHub Plugin.

Analysing GitHub pull requests to detect new issues

At SonarSource, we use GitHub to manage our codebase. Every bug fix, improvement, and new feature is developed in a Git branch and managed through a pull request on GitHub. Each pull request must be reviewed by someone else on the team before it can be merged into the master branch. Previously, it was only after the merge and the next analysis (every master branch is analysed on our internal SonarQube instance several times a day) that SonarQube feedback was available, possibly leading to another pull request-review cycle. “Wouldn’t it be great” we thought, “if the pull request could be reviewed not only by a teammate, but also by SonarQube itself before being merged?” That way, developers would have the opportunity to fix potential issues before they could be injected into the master branch (and reported on the SonarQube server).

This is what we achieved with the new SonarQube GitHub Plugin. Basically, every time a pull request is submitted by a member of the team, the continuous integration system launches a SonarQube preview analysis with the parameters that activate the GitHub plugin, so that:

  1. When the SonarQube analysis starts, the GitHub plugin updates the status of the pull request to mention that there’s a pending analysis
  2. Then SonarQube executes all the required language plugins
  3. And at the end, the GitHub plugin:
    • adds an inline comment for each new issue,
    • adds a global comment with a summary of the analysis,
    • and updates the status of the pull request, setting it to “failed” if at least one new critical or blocker issue was found.
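Concretely, triggering such an analysis from the CI system boils down to passing a few analysis properties. Here is a minimal sketch for a Maven build; the repository name, token, and pull request number are placeholders supplied by the CI environment, and the sonar.github.* property names are the ones documented for the plugin:

```shell
# Sketch: run a preview (non-persisted) analysis and report results
# back to the GitHub pull request. GITHUB_TOKEN and PR_NUMBER would
# be provided by the CI system.
mvn sonar:sonar \
  -Dsonar.analysis.mode=preview \
  -Dsonar.github.repository=myorg/myrepo \
  -Dsonar.github.pullRequest=$PR_NUMBER \
  -Dsonar.github.oauth=$GITHUB_TOKEN
```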

Here’s what such a pull request looks like:

Pull Request Analysis with SonarQube GitHub Plugin

Thanks to the GitHub plugin, developers get quick feedback as a natural, integrated part of their normal workflow. When a GitHub analysis shows new issues, developers can choose to fix the issues and push a new commit – thus launching a new SonarQube analysis. But in the end, it is up to the developer whether or not to merge the branch into the master, whatever the status of the pull request after the analysis. The SonarQube GitHub plugin provides feedback, but the power remains where it belongs – in the hands of the developers.

What’s next?

Now that the integration with GitHub has proven really useful, we feel that building a similar plugin for Atlassian Stash would be valuable, and writing it should be quite straightforward.

Also, analysing pull requests on GitHub is a great step forward, because it gives early feedback on incoming technical debt. But obviously, developers would like to have this feedback even earlier: in their IDEs. This is why in the upcoming months, we will actively work on the Eclipse and IntelliJ plugins to make sure they allow developers to efficiently fix issues before commit and adopt the “water leak” approach wholesale. To achieve this target, we’ll update the plugins to run SonarQube analyses in the blink of an eye for instantaneous feedback on the code you are developing.


Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–“I think I can move this field over there, but I’m not sure”–and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


if (...choice 1 valid...) {
  ...
}
if (...choice 2 valid...) {
  ...
}

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    ...

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}

A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I don’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.


Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
