Open Source

SonarQube 4.2 in Screenshots

Sonar - Tue, 04/15/2014 - 08:07

The team is proud to announce the release of SonarQube 4.2, which includes many exciting new features:

  • Multi-language analysis
  • Tags of rules
  • New visual measure filter representations (bubble chart, pie chart and histogram)
  • Improved Issues page

Multi-language Analysis

The most-voted JIRA ticket ever is now fixed! Running an analysis on a multi-language project is now rather simple: just point to the parent directory containing all the source code, and that’s it. Then, from that very same place, you can browse issues on all your files, whatever their language.

Tags of Rules

Thanks to the tagging mechanism, you can now classify coding rules, which should ease searching.

New Visual Measure Filter representations

Bubble chart, pie chart and histogram are now available to display your filters in nice and meaningful ways.

Improved Issues Page

The Issues page was redesigned to make it easier to search for and browse issues.

That’s all, Folks!

Time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

At Long Last, SonarQube Is a True Polyglot

Sonar - Wed, 04/09/2014 - 19:52

Good taste prevents me from embedding a trumpet fanfare into this post, but it does seem warranted. After all, with the release of SonarQube version 4.2 last week, SonarSource has finally implemented the all-time highest voted ticket in the SonarQube backlog: multi-language analysis.

Now, at last, you’ll be able to see the Java or C# in your web project side by side with its JavaScript and HTML. Finally, without the Views plugin, you can see aggregate measures across the multiple technologies in a single project for a unified quality assessment. Cue the angel choir.

Okay, now for a tiny dose of reality. The ability to participate in a multi-language analysis must be implemented in each language plugin separately, so it may be a while before your project is fully inspected with just one analysis. But very soon you’ll be able to remove the sonar.language property, and SonarQube will automatically analyze every file with an extension it recognizes.

Actually, if even one of the languages in your project already supports multi-language analysis, you can go ahead and drop the sonar.language property, and SonarQube will do its multi-language thing as much as it can. Multi-language analysis has already been released for Android, Flex, Groovy, Java, JavaScript, PHP, PL/SQL, and RPG. It’s scheduled for ABAP, C/C++, C#, COBOL, PL/I, Python, VB6, and VB.NET.

Of course, if you don’t want to turn on multi-language analysis yet, then all you have to do is retain the sonar.language property, and you’ll get the legacy behavior: analysis only of the single language you specified.
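In practice, turning the feature on is mostly a matter of what you leave out of your configuration. Here is a minimal sketch of an analysis properties file for a mixed-technology project (the project key, name, and source path are hypothetical, not taken from the post):

```properties
# sonar-project.properties -- hypothetical multi-language project
sonar.projectKey=org.example:webapp
sonar.projectName=Web App
sonar.projectVersion=1.0

# Point at the parent directory holding the Java, JavaScript, HTML, etc.
sonar.sources=src

# Note what is absent: no sonar.language property. Omitting it lets
# SonarQube analyze every file whose extension it recognizes.
```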

Categories: Open Source

Ducks Make It Look Easy Too

Sonar - Thu, 03/20/2014 - 09:57

Since I joined SonarSource full time at the beginning of this month, I’ve been thinking a lot about ducks and belly dancers.

That seems like an odd combination, but they have more in common than you might think. Most people have heard the old saw about being like a duck: stay calm on the surface and paddle like hell underneath. It’s pretty much the same for belly dancers; those big, poofy pants (or skirts) they wear aren’t simply a fashion statement; they’re intended to hide how hard the dancer’s legs are working in the process of making the rest of the body move.

It’s pretty much the same story for SonarSource. As a member of the SonarQube community, and even as a part-timer at SonarSource, I had no idea how much work goes into each new version of SonarQube and the language plugins. Having spent the first two weeks of this month at the La Roche office, I’m beginning to understand.

As a community member, I was concerned mainly with SonarQube itself and the language plugins I used every day. Wearing the blinders of that narrowed focus, it always seemed like a very long time between releases. With the blinders off, and seeing more clearly everything that goes on, I’m astounded by how often new versions go out the door. From November through January, there were 18 major releases, and that doesn’t count the sort of internal refactoring that will make it easier to continue pumping out high-quality code in the future.

What I saw in Jira and on the mailing lists is the above-the-water portion of the duck (or the above-the-legs portion of the dancer, depending on which metaphor you prefer). What’s going on underneath is an incredible amount of testing (manual and automated), test maintenance, management of milestones and RCs, and, yes, monitoring and improvement of quality.

As a community member, I had noticed how few real issues are turned up for each new release candidate. I had also noticed that sometimes RC1 of a new version isn’t the first release candidate to be announced to the community for testing. The exhaustive tests that SonarSource itself performs mean that most issues are caught internally before the community ever sees a new version – often in the milestone phase, but sometimes in the release candidate phase.

That’s because the focus at SonarSource is not just making quality monitoring software, but making quality software. Making it look easy is just a byproduct.

Categories: Open Source

Version R6#13

IceScrum - Tue, 03/11/2014 - 17:57

Hello everybody,

Here comes a new version of iceScrum and iceScrum Pro!

The R6#13 version brings some important bug fixes. We also developed a few improvements that should please you while waiting for the big changes that will come in iceScrum R7!

At first, we expected the previous version (R6#12) to be the last of the R6 series. However, we are taking all the time needed to experiment with the ideas gathered over the last months and to ensure that the new interface will make using iceScrum much easier and more pleasant. In the meantime, we don’t want to leave you with annoying bugs, so we decided to release another R6 version.

R6#13 Improvements
  • ICESCRUM-594 – Bugzilla integration [many thanks to TECH'advantage, the iceScrum Pro customer that sponsored this story] (iceScrum Pro, Documentation)
  • We promised additional bug trackers; after Mantis BT, here comes the Bugzilla integration! You can now import stories automatically from Bugzilla issues and define rules to update the issues according to the changes made on the corresponding stories.

  • ICESCRUM-622 – Define custom tags on bug tracker import (iceScrum Pro)
  • ICESCRUM-649 – Custom story estimates
  • Until now, story estimates had to be chosen from either the Fibonacci or the integer sequence; there is now a third option called « Custom values ». We still strongly recommend the use of the « story points » empirical unit for story estimates. However, it is now entirely up to the teams to choose the values that are meaningful to them.

    When custom values are enabled, you can estimate stories by typing any value between 0 and 999.99, with a precision of two decimal places.

  • ICESCRUM-652 – Update availabilities for done sprints (iceScrum Pro)
Bug fixes
  • ICESCRUM-651 – Availability dates are shifted by one backwards in the table header (iceScrum Pro)

    Warning: if you use the iceScrum Pro availabilities and your project has a negative offset from UTC, which is the case for most projects based in the Americas, then you should read the following:

    If your project satisfies these two criteria, then the dates displayed in the availabilities table header are likely to be shifted by one day backwards. Despite being only a display bug, it has annoying consequences.

    Here is an example: given a sprint from March 11th to March 17th, the table has 7 columns. The first column corresponds to the 11th and the last column corresponds to the 17th. However, if your project has a negative-offset timezone, then the first day is mistakenly labelled 10, and so on until the last column, labelled 16.

    If your team has entered the availabilities according to these labels (this is likely to be the case) then all the availabilities are also shifted by one. When upgrading to R6#13, the column formerly labelled 10 will be labelled 11, and so on for every day. Consequently, we strongly recommend that your teams check the availabilities entered for current and upcoming sprints and update them accordingly.

    We considered an automatic way to fix the values, which would have required moving data around between sprints. However, such an automatic fix would have no way to figure out how your teams may have worked around the issue. Because of that, an automatic data fix may cause additional trouble on top of the identified bug, leading to an inextricable mess. Thus, we abandoned this solution.

    We relaxed the permissions so the Scrum Master and the Product Owner can now update availabilities for done sprints and correct the availabilities wherever it is needed (see the improvements section above).

    We are sorry for the inconvenience. If you have any questions or want more information before upgrading, feel free to contact our support team; we will be pleased to help you.

  • ICESCRUM-645 – Product Owners don’t follow automatically new stories despite the setting enabled
  • ICESCRUM-642 – « Browse project » not displayed for admin if no project is public
  • ICESCRUM-646 – An estimated story returned to sandbox keeps its effort
  • ICESCRUM-894 – Drop-downs to select view type for embedded views are too small
  • ICESCRUM-643 – Product Owners aren’t available in task board user switch (iceScrum Pro)
  • ICESCRUM-634 – User name with apostrophe breaks availability table (iceScrum Pro)
  • ICESCRUM-644 – Availabilities aren’t created for POs when enabling availability for a project (iceScrum Pro)
  • ICESCRUM-886 – Weekend availabilities generated when adding days to a sprint are not initialized to 0 (iceScrum Pro)
  • ICESCRUM-648 – Changes in team composition are not reflected in availabilities (iceScrum Cloud)
Download & notice
Categories: Open Source

Measures, at your Service!

Sonar - Thu, 02/27/2014 - 11:11

If there’s a set of data you regularly look up in SonarQube, the Measures Service – and saved filters – are going to be your new favorite SonarQube features.

A user at my day job recently showed me a spreadsheet he’s using to track the metrics of the “worst offender” files in his COBOL project. I was afraid I already knew the answer, but I asked how he was getting the data for each file. It was one time I wasn’t happy to be right – he was doing it the hard way, manually recursing each branch of his project’s components tree to find the numbers.

That’s when I pointed him to the Measures service. It lets you search for any type of resource based on a host of criteria. He was looking for files in his project that exceeded a certain threshold. This doctored screenshot shows the kind of search I showed him how to run:

First he specified what he was looking for: files, and then the criteria by which to choose them: Components of the project, SonarQube in this case, with Coverage less than 90%. Just having the list of relevant files pulled neatly together thrilled him; I could tell by the way the corner of his mouth quirked up.

Then it was a simple matter to edit the column set to show what he wanted to see:

His mouth quirked again. He was really happy. Except…

We both started to speak, but he beat me to the gate, “Can you save…?” I knew he didn’t want to have to reconfigure this every time – who would?

I had him close column editing mode (a crucial step), and the “Save As” link reappeared:

He gave it a name, and knew that from then on, it would always be waiting for him in the saved filters menu:

I got the mouth quirk again.

Then, I showed him how to use a “Measure Filter as List” widget on his own private dashboard to display the saved filter automatically, and pointed out that he could make that dashboard the first page he saw in SonarQube.

He actually smiled.

Categories: Open Source

Three options for pre-commit analysis

Sonar - Thu, 02/20/2014 - 11:04

As a quality-first focus becomes increasingly important in modern software development, more and more developers are asking how to find new issues before they check their code in.

For some of you, it’s a point of pride. For others, it’s a question of keeping management off your back, and for still others it’s simply a matter of not embarrassing yourself publicly. Fortunately, the SonarQube developers (being developers themselves) understand the problem and have come up with three different ways of dealing with it: the Eclipse plugin, the IntelliJ plugin, and the Issues Report plugin.

All three allow you to perform a pre-commit check on your code, and the two IDE plugins use incremental mode, which shortens analysis time by looking only at the files you’ve edited, rather than re-analyzing every file in the project. This recent improvement takes running a pre-commit check on a large project from a productivity drag to just another simple step in the process. You can use incremental mode with the Issues Report plugin too; it’s just not the default.

Both IDE plugins support Java, and the Eclipse plugin supports C++ and Python as well. For any other language, regardless of your IDE, you’ll want to use the Issues Report plugin, which isn’t an IDE plugin at all, but one you install in SonarQube itself.


Eclipse

If you’ve heard of pre-commit analysis before, it was probably in the context of Eclipse, because the Eclipse plugin has been around the longest. Once you have it installed and configured, you’re ready to start working with it.

The first thing you may notice after linking your local project with its SonarQube analog is that extra decorators show up in your code.

Each decorator marks a line with an existing issue. Mouse over a decorator to get a tooltip listing the issues. There’s also a SonarQube Issues view, which gives you a listing of all the issues in the project, but can also be narrowed to show only new issues. Double click any issue to open the relevant file and jump to the appropriate (or rather, “inappropriate”) line of code.

When you’re ready to commit new code, checking it in SonarQube is easy: right-click the project in the Project or Package Explorer and choose SonarQube > Analyze. By default, any new issues you’ve introduced will be marked as errors in the Problems tab, so you don’t have to go hunting for them; they jump out at you.

By the way, that behavior’s configurable, so if you want new issues demoted from errors to warnings (like some of my day job colleagues) it’s easy to do.


IntelliJ

The IntelliJ plugin is the newest addition to SonarQube’s pre-commit analysis offerings. As with the Eclipse plugin, you’ll need to install and configure it before you can really begin using it.

After you link your local project in IntelliJ with its SonarQube analog, lines with existing issues will be highlighted. You can mouse over the line or the corresponding right-margin marker to see the issues.

When you’re ready to check your code in, scanning it for new issues has a few more steps than in Eclipse, but still isn’t hard. Right-click on the project, choose Analyze > Run Inspection by Name…, search for SonarQube Issues in the dialog, and run the analysis on the whole project (in the next dialog).

An Inspection Results section is added to the window, and new issues are marked as such.

Issues Report

The third way to perform a pre-commit analysis is to use the Issues Report plugin. It installs directly into SonarQube. Once it’s in place, you’re still not quite done; you’ll need to install SonarQube Runner locally. Don’t worry about configuring the connection to your SonarQube database, as the installation instructions call for. For the analysis you’ll be doing, you only need to specify

Then you need to set up a file in your project root if you don’t already have one. Make sure it includes the property sonar.analysis.mode=incremental. That’s what narrows your pre-commit check to only the files you’ve changed and prevents SonarQube Runner from trying to commit the results to the database.
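As a point of reference, here is a minimal sketch of such a file (this assumes the usual SonarQube Runner convention of a sonar-project.properties file at the project root; the key, name, and source path are hypothetical):

```properties
# sonar-project.properties -- minimal sketch for a pre-commit run
sonar.projectKey=org.example:myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src

# Narrows the check to changed files and keeps the runner from
# trying to commit results to the database
sonar.analysis.mode=incremental
```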

Before you fire off your first analysis, there are a few more options to consider. The Issues Report plugin has a couple of configurations that can be turned on at either the global level or the individual analysis level: sonar.issuesReport.console.enable and sonar.issuesReport.html.enable. By default both are set to false. As you might guess, sonar.issuesReport.console.enable enables summary reporting in the analysis console. Here’s what it looks like:

You can use the console report to see whether you need to look at the HTML report. (That’s assuming you set sonar.issuesReport.html.enable=true. Otherwise all you get is .sonar/sonar-report.json.) Two versions of the HTML report are automatically created, issues-report.html and issues-report-light.html. By default, they land in .sonar/issues-report, but that’s configurable. The difference between them is that the light version only shows new issues. The “heavy” version contains all issues, but defaults to showing new issues only:
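Both reporting options can also be passed on the command line for a single run rather than configured globally. A hedged sketch, assuming the era’s sonar-runner launcher and its standard -D property syntax:

```shell
# One-off pre-commit run with both reports enabled
sonar-runner -Dsonar.analysis.mode=incremental \
             -Dsonar.issuesReport.console.enable=true \
             -Dsonar.issuesReport.html.enable=true
```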

So that’s it. Now, no matter what your language, no matter what your IDE, you too can run a pre-commit check. Happy coding!

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–”I think I can move this field over there, but I’m not sure”–and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


if (...choice 1 valid...) {
  ...
}
if (...choice 2 valid...) {
  ...
}


$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each) {
  if ($each->isValid()) {
    ...
  }
}

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking; I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.


Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source