
Open Source

Eating The Dog Food… In Public

Sonar - Thu, 02/16/2017 - 10:55

At SonarSource, we’ve always eaten our own dog food, but that hasn’t always been visible outside the company. I talked about how dogfooding works at SonarSource a couple of years ago. Today, the process is much the same, but the visibility is quite different.

When I wrote about this in 2015, we used a private SonarQube server named “Dory” for dogfooding. Every project in the company was analyzed there, and it was Dory’s standards we were held to. Today, that’s still the case, but the server’s no longer private, and it’s no longer named “Dory”.

Today, we use Next (née Dory) for dogfooding, and it’s open to the public. That means you can follow along as, for instance, we run new rule implementations against our own code bases before releasing them to you. We also have a set of example projects we run new rules against before they even make it to Next, but seeing a potentially questionable issue raised against someone else’s code hits a different emotional note than seeing it raised against your own.

Of course, that’s the point of dogfooding: that we feel your pain. As an example, take the problem of new issues raised in the leak period on old code. Because we deploy new code analyzer snapshots on Next as often as daily, we’re always introducing new rules or improved implementations that find issues they didn’t find before. And that means we’re always raising new issues on old code. Since we require a passing quality gate to release, this causes us the same problem you face when you do a “simple” code analyzer upgrade and suddenly see new issues on old code. Because we do feel that pain, SonarQube 6.3 includes changes to the algorithm that sets the issue creation date, so that issues raised by new rules on old code won’t land in the leak period.

Obviously, we’re not just testing rules on Next; we’re also testing changes to SonarQube itself. About once a day, a new version of SonarQube is deployed there. In fact, it happens so often that we added a notification block to our wallboard to keep up with it:

By running the latest milestone on our internal instance, each UI change is put through its paces pretty thoroughly. That’s because we all use Next, and no one in this crowd is meek or bashful.

Always running the latest milestone also means that if you decide to look over our shoulders at Next, you’ll get a sneak peek at where the next version is headed. Just don’t be surprised if details change from day to day. Because around here, change is the only constant.

Categories: Open Source

The Tweets You Missed in January

Sonar - Mon, 02/06/2017 - 11:09

Here are the tweets you likely missed last month!

SonarQube 6.2 released: read the news and see it in screenshots!

— SonarQube (@SonarQube) January 10, 2017

Governance 1.2 dissociates overall health of an application portfolio and risk identified on its projects

— SonarQube (@SonarQube) January 12, 2017

SonarPython 1.7 brings support for Cognitive Complexity

— SonarQube (@SonarQube) January 27, 2017

SonarC++ 4.4 Released: SonarLint for Eclipse CDT support, improved dataflow engine and 4 new rules

— SonarQube (@SonarQube) January 12, 2017

SonarJS 2.19 Released: 14 new rules, including 2 rules detecting invalid calls to built-in methods #javascript

— SonarQube (@SonarQube) January 12, 2017

Detecting Type Issues in #javascript with SonarJS, see

— SonarQube (@SonarQube) January 11, 2017

SonarLint for IntelliJ 2.7 shows issues context and highlights corresponding locations

— SonarLint (@SonarLint) January 31, 2017

Categories: Open Source

Detecting Type Issues in JavaScript

Sonar - Wed, 01/11/2017 - 14:21

JavaScript is very flexible and tries as much as possible to run code without raising an error. This is both a blessing and a curse. It’s a blessing for beginners who don’t understand everything they’re doing. It may become a curse when a subtle coding mistake leads to strange behavior instead of causing a clear failure.

Some of these coding mistakes are related to types. A JavaScript variable doesn’t have a defined type and its type can change during the lifetime of the variable. That’s a powerful feature, but it also makes it quite easy to make mistakes about types. The good news is that SonarJS is now able to detect some of these issues!

Let’s look at an example. It makes no sense to use a strict equality operator like === or !== on two operands which don’t have the same type: in such cases, === always returns false and !== always returns true. We have a rule to check that, and this rule found the following issue in jQuery:

In this case, we know that “type” is either a string or undefined when it is compared to the boolean value false with a strict equality operator. This condition is therefore useless, and such a comparison is certainly a bug.
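A minimal sketch of the pattern, with hypothetical names rather than the actual jQuery code:

```javascript
// "type" can only be a string or undefined in this function.
function describe(value) {
  var type = typeof value === "string" ? value : undefined;

  // Noncompliant: a string-or-undefined value is never strictly
  // equal to the boolean false, so this branch is dead code.
  if (type === false) {
    return "never reached";
  }
  return type === undefined ? "no type" : "type: " + type;
}
```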

Of course, we can go further. SonarJS embeds some knowledge about built-in objects and their properties and methods. We added a new rule “Non-existent properties shouldn’t be accessed for reading” which is based on that knowledge. It detects issues which could be due to a typo in the name of the property or to a mistake about the type of the variable, such as the following issue which was found in the OpenCart project:

This piece of code confuses two of its variables: “number” and “s”. The first one is a number and the second is a string representation of the first. The “length” property therefore exists on “s”, but is undefined on “number”. As a result, this function does not return what it’s supposed to.
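A hedged reduction of that confusion (illustrative names, not OpenCart’s actual code):

```javascript
function countDigits(number) {
  var s = String(number); // "s" is the string representation of "number"
  // Bug: numbers have no "length" property, so this yields undefined;
  // the author meant s.length.
  return number.length;
}

function countDigitsFixed(number) {
  return String(number).length; // length of the string representation
}
```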

Confusion about variable types can also be revealed by another rule: “Values not convertible to numbers should not be used in numeric comparisons”. The fact that an operand cannot be converted to a number could go unnoticed because of JavaScript’s flexibility: the operand would simply be converted to NaN, and the comparison would return false. This rule should help to spot such mistakes. Here’s an example that was detected in the Dojo Toolkit extras library:

We know that “methodArgs” may be an array: when it is, comparing it to a number doesn’t make sense and that’s what the rule detects. The author of this code probably intended to use methodArgs.length in the comparison.
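The shape of the mistake can be sketched like this (illustrative, not the actual Dojo code):

```javascript
// Comparing an array to a number: the array is converted via its
// string form, which is usually NaN, and NaN compares false to
// everything, so the check silently never fires.
function tooManyArgs(methodArgs) {
  return methodArgs > 2; // Noncompliant: [1, 2, 3] > 2 is always false
}

function tooManyArgsFixed(methodArgs) {
  return methodArgs.length > 2; // intended: compare the array's length
}
```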

How can SonarJS catch such mistakes? Briefly, we rely on path-sensitive dataflow analysis: as we explained a few months ago, our analyzer can explore the various execution paths of a function and the possible constraints on the variables. In the last few versions, we improved our engine so that it tracks the types of the variables. We derive type information based on indicators in the code such as:

  • Literals, e.g. 42 is a number, [] is an array.
  • Operators, e.g. the result of a + can be either a number (addition) or a string (concatenation).
  • typeof expressions.
  • Calls to built-in functions, e.g. we know that a call to Number.isNaN returns a boolean value.

That not only allowed us to implement the rules I just described, it also improved existing rules not directly related to types. The rule which checks for conditions which are always true or false is now able to find new issues such as the following one in the YUI project:

This piece of code tests whether “config” is a function twice. However, it’s re-assigned to null if the first test returns true. We therefore know for sure that the second test will return false. This rule doesn’t specifically check the types of the variables, but it is based on all the constraints we’ve derived on the variables and type is one of them.
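A reduction of that pattern, simplified from the description above rather than taken from the actual YUI code:

```javascript
function init(config) {
  if (typeof config === "function") {
    config();      // use the callback...
    config = null; // ...then deliberately clear it
  }
  // Noncompliant: on the only path where config was a function it was
  // just re-assigned to null, so this condition is always false.
  if (typeof config === "function") {
    return "unreachable";
  }
  return "done";
}
```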

Detecting such issues can greatly help JavaScript developers. Try it! Two options are available. The first is to use a SonarQube server, either your own or a public instance; the detected issues will look similar to the screenshots above. The other option is to use SonarLint inside your IDE, so you can detect issues as you code. Of course, you can use both a SonarQube server and SonarLint. Either way, you could save hours of debugging time!

Categories: Open Source

SonarQube 6.2 in Screenshots

Sonar - Thu, 01/05/2017 - 19:58

The SonarSource team is proud to announce the release of SonarQube 6.2, which brings a lot of significant changes, both to the interface and to the underlying mechanisms, to streamline and improve the user experience.

  • New “Projects” page
  • Enhanced “Issues” page
  • New landing page for anonymous users
  • Webhooks
  • Rating support in Quality Gates
  • Consolidated coverage
  • Authentication via HTTP header

New “Projects” page

The first big change logged-in users will notice is the new Projects space. By default it shows you an overview of the most significant metrics of each of your favorite projects:

You can also choose to browse the entire project set, or explore projects (the whole set or your favorites) using metric filters:

The Projects space replaces global dashboards, which have been dropped in this version along with project dashboards. For more on the mindset behind this change, see the post about the 6.x series.

Enhanced “Issues” page

Continuing the “me-centric” theme, the Issues page has been updated to show logged-in users their own issues by default:

Issues are now sorted by date, so you’ll see your leak at the top of the page, or the project leak when you’re in that context.

New landing page for anonymous users

Anonymous users will be greeted with a page that displays key instance metrics, and SonarQube concepts:

This page includes a customizable slot just below the instance metrics, as seen on


Webhooks

To help you better integrate SonarQube into your ALM chain, this version also adds the ability to configure up to ten global and ten project-level webhook URLs:

These URLs are POSTed to after each analysis report is processed. The POST payload is a JSON block that includes project identifiers and quality gate status. The use cases for this are things like stopping a build pipeline for poor quality, or posting notifications to wall boards or chat rooms.
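As a sketch, a webhook consumer could halt a pipeline when the quality gate is red. The payload field names below (qualityGate.status, project.key) are assumptions for illustration, not a documented schema:

```javascript
// Decide whether to block a build based on a webhook payload.
function shouldBlockPipeline(payload) {
  return Boolean(payload.qualityGate) && payload.qualityGate.status !== "OK";
}

// Hypothetical payload resembling what the POST body might contain:
var payload = {
  project: { key: "org.example:myproject", name: "My Project" },
  qualityGate: { status: "ERROR" }
};

if (shouldBlockPipeline(payload)) {
  console.log("Quality gate failed for " + payload.project.key);
}
```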

Rating support in Quality Gates

We’ve also improved Quality Gates this version, by supporting ratings (A, B, C…) rather than making you do the numeric conversions in your conditions:

We have, of course, updated the default Quality Gate to reflect this change.

Consolidated coverage

This version also introduces a change that’s more conceptual than visual in nature: the consolidation of coverage metrics. Now, you can have as many coverage reports for as many different types of testing (unit, integration, smoke, …) as you like. They’ll all be consolidated into “coverage”:

Authentication via HTTP header

Also in the non-visual realm is the introduction of authentication via HTTP headers. Now you can access SonarQube via Single-Sign-On using HTTP headers passed by an authentication proxy.

That’s all, folks!

It’s now time to download the new version and try it out. But don’t forget to read the installation or upgrade guide first.

Categories: Open Source

The Tweets You Missed in December

Sonar - Mon, 01/02/2017 - 16:46

Here are the tweets you likely missed last month!

"Cognitive Complexity, Because Testability != Understandability", by @GAnnCampbell

— SonarQube (@SonarQube) December 7, 2016

SonarQube has a new website, say hello to !

— SonarQube (@SonarQube) December 22, 2016

SonarQube VSTS/TFS extension 2.0 Released : with a new SQ Endpoint and support of SQ Scanner CLI cc @VSTeam

— SonarQube (@SonarQube) December 15, 2016

SonarJava 4.3 Release: cross-procedural dataflow analysis in a file and support for Cognitive Complexity #Java

— SonarQube (@SonarQube) December 16, 2016

SonarC++ 4.3 Released: activation of the path-sensitive dataflow analysis for C++

— SonarQube (@SonarQube) December 16, 2016

SonarQube COBOL 3.3 Released : 14 new rules, mainly about SQL correctness #cobol #sql

— SonarQube (@SonarQube) December 14, 2016

SonarLint for IntelliJ 2.5 allows developers to analyze at once all changed files since their last commit in VCS

— SonarQube (@SonarQube) December 15, 2016

SonarLint for Eclipse 2.4 notifies you when there are configuration changes or updates on the SonarQube server

— SonarLint (@SonarLint) December 14, 2016

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–”I think I can move this field over there, but I’m not sure”–and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


if (...choice 1 valid...) { ... }
if (...choice 2 valid...) { ... }


$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each) {
  if ($each->isValid()) {
    ...use $each->widget()...
  }
}

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}

A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking; I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway, then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.


Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
