
Open Source

SonarQube 5.2 in Screenshots

Sonar - Thu, 11/26/2015 - 15:14

The team is proud to announce the biggest release ever of the SonarQube server, version 5.2, which includes the second-most-anticipated feature ever: code scanners no longer access the database! In brief, this version features:

  • Scanners no longer access the database
  • Enhanced monitoring
  • Better issue management
  • Improved UI for global admin
  • Also worth noting

Scanners no longer access the database

In a significant, fundamental change, this version breaks the direct ties from the SonarQube Scanners (SonarQube Runner, Maven, Gradle, …) to the SonarQube database. From this version forward, it is no longer necessary to hand out your SonarQube database credentials to would-be analyzers, and if they’re still included in your analysis parameters, you’ll see warnings in the log:
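To make the change concrete, here’s a minimal sketch of the analysis settings involved (standard property names from the scanner documentation; the hosts and credentials are placeholders):

  # Obsolete from 5.2 on; passing these now just triggers the warning above
  sonar.jdbc.url=jdbc:mysql://dbserver:3306/sonar
  sonar.jdbc.username=sonar
  sonar.jdbc.password=sonar

  # All a scanner needs in order to reach the server
  sonar.host.url=http://sonarqube.example.com:9000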

Breaking the database connection means you’re now free to execute analysis from CI services like travis-ci, appveyor, VSO Build, and so on without biting your nails over database security. Instead, scanners now submit analysis reports to the server, and the server processes them asynchronously. This means that analysis results are not available in the Web application right after the scanner has finished its execution; it can take some time, depending on the load on the server:

But it also means that it’s no longer required to have a fat network connection between the machines analysis runs on and the database. Now you can arrange those machines on your network based solely on your own criteria.

As soon as an analysis report is sent to the server, the status of the report is displayed on the dashboard of the corresponding project:

Enhanced monitoring

Because more processing is done server-side, more information is available server-side to monitor and understand what’s going on in SonarQube. First, the former “Analysis Reports” page has been renamed “Background Tasks” and redesigned to offer far more features, including access to the analysis report processing logs:

The page is available at project administration level too:

Server logs are also now accessible from the UI, and it’s possible to dynamically change the server log level (it reverts automatically on restart):

Better issue management

Continuing the theme of more and better information, the reporting of issues has also improved in this version. First is the ability to have more precise issue highlighting, additional issue locations, and additional messages:

The additional highlights and messages are attached to the issues, so you have to select an issue to see its “extras”:

Of course, the platform just makes these things possible; the language plugins have to support them before you’ll see these effects. So far, you can see additional locations and messages in select rules in the Java plugin.

Another improvement is the ability to display issues by count or technical debt:

As well as a new entry page for issues with quick links to default and saved issue filters:

Speaking of filters, there’s a new issue filter widget with a wide variety of display options, so you can put the results of any search directly on your dashboard:

Wrapping up the topic of issues, we’ve improved notifications, with a new “My New Issues” notification that tells you only about what’s relevant to you, and we’ve added the ability to define a default issue assignee on a project. This account will be used for every new issue that SonarQube can’t assign automatically based on the SCM information.

Improved UI for global admin

A number of pages have been rewritten in this version for a more consistent user experience. The one available to everyone is the Quality Profiles page:

Beyond that, many administrative pages have been rewritten, including all the security pages:

As well as the Update Center:

And the Project Management page:

As a side-effect of these rewrites, web services are now available for all the types of data required to feed these pages. Check your server’s api_documentation for details, or use Nemo’s for a quick reference.
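If you’d like to poke at the new web services from the command line, here’s a quick sketch (two read-only endpoints; the exact parameters for each are listed on the api_documentation page of your own server):

  curl http://localhost:9000/api/qualityprofiles/search
  curl http://localhost:9000/api/languages/list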

Also worth noting

As a side-effect of severing the ties between analysis and the database, plugins that do data manipulation beyond simply gleaning raw numbers and issues directly from source files will probably need to be rewritten: the APIs have changed, and such processing must now be done server-side.
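For plugin authors, the replacement for that processing lives server-side in the Compute Engine. Here’s a minimal sketch against the org.sonar.api.ce.measure package introduced for this purpose (the metric key doubled_ncloc is made up for illustration and would also have to be declared through the usual Metrics extension point; verify all names against the javadoc of your target version):

import org.sonar.api.ce.measure.Measure;
import org.sonar.api.ce.measure.MeasureComputer;

// Runs inside the server's Compute Engine, not inside the scanner:
// derives a new measure from measures the scanner reported.
public class DoubledNclocComputer implements MeasureComputer {

  @Override
  public MeasureComputerDefinition define(MeasureComputerDefinitionContext defContext) {
    return defContext.newDefinitionBuilder()
        .setInputMetrics("ncloc")           // measures this computer reads
        .setOutputMetrics("doubled_ncloc")  // measure it produces (hypothetical key)
        .build();
  }

  @Override
  public void compute(MeasureComputerContext context) {
    Measure ncloc = context.getMeasure("ncloc");
    if (ncloc != null) {
      context.addMeasure("doubled_ncloc", ncloc.getIntValue() * 2);
    }
  }
}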

All design-related features were dropped in this version (see SONAR-6553 for details), including Package Tangle Index and related metrics.

Also gone in 5.2, but slated to reappear in 5.3, is cross-module/project duplication detection. Why? We simply ran out of time.

That’s All, Folks!

Time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

Analysis of Visual Studio Solutions with the SonarQube Scanner for MSBuild

Sonar - Thu, 11/19/2015 - 17:19

At the end of April 2015, during the Build Conference, Microsoft and SonarSource announced SonarQube integration with MSBuild and Team Build. Today, half a year later, we’re releasing the SonarQube Scanner for MSBuild 1.0.2. But what exactly is the SonarQube Scanner for MSBuild? Let’s find out!

The SonarQube Scanner for MSBuild is the tool of choice to perform SonarQube analysis of any Visual Studio solution and MSBuild project. From the command line, a project is analyzed in 3 simple steps:

  1. MSBuild.SonarQube.Runner.exe begin /key:project_key /name:project_name /version:project_version

  2. msbuild /t:rebuild

  3. MSBuild.SonarQube.Runner.exe end

The “begin” invocation sets up the SonarQube analysis. Mandatory analysis settings such as the SonarQube project key, name and version must be passed in, as well as any optional settings, such as paths to code coverage reports. During this phase, the scanner fetches the quality profile and settings to be used from the SonarQube server.

Then, you build your project as you typically would. As the build happens, the SonarQube Scanner for MSBuild gathers the exact set of projects and source files being compiled and analyzes them.

Finally, during the “end” invocation, the remaining analysis data, such as Git or TFVC information, is gathered, and the overall results are sent to the SonarQube server.

Using the SonarQube Scanner for MSBuild from Team Foundation Server and Visual Studio Online is even easier: there is no need to install the scanner on build agents, and native build steps corresponding to the “begin” and “end” invocations are available out-of-the-box (see the complete Microsoft ALM Rangers documentation for details).

A similar experience is offered to Jenkins users as well, starting with version 2.3 of the Jenkins SonarQube plugin.

Compared to analyzing Visual Studio solutions with the sonar-runner and the Visual Studio Bootstrapper plugin, this new SonarQube Scanner for MSBuild offers many advantages:

  1. Having a Visual Studio solution (*.sln) file is no longer a requirement, and customized *.csproj files are now supported! The analysis data is now extracted from MSBuild itself, instead of being retrieved by manually parsing *.sln and *.csproj files. If MSBuild understands it, the SonarQube Scanner for MSBuild will understand it!

  2. For .NET, analyzers can now run as part of the build with Roslyn, which not only speeds up the analysis but also yields better results; instead of analyzing files one by one in isolation, the MSBuild integration enables analyzers to understand the file dependencies. This translates into fewer false positives and more real issues.

  3. Enabling FxCop is now as simple as enabling its rules in the quality profile. There is no longer any need to manually set properties such as “sonar.visualstudio.outputPaths” or “sonar.cs.fxcop.assembly” for every project: All the settings are now deduced by MSBuild.

As a consequence, we are deprecating the use of sonar-runner and the Visual Studio Bootstrapper plugin to analyze Visual Studio solutions, and advise all users to migrate to the SonarQube Scanner for MSBuild instead. Before you begin your migration, here are a few things you need to be aware of:

  1. The analysis must be executed from a Windows machine, with the .NET Framework version 4.5.2+ installed, and the project must be built using MSBuild 12 or 14. Note that the project you analyze can itself target older versions of the .NET Framework, but the SonarQube Scanner for MSBuild itself requires at least version 4.5.2 to run.

  2. Obviously, you now need to be able to build the project you want to analyze!

  3. Most old analysis properties (such as “sonar.cs.fxcop.assembly”, “sonar.dotnet.version”) are no longer used and should be removed. The only ones still useful are the paths to unit test results and code coverage reports, which can now be passed at the “begin” step (see the sketch after this list).

  4. The “” file is no longer used and should be deleted.
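As promised above, here’s a rough sketch of passing a coverage report path to the “begin” invocation with the /d: switch used for additional analysis properties (sonar.cs.vstest.reportsPaths is, to my knowledge, the C# plugin’s key for VSTest results; check your plugin’s documentation for the exact key):

  MSBuild.SonarQube.Runner.exe begin /key:project_key /name:project_name /version:project_version /d:sonar.cs.vstest.reportsPaths="TestResults\results.trx"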

Try it out for yourself and get started! Download the SonarQube Scanner for MSBuild, install it, and start to analyze your projects! If you are new to SonarQube, the end-to-end guide produced by the Microsoft ALM Rangers will take you through every step.

Categories: Open Source

SonarQube Enters the Security Realm and Makes a Good First Showing

Sonar - Thu, 11/12/2015 - 16:45

For the last year, we’ve been quietly working to add security-related rules to SonarQube’s language plugins. At September’s SonarQube Geneva User Conference we stopped being quiet about it.

About a year ago, we realized that our tools were beginning to reach the maturity levels required to offer not just maintainability rules, but bug and security-related rules too, so we set our sights on providing an all-in-one tool and started an effort to specify and implement security-related rules in all languages. Java has gotten the furthest; it currently has nearly 50 security-related rules. Together, the other languages offer another 50 or so.

That may not sound like a lot, but I’m pleased with our progress, particularly when tested against the OWASP Benchmark project. If you’ve heard of OWASP before, it was probably in the context of the OWASP Top 10, but OWASP is an umbrella organization with multiple projects under it (kinda like the Apache Foundation). The Top 10 is OWASP’s flagship project, and the benchmark is an up-and-comer.

The benchmark offers ~2700 Java servlets that do and do not demonstrate vulnerabilities corresponding to 11 different CWE items. The CWE (Common Weakness Enumeration) contains about 1,000 items, and broadly describes patterns of insecure and weak code.

The guys behind the benchmark are testing all the tools they can get their hands on and publishing the results. For commercial tools, they’re only publishing an average score (because the tool licenses don’t allow them to publish individual, named scores). For open source tools, they’re naming names. :-)

When I prepared my slides for my “Security Rules in SonarQube” talk, the SonarQube Java Plugin arguably had the best score, finding 50% of the things we’re supposed to and only flagging 17% of the things we should have ignored, for an overall score of 33% (50-17 = 33). Compare that to the commercial average, which has a 53% True Positive Rate and a 28% False Positive Rate, for a final score of 26%. Since then, a new version of Find Security Bugs has been released, and its spot on the graph has jumped some, but I’m still quite happy with our score, both in relative and absolute terms. Here’s the summary graph presented on the site:

Notice that the dots are positioned based on the True Positive Rate (y-axis) and False Positive Rate (x-axis). Find Security Bugs is higher on the True Positive axis than SonarQube, which threw me for a minute, but it’s also further out on the False Positive axis. That’s why I graphed the tools’ overall scores:

Looked at this way, it’s probably quite clear why I’m still happy with the SonarQube Java scores. But I’ll give you some detail to show that it isn’t (merely) about one-upmanship:

This graph shows the Java plugin’s performance on each of the 11 CWE code sets individually. I’ll start with the five 0/0 scores in the bottom-left. For B, E, G, and K we don’t yet have any rules implemented (they’re “coming soon”). So… yeah, we’re quite happy to score a 0 there. :-) For F, SQL Injection, we have a rule, but every example of the vulnerability in this benchmark slips through a hole in it. (That should be fixed soon.) On a previous version of the benchmark, we got a better score for SQL Injection, but with the newest iteration, the code has been pared from 21k files to 2.7k, and apparently all the ones we were finding got eliminated. That’s life.
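For the curious, this is the general shape of the pattern the injection test cases exercise; a minimal Java sketch of my own devising, not an actual benchmark servlet:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.servlet.http.HttpServletRequest;

public class UserLookup {
  // Untrusted request data flows straight into a SQL string: injectable.
  static ResultSet find(Connection conn, HttpServletRequest request) throws Exception {
    String name = request.getParameter("name"); // tainted input
    String query = "SELECT * FROM users WHERE name = '" + name + "'";
    Statement stmt = conn.createStatement();
    return stmt.executeQuery(query); // the sink a rule should flag
  }
}

A rule has to track the tainted value from getParameter() all the way to executeQuery(); the benchmark pushes that flow through assorted indirections, which is exactly where holes like ours show up.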

For A and D, it’s interesting to note that while the dots are placed toward the upper-right of the graph, they have scores of -2% and 0% respectively. That’s because the false positives cancelled out the true positives in the scoring. Clearly, we’d rather see a lower false positive rate, but we knew we’d hit some FPs when we decided to write security rules. And with a mindset that security-related issues require human verification, this isn’t so bad. After all, what’s worse: manually eliminating false positives, or missing a vulnerability because of a false negative?

For ‘I’, we’ve got about the best score we can get. The cases we’re missing are designed to be picked up only by dynamic analysis. Find Security Bugs gets the same score on this one: 68%.

For the rest, C, H, and J, we’ve got perfect scores: a 100% True Positive Rate and a 0% False Positive Rate. Woo hoo!

Of course, saying we’ve got 100% on item C or 33% overall is only a reflection of how we’re doing on those particular examples. We do better on some vulnerabilities and less so on others. Over time, I’m sure the benchmark will grow to cover more CWE items and cover in more depth the items it already touches on. As it does, we’ll continue to test ourselves against it to see what we’ve missed and where our holes are. I’m sure our competitors will too, and we’ll all get gradually better. That’s good for everybody. But you won’t be surprised if I say we’ll stay on top of making sure SonarQube is always the best.

Categories: Open Source

SonarLint: Fixing Issues Before They Exist

Sonar - Thu, 10/22/2015 - 08:44

I’m very happy to announce the launch of a new product series at SonarSource: SonarLint, which will help you fix code quality issues before they even exist.

SonarLint represents a new approach to code quality: instant issue checking. It sits in the IDE and is totally developer-oriented. We’ve started with three variations: SonarLint for Visual Studio, SonarLint for Eclipse, and SonarLint for IntelliJ.

Version 1.x will be available for C# via SonarLint for Visual Studio, and for Java and PHP with both SonarLint for Eclipse and SonarLint for IntelliJ. So now you can start catching and fixing issues from your projects’ first keystrokes.

Here’s a preview in Visual Studio:

And here’s a preview for Eclipse:

Later, we’ll add the ability to link SonarLint with a SonarQube instance.

This complete break from the approach of previous implementations is what prompted us to start over with a new brand. With SonarLint, it’s a new day in code quality.

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession: how to slice the work of a refactoring into safe steps, and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain (“I think I can move this field over there, but I’m not sure”) and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but then try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around: eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


if (...choice 1 valid...) {
  ...add choice 1 widget...
}
if (...choice 2 valid...) {
  ...add choice 2 widget...
}
...eight more of the same...


$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    ...add $each->widget() to the menu...;

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}

A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.


Not all refactorings require horizontal phases. If you have one big ugly method and you create a Method Object for it and break the method into tidy, shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
