SonarQube 4.4 in Screenshots

Sonar - Tue, 08/12/2014 - 11:29

The team is proud to announce the release of SonarQube 4.4, which includes many exciting new features:

  • Rules page
  • Component viewer
  • New Quality Gate widget
  • Improved multi-language support
  • Built-in web service API documentation

Rules page

With this version of SonarQube, rules come out of the shadow of profiles to stand on their own. Now you can search rules by language, tag, SQALE characteristic, severity, status (e.g. beta), and repository. Oh yes, and you can also search them by profile, activation, and profile inheritance.

Once you’ve found your rules, this is now where you activate or deactivate them in a profile – individually through controls on the rule detail or in bulk through controls in the search results list (look for the cogs). In fact, the profiles page no longer has its own list of rules. Instead, it offers a summary by severity, and a click-through to a rule search.

Another shift in rule handling comes for what used to be called “cloneable rules”. We’ve realized that strictly speaking, these are really “templates” rather than rules, and now treat them as such.

Templates can no longer be directly activated in a profile. Instead, you create rules from them and activate those.

Component viewer

The component viewer also experienced major changes in this version. The tabs across the top now offer filtering, which controls what parts of the code you see (e.g. show only the code that has issues), and decoration, which controls what you see layered on top of the code (show/hide the issues, the duplications, etc.).

A workspace concept debuts in this version. As you navigate from file to file through either code coverage or duplications, it helps you track where you are and where you’ve been.

New Quality Gate widget

A new Quality Gate widget makes it clearer just what’s wrong if your project isn’t making the grade. Now you can see exactly which measures are out of line:

Improved multi-language support

Multi-language analysis was introduced in 4.2 and it just keeps getting better. Now we’ve added the distribution of LOC by language in the size widget for multi-language projects.

We’ve also added a language criterion to the Issues search:

Built-in web service API documentation

To find this last feature, look closely at 4.4’s footer.

We now offer on-board API documentation.

That’s all, Folks!

Time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Unit Test Execution in SonarQube

Sonar - Wed, 08/06/2014 - 15:26

Starting with Java Ecosystem version 2.2 (compatible with SonarQube version 4.2+), we no longer drive the execution of unit tests during Maven analysis. Dropping this feature seemed like such a natural step to us that we were a little surprised when people asked us why we’d taken it.

Contrary to popular belief, we didn’t drop test execution simply to mess with people. :-) Actually, we’ve been on this path for a while now: we had previously dropped test execution during PHP and .NET analyses, so this Java-only, Maven-only execution was the last holdout. But that’s trivial as a reason. The truth is, it’s something we never should have done in the first place.

In the early days of SonarQube, there was a focus on Maven for analysis, and an attempt to add all the bells and whistles. From a functional point of view, the execution of tests is something that never belonged to the analysis step; we just did it because we could. But really, it’s the development team’s responsibility to provide test execution reports. Because of the potential for conflicts among testing tools, the dev team are the only ones who truly know how to correctly execute a project’s test suite. And in the words of SonarSource co-founder and CEO, Olivier Gaudin, “it was pretentious of us to think that we’d be able to master this in all cases.”

And master it, we did not. So there we were, left supporting a misguided, gratuitous feature that we weren’t sure we had full test coverage on. There are so many different, complex Surefire configuration cases to cover that we just couldn’t be sure we’d implemented tests for all of them.

Plus, this automated test execution during Java/Maven analysis had an ugly technical underbelly. It was the last thing standing in the way of removing some crufty, thorn-in-the-side old code that we really needed to get rid of in order to move forward efficiently. It had to go.

We realize that switching from test execution during analysis to test execution before analysis is a change, but it shouldn’t be an onerous one. You simply go from

mvn clean install
mvn sonar:sonar

to

mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true
mvn sonar:sonar

Your analysis will show the same results as before, and we’re left with a cleaner code base that’s easier to evolve.
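
If you’d rather keep the analysis command line short, the same thing can be done in the project’s pom.xml by binding JaCoCo’s prepare-agent goal there instead – a minimal sketch (the version shown is just illustrative; use whatever is current for you):

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.1.201405082137</version>
  <executions>
    <execution>
      <goals>
        <!-- attach the JaCoCo agent so coverage is recorded while the tests run -->
        <goal>prepare-agent</goal>
      </goals>
    </execution>
  </executions>
</plugin>

With that in place, a plain mvn clean install followed by mvn sonar:sonar should pick up the coverage data, assuming the report lands in the default location the Java plugin looks in.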

.NET in SonarQube: bright future

Sonar - Thu, 07/10/2014 - 11:12

A few months ago, we started on an innocuous-seeming task: make the .NET Ecosystem compatible with the multi-language feature in SonarQube 4.2. What followed was a bit like one of those cartoons where you pull a loose thread on the character’s sweater and the whole sweater starts to unravel. Oops.

Once we stopped pulling the string and started knitting again (to torture a metaphor), what came off the needles was a different sweater than the one we’d started with. The changes we made along the way – fewer external tools, simpler configuration – were well-intentioned, and we still believe they were the right things to do. But many people were at pains to tell us that the old way had been just fine, thank you. It had gotten the job done on a day-to-day basis for hundreds of projects and hundreds of thousands of lines of code, they said. It had been crafted by .NETers for .NETers, and as Java geeks, they said, we really didn’t understand the domain.

And they were right. But when we started, we didn’t understand how much we didn’t understand. Fortunately, we have a better handle on our ignorance now, and a plan for overcoming it and emerging with industry-leading C# and VB.NET analysis tools.

First, we’re planning to hire a C# developer. This person will be first and foremost our “really get .NET” person, and represents a real commitment to the future of SonarQube’s .NET plugins. She or he will be able to head off our most boneheaded notions at the pass, and guide us in the ways of righteousness. Or at least in the ways of .NETness.

Of course it’s not just a guru position. We’ll call on this person to help us progressively improve and evolve the C# and VB.NET plugins, and their associated helpers, such as the Analysis Bootstrapper. He (or she) will also help us fill the gaps back in. When we reworked the .NET Ecosystem there were gains, but there were also losses. For instance, there are corner cases not covered today by the C# and VB.NET plugins that were covered by the old .NET Ecosystem.

We also plan to start moving these plugins into C#. We’ve realized that we just can’t do the job as well in Java as we need to. But the move to C# code will be a gradual one, and we’ll do our best to make it painless and transparent. Also on the list will be identifying the most valuable rules from FxCop and ReSharper and re-implementing them in our code.

At the same time, we’ll be advancing on these fronts for both C# and VB.NET:

  • Push “cartography” information to SonarQube.
  • Implement bug detection rules.
  • Implement framework-specific rules, for things like SharePoint.

All of that with the ultimate goal of becoming the leader in analyzing .NET code. We’ve got a long way to go, but we know we’ll bring it home in the end.

With great power comes great configuration

Sonar - Thu, 06/26/2014 - 16:16

We’ve got an ambitious vision for the C/C++ plugin this year. To fulfill it, we started with some under-the-cover improvements to the parser and the internal data model. Those improvements were really just a means to an end, but they’ve had the effect of markedly improving our ability to parse and analyze C and C++ code.

Unfortunately, they came with a downside: a higher analysis configuration burden. For instance, in order to correctly expand macros in the code (and we can, now), we need to know what each macro means, which means that the macro definitions need to be passed in to the analysis.
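
To make the problem concrete, here’s a contrived sketch (all the names are made up): without knowing how the macro is defined, an analyzer can’t even tell what the last line declares.

#ifdef USE_WIDE_CHARS
#define CHAR_T wchar_t   /* one build flavor */
#else
#define CHAR_T char      /* another build flavor */
#endif

CHAR_T buffer[64];       /* unparseable until the CHAR_T definition is known */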

Just contemplating the configuration update required for a single large system made me queasy, and I wasn’t the only one. So we set the main plugin aside for a little while this spring and wrote a build wrapper, which will eavesdrop on the tool of your choice (e.g. Make or MSBuild) to gather all the extra configuration data for you.

The build wrapper supports the Clang, GCC and MSVC compilers, and is available in 32-bit and 64-bit versions for Windows and Linux, and in a 64-bit version for OS X. Using it couldn’t be simpler. You drop it somewhere on your machine (make sure it’s executable on ‘nix systems), and prepend your build command with it:


build-wrapper --out-dir [output directory] make

Of course, it needs to be a full build that the wrapper is eavesdropping on, so ideally this command would come after a make clean. And for MSBuild it would be something like:


build-wrapper --out-dir [output directory] msbuild /t:rebuild [other options]
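
Putting that advice together for Make (assuming your project has the usual clean target), a from-scratch capture would look like:

make clean
build-wrapper --out-dir [output directory] make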

The output directory is where the build wrapper writes its data files, creating the directory if it doesn’t exist. Currently, the build wrapper simply adds its files to the specified directory, but that behavior could change in the future (e.g. someday it might start by issuing rm [output directory]/*).

The build wrapper writes two files: build-wrapper.log and build-wrapper-dump.json. The .log file is just that – a log that Support may ask for if you ever contact them with questions. The .json file is the one that’s actually used during analysis. This screenshot of the build-wrapper-dump.json from my Linux build of CMake should give you an idea what these files look like:
[Screenshot: build-wrapper-dump.json]

I’m only posting a brief screenshot because the full file is 43,614 lines long (plus a blank line at the end). I’m not saying that all the information in the file is absolutely required for analysis, but it would have taken me a very long time to identify and specify the pieces that are.

Once the build is complete, and your .json file is written, it’s time to kick off a SonarQube analysis. But first you’ll need to tell SonarQube where to find all that extra configuration data the build wrapper just logged. In your sonar-project.properties add the following:

sonar.cfamily.build-wrapper-output=[output directory]

I end up with a properties file that’s only six lines long (including whitespace), and SonarQube has everything it needs to analyze my project:

sonar.projectKey=cmake-linux-clang
sonar.projectName=CMake Linux Clang build
sonar.projectVersion=1.0

sonar.sources=Source
sonar.cfamily.build-wrapper-output=build

If you haven’t used the build wrapper on your C/C++ projects yet, you should give it a try and let us know how it goes. Hopefully, it will help you drastically improve the quality of your analyses while dramatically decreasing the configuration burden.

But you won’t have to tell people it was easy.

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession – how to slice the work of a refactoring into safe steps, and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. Working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain – “I think I can move this field over there, but I’m not sure” – and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get into a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but then try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around – eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway, then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.
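
For concreteness, that consistent state looked roughly like this, in the same pseudocode as the examples above (the repetition across Choices is exactly the point):

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
Choice2 isValid() {
  return $this->a->isChoice2Valid();
}
... and so on, for all ten Choices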

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa), I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving on to the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what it has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.

 

===============================================

I used the above as an exercise to help me understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”