
Open Source

The Tweets You Missed in September

Sonar - Wed, 10/05/2016 - 10:21

Here are the tweets you likely missed last month!

No barriers! Sign up & start analyzing your OSS projects today!

— SonarQube (@SonarQube) September 1, 2016

SonarLint for @VisualStudio 2.7 Released: with 30 new rules targeting …

— SonarLint (@SonarLint) September 23, 2016

SonarQube #JavaScript plugin 2.16 Released: …

— SonarQube (@SonarQube) September 9, 2016

SonarQube ABAP 3.3 Released: Five new rules @SAPdevs #abap @SAP

— SonarQube (@SonarQube) September 7, 2016

SonarQube Scanner for Gradle 2.1 natively supports Android projects, and brings other improvements.

— SonarQube (@SonarQube) September 26, 2016

Categories: Open Source

Version 7 Beta 7

IceScrum - Tue, 09/27/2016 - 10:41
7.0.0-beta.7 Here comes iceScrum 7.0.0-beta.7. This version brings the “Team availability” feature back, taking another step towards a final release. It works as before, including the recently added and useful Sprint Burndown Chart, which compares the remaining time to the remaining availability. Apart from that, this new beta also fixes a few bugs…
Categories: Open Source

We Are Adjusting Rules Severities

Sonar - Thu, 09/08/2016 - 09:31

With the release of SonarQube 5.6, we introduced the SonarQube Quality Model, which pulls Bugs and Vulnerabilities out into separate categories to give them the prominence they deserve. Now we’re tackling the other half of the job: “sane-itizing” rule severities, because not every bug is Critical.

Before the SonarQube Quality Model, we had no way of bringing attention to bugs and security vulnerabilities except to give them high severity ratings. So all rules with a Blocker or Critical severity were related to reliability (bugs) or security (vulnerabilities), and vice versa: every bug or vulnerability rule carried a Blocker or Critical severity. That made sense before the SonarQube Quality Model, but it doesn’t now. Now, just being a Bug is enough to draw the right attention to an issue, and having every Bug or Vulnerability at the Blocker or Critical level is actually a distraction.

So we’re fixing it. We’ve reclassified the severity on every single rule specification in the RSpec repository. The changes to existing reliability/bug rules are reflected in version 4.2 of the Java plugin, and future releases of Java and other languages should reflect the rest of the necessary changes. In some cases, the changes are significant (perhaps even startling), so it makes sense to explain the thinking.

The first thing to know is that the reclassifications are done based on a truth table:

  Severity   Impact   Likelihood
  Blocker    high     high
  Critical   high     low
  Major      low      high
  Minor      low      low

For each rule, we first asked ourselves: What’s the worst thing that can reasonably happen as a result of an issue raised by this rule, factoring in Murphy’s Law without predicting Armageddon?

With the worst thing in mind, the rest is easy. For bugs we evaluate impact and likelihood with these questions:

  • Impact: Will the “worst thing” take down the application (either immediately or eventually), or corrupt stored data? If the answer is “yes”, impact is high.
  • Likelihood: What is the probability the worst will happen?

For vulnerabilities, the questions are:

  • Impact: Could the exploitation of the vulnerability result in significant damage to your assets or your users?
  • Likelihood: What is the probability a hacker will be able to exploit the issue?

And for code smells:

  • Impact: Could the code smell lead a maintainer to introduce a bug?
  • Likelihood: What is the probability the worst will happen?
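
Taken together, the table and these questions boil down to a tiny decision function. Here is a minimal sketch (hypothetical names and shape; an illustration of the truth table above, not SonarSource’s actual implementation):

function severity($highImpact, $highLikelihood) {
  // Straight transcription of the truth table above.
  if ($highImpact && $highLikelihood) return 'Blocker';
  if ($highImpact)                    return 'Critical';
  if ($highLikelihood)                return 'Major';
  return 'Minor';
}

// Example: a bug whose worst case corrupts stored data (high impact)
// but is very unlikely to occur (low likelihood) rates Critical, not Blocker.
echo severity(true, false); // Critical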

That’s it. Rule severities are now transparent and easy to understand. And as these changes roll out in new versions of the language plugins, severity inflation should quickly become a thing of the past!

Categories: Open Source

The Tweets You Missed in August

Sonar - Tue, 09/06/2016 - 16:36

Here are the tweets you likely missed last month!

SonarQube Java 4.1 provides 7 new rules and improves try-catch handling.

— SonarQube (@SonarQube) August 4, 2016

SonarQube Swift Plugin 1.7 Released: native support of XCode 7+ coverage reports, … #swift

— SonarQube (@SonarQube) August 26, 2016

SonarLint for Eclipse 2.2 supports Python, custom rules and issue/file exclusions

— SonarLint (@SonarLint) August 1, 2016

SonarLint for IntelliJ 2.3 supports Python, custom rules and issue/file exclusions

— SonarLint (@SonarLint) August 5, 2016

SonarLint for Command Line 2.0 Released: with full support of the Connected Mode

— SonarLint (@SonarLint) August 17, 2016

Categories: Open Source

SonarAnalyzer for C#: The Rule Engine You Want to Use

Sonar - Thu, 09/01/2016 - 14:49

If you’ve been following the releases of the Scanner for MSBuild and the C# plugin over the last two years, you’ve probably noticed that we significantly improved our integration with the build tool and at the same time added a lot of new rules. We also introduced SonarLint for Visual Studio, a new tool to analyze code inside the IDE. With these steps completed, we are deprecating the SonarQube ReSharper plugin so that we can provide a consistent, high-quality experience across our tools.

Over the last couple of years we’ve worked in close collaboration with Microsoft to make our products fit easily into the .NET ecosystem. The goal of the collaboration was two-fold:

  • integrate the SonarQube Scanner for MSBuild seamlessly into the build process
  • develop the Connected Mode in SonarLint for Visual Studio to propagate analysis settings from SonarQube to Visual Studio.

The improvements to the SonarQube Scanner for MSBuild resulted in pre- and post-build command line steps that respectively download settings from, and upload analysis results to, your SonarQube server. In between these steps, your MSBuild build itself doesn’t need to change at all. And with the SonarLint Connected Mode, we achieved our main goal of showing the exact same issues inside the IDE as you’d see on the SonarQube server.
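
Concretely, a scanner run wraps the normal build in a begin step and an end step, along these lines (a sketch: the executable name and required properties vary by scanner version, and the project key, name, and version values here are placeholders):

  MSBuild.SonarQube.Runner.exe begin /k:"org.example:myproject" /n:"My Project" /v:"1.0"
  msbuild /t:Rebuild
  MSBuild.SonarQube.Runner.exe end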

From a technology perspective, both of these integration pieces depend heavily on Roslyn, the new .NET compiler platform. Additionally, we’ve put a great deal of effort into implementing rules based on Roslyn. From SonarLint for Visual Studio version 1.0, which was released on July 20, 2015 with 76 rules, we’ve grown our C# offering to 173 rules. Our C# rule engine, the SonarAnalyzer for C#, is the underlying rule engine of both SonarLint for Visual Studio and the C# plugin, so no matter where you run the analysis, you benefit from the new rules. Many of the rules may already be familiar to you, because we prioritized the implementation of ReSharper-like rules: we went through all the C# warning rules that are enabled by default in ReSharper, and in the end we found that more than 80% of them are now covered by the SonarAnalyzer for C#.

We even went a step further and built the SonarQube Roslyn SDK, which provides a way to integrate your own Roslyn-based rules into the analysis process, both inside the IDE and with the Scanner for MSBuild. However, we can’t provide the same consistent user experience with ReSharper because it’s not based on Roslyn: ReSharper analysis in the build process isn’t MSBuild-based (it requires a dedicated command line tool), and inside Visual Studio, ReSharper is a completely separate analysis tool, so there’s no way to make the Connected Mode support ReSharper settings. As a result, we decided to deprecate the ReSharper plugin and move it to the community-maintained extensions.

To sum up, in order to focus our efforts on valuable features and provide you with the best user experience, we decided to drop support for the ReSharper plugin. “Less is More” is a frequently repeated mantra at SonarSource. With this step, you’ll have fewer tools to worry about and a more consistent experience across our products. Additionally, you’ll benefit from our quick release cycles, and get updates every month or so. Recently, we’ve focused our efforts on advanced bug detection rules. Did you know that our brand new symbolic execution engine found a NullReferenceException bug in Roslyn?

Categories: Open Source

Version 7 Beta 6

IceScrum - Mon, 08/29/2016 - 17:41
7.0.0-beta.6 Here is a new iceScrum 7: 7.0.0-beta.6. As promised, after Git/SVN integration we worked on another big iceScrum Pro feature: integration with bug trackers, namely Bugzilla, Jira, Mantis, Redmine and Trac. These integrations work quite like before (the old documentation remains available for reference). A nifty addition is that you can now apply…
Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession: how to slice the work of a refactoring into safe steps, and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain (“I think I can move this field over there, but I’m not sure”) and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


// Before: ten if statements in a row
if (...choice 1 valid...) { ... }
if (...choice 2 valid...) { ... }

// After: one Choice object per menu item
$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    ...add $each->widget() to the menu...;

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.


Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
