
Open Source

Language Plugins Rock SonarQube Life!

Sonar - Thu, 06/23/2016 - 13:43

SonarAnalyzers are fundamental pillars of our ecosystem. The language analyzers play a central role, but the value they bring isn’t always obvious. The aim of this post is to highlight the ins and outs of SonarAnalyzers.

The basics

The goal of the SonarAnalyzers (packaged either as SonarQube plugins or in SonarLint) is to raise issues on problems detected in source code written in a given programming language. The detection of issues relies on the static analysis of source code and the analyzer’s rule implementations. Each programming language requires a specific SonarAnalyzer implementation.

The analyzer


The SonarAnalyzer’s static analysis engine is at the core of source code interpretation. The scope of the analysis engine is quite large: it ranges from basic syntax parsing to the advanced determination of the potential states of a piece of code. At minimum, it provides the bare features required for the analysis: basic recognition of the language’s syntax. The better the analyzer is, the more advanced its analysis can be, and the trickier the bugs it can find.

Driven by the goal of performing ever more advanced analyses, the analyzers are continuously improved. New ambitions in terms of validation require constant effort in the development of the SonarAnalyzers. In addition, regular updates are required to keep up with each programming language’s evolution.

The rules



The genesis of a rule starts with the writing of its specification. The specification of each rule is an important step. The description should be clear and unequivocal in order to be explicit about what issue is being detected. Not only must the description of the rule be clear and accurate, but code snippets must also be supplied to demonstrate both the bad practice and its fix. The specification is available from each issue raised by the rule to help users understand why the issue was raised.

Rules also have tags. The issues raised by a rule inherit the rule’s tags, so that both rules and issues are more searchable in SonarQube.

Once the specification of a rule is complete, next comes the implementation. Based on the capabilities offered by the analyzer, rule implementations detect increasingly tricky patterns of maintainability issues, bugs, and security vulnerabilities.
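To make the pattern-to-issue idea concrete, here is a toy sketch of a “rule” in Python. It is purely illustrative and hypothetical: real SonarAnalyzer rules work on a fully parsed syntax tree, not on raw lines or regular expressions.

```python
import re

def check_empty_catch(source):
    """A toy 'rule': flag empty catch blocks as (line, message) issues.

    Real SonarAnalyzer rules operate on a parsed syntax tree; this
    regex-on-lines version only illustrates pattern -> issue reporting.
    """
    issues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(r'catch\s*\([^)]*\)\s*\{\s*\}', line):
            issues.append((lineno, "Either handle this exception or explain why it is ignored."))
    return issues

print(check_empty_catch('try { run(); }\ncatch (Exception e) {}'))
# -> [(2, 'Either handle this exception or explain why it is ignored.')]
```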


Continuous Improvement


By default, SonarQube ships with three SonarAnalyzers: Java, PHP, and JavaScript.
The analysis of other languages can be enabled by the installation of additional SonarAnalyzer plugins.

The SonarQube community officially supports 24 language analyzers. Currently, about 3,500 rules are implemented across all SonarAnalyzers.

More than half of SonarSource developers work on SonarAnalyzers. Thanks to the efforts of our SonarAnalyzer developers, there are new SonarAnalyzer versions nearly every week.

Particular focus is currently placed on the Java, JavaScript, C#, and C/C++ plugins. The target is to deliver a new version of each one every month, and each delivery embeds new rules.

In 2015, we delivered a total of 61 new SonarAnalyzer releases, and so far this year, another 30 versions have been released.


What it means for you


You can easily benefit from the regular delivery of SonarAnalyzers. Each release provides analyzer enhancements and new rules. But you don’t need to upgrade SonarQube to upgrade your analysis; as a rule, new releases of each analyzer are compatible with the latest LTS.

When you update a SonarAnalyzer, the static analysis engine is replaced and new rules are made available. But at this step, you’re not yet benefiting from those new rules. During the update of your SonarAnalyzer, the quality profile remains unchanged: the rules executed during the analysis are the same ones you previously configured in your quality profile. This means that if you want to benefit from new rules, you must update your quality profile to add them.
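That behavior can be modeled in a few lines. This is a conceptual sketch only, not SonarQube’s actual data model, and the rule keys are made up:

```python
# Conceptual sketch: an analyzer update grows the set of *available* rules,
# but the *active* rules in a quality profile stay exactly as configured.
available_rules = {"S100", "S101", "S107"}      # rules the analyzer ships
quality_profile = {"S100", "S107"}              # rules you activated earlier

# An analyzer update ships two new rules...
available_rules |= {"S2095", "S2259"}

# ...but the next analysis still executes only the profile's rules.
rules_executed = quality_profile & available_rules
assert rules_executed == {"S100", "S107"}       # new rules are not yet active

# To benefit from the new rules, activate them in the profile explicitly.
quality_profile |= {"S2095", "S2259"}
assert "S2095" in quality_profile
```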

Categories: Open Source

Sonar ecosystem upgrades to Java 8

Sonar - Tue, 06/14/2016 - 17:55

With the release of SonarQube version 5.6, the entire Sonar ecosystem will drop support for Java 7. This means you won’t be able to run new versions of the SonarQube server, execute an analysis, or use SonarLint with a JVM < 8.

Why? Well, it’s been over two years since Java 8’s initial release, and a year since Oracle stopped supporting Java 7, so we figured it was time for us to stop too. Doing so allows us to simplify our development processes and begin using the spiffy new features in Java 8. Plus, performance is up to 20% better with Java 8!

Of course, we’ll still support running older versions of ecosystem products, e.g. SonarQube 4.5, with Java 7, and you’ll still be able to compile your project with a lower version of Java. You’ll just have to bump up the JVM version to run the analysis.
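Before upgrading, it can help to confirm which JVM the analysis will actually run on. A minimal sketch of parsing the `java -version` banner (the examples use the pre-Java 9 `1.x` numbering scheme; treat the helper as illustrative only):

```python
import re

def java_major_version(banner):
    """Extract the major version from a `java -version` banner line,
    e.g. 'java version "1.8.0_92"' -> 8, 'java version "9.0.1"' -> 9."""
    m = re.search(r'version "(\d+)\.(\d+)', banner)
    if not m:
        raise ValueError("unrecognized banner: " + banner)
    first, second = int(m.group(1)), int(m.group(2))
    return second if first == 1 else first

assert java_major_version('java version "1.7.0_80"') < 8   # too old for SonarQube 5.6
assert java_major_version('java version "1.8.0_92"') >= 8  # good to go
```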

Categories: Open Source

Version 7 Beta 2

IceScrum - Mon, 06/13/2016 - 19:49
A week ago, we were glad to publish the first Beta of the version that embodies the future of iceScrum! iceScrum Version 7 Beta If you have not heard about it yet, you can read the blog post named “A bright future for iceScrum”. First, we would like to thank our early users for their…
Categories: Open Source

Agile is not magic

IceScrum - Thu, 06/09/2016 - 12:30
After years helping companies to adapt their practices and change their mindset toward agile values, a comment we hear very often is “Ok, we know that we don’t follow the recommendations… But our context is quite singular and Agile/Scrum is meant to adapt to everything, right? Thus, we adapt it to our context.” Unfortunately, it…
Categories: Open Source

SonarQube 5.6 (LTS) in Screenshots:

Sonar - Wed, 06/08/2016 - 13:45

The wait is over! The new SonarQube Long Term Support (LTS) version is out, and it’s packed with new features to help you better manage your technical debt and operational security. It has been over a year and a half since the last Long Term Support (LTS) version was announced – a very busy year and a half. In that time, we’ve pursued three main themes:

  • Fixing the Leak
  • Adding More for Developers
  • Increasing Scalability and Security
Fixing the Leak

The Water Leak concept says you should fix new issues before bothering with old ones. After all, an issue in two-year-old code has been tested by time. It’s the one you added yesterday that should be fixed immediately – while the code is still fresh in your mind.

To that end, we’ve added a number of features to keep you focused on the leak. The first is a new, fixed project home page which puts the leak front and center (okay, front and right) by highlighting the metrics on new code:

And just to make sure it doesn’t slip from view, we’ve updated the default quality gate to focus on new code as well:

Of course, it’s best of all if new problems never hit the code base. In an effort to shorten the cycle, we also added the ability to analyze pull requests. Now you no longer need to wait for your code to hit the SonarQube server to see what you need to fix. Instead, you can see new issues as comments on your GitHub pull request (PR):

This is enabled as a GitHub status check, so analysis is automatic with each new push to the PR and you get a tidy summary in the check list:
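Conceptually, PR analysis boils down to intersecting the issues found with the lines the pull request actually changed. A simplified sketch of that filtering (not the GitHub plugin’s real implementation; file names and messages are made up):

```python
def issues_for_pr(issues, changed_lines):
    """Keep only issues that fall on lines the PR touched.

    issues: list of (file, line, message) tuples
    changed_lines: dict mapping file -> set of changed line numbers
    """
    return [i for i in issues if i[1] in changed_lines.get(i[0], set())]

issues = [
    ("src/App.java", 12, "Remove this unused variable."),
    ("src/App.java", 80, "Refactor this method."),   # pre-existing code, not in the diff
]
changed = {"src/App.java": {10, 11, 12}}

print(issues_for_pr(issues, changed))
# -> [('src/App.java', 12, 'Remove this unused variable.')]
```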

Adding More for Developers

As a company of developers, and our own first users and harshest critics, we’re always focused on making the platform more usable for developers. It should come as no surprise then, that there’s a lot for developers in this version!

I’ll start with the SonarQube Quality Model, which is an easy-to-understand, actionable model that takes the best from SQALE and adds what was missing. It draws bugs and security vulnerabilities out of the mass of maintainability issues to clearly highlight project risk, while retaining the calculation of technical debt.

Click through on any of these issue counts, and you land at the new issues page, which is available at both global and project levels. It features an easy-to-use search, totals by either count or technical debt, and super-easy keyboard (or mouse!) navigation:

On that issues page, you may notice the next developer-centric feature: precise issue location. Now we can highlight exactly, and only the portion(s) of a line relevant to the issue:

Last but not least on the topic of Issue improvements is False Positive’s long-awaited sister: Won’t Fix:

We’ve also reworked the presentation of Metric details. The old drilldowns have been replaced by a new project Measures space, which offers a general overview:

A domain view:

A treemap, a list of files, a component tree, and of course a file listing:

Increasing Scalability and Security

Even though SonarSource is a developer-centric company, we didn’t forget devops. In fact, this new LTS makes great strides in that area.

The most significant change is that analyzers no longer talk to the database. This means you don’t have to hand out your DB credentials to every Joe who wants to run an analysis. Instead, scanners talk only to the web server, and the server takes it from there.

“But wait,” you’re thinking, “you still have to pass around the user credentials to submit an analysis.”

No you don’t. We’ve added the ability to generate user tokens, so you can run an analysis without exposing your password (or user name!).
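Under the hood, a token follows the usual HTTP Basic Auth convention for SonarQube tokens: the token plays the role of the username and the password is left empty. A sketch of how a client would build that header (the token value below is made up):

```python
import base64

def token_auth_header(token):
    """Build the HTTP Basic Auth header for a user token:
    the token is the username and the password is empty."""
    raw = (token + ":").encode("ascii")
    return "Basic " + base64.b64encode(raw).decode("ascii")

# Hypothetical token value, for illustration only.
print(token_auth_header("ab12cd34"))   # -> Basic YWIxMmNkMzQ6
```

On the scanner side, the same token is simply passed in place of your login when launching an analysis.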

Also Worth Noting

While it shouldn’t be major news, it’s also worth noting that the new LTS drops support for Java 7. It’s Java 8+ from here on out. Among other things, the change should make your SonarQube server even faster than before!

That’s all, Folks!

It’s time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

If you’ve already worked with the 5.x series, few of these things will come as a surprise. If you’re still on the previous LTS, you should fasten your seat belt. It’s gonna blow your socks off!

Categories: Open Source

Bugs and Vulnerabilities are 1st Class Citizens in SonarQube Quality Model along with Code Smells

Sonar - Thu, 06/02/2016 - 12:46

In SonarQube 5.5 we adopted an evolved quality model, the SonarQube Quality Model, that takes the best from SQALE and adds what was missing. In doing so, we’ve highlighted project risks while retaining technical debt.

Why? Well, SQALE is good as far as it goes, but it’s primarily about maintainability, with no concept of risk. For instance, if a new, blocker security issue cropped up in your application tomorrow, under a strict adherence to the SQALE methodology you’d have to ignore it until you fixed all the Testability, Reliability, Changeability, etc. issues. When in reality, new issues (i.e. leak period issues) of any type are more important than time-tested ones, and new bugs and security vulnerabilities are the most important of all.

Further, SQALE is primarily about maintainability, but the SQALE quality model also encompasses bugs and vulnerabilities. So those important issues get lost in the crowd. The result is that a project can have blocker-level bugs, but still get an A SQALE rating. For us, that was kinda like seeing a green light at the intersection while cross-traffic is still flowing. Yes, it’s recoverable if you’re paying attention, but still dangerous.

So for the SonarQube Quality Model, we took a step back to re-evaluate what’s important. For us it was these things:

  1. The quality model should be dead simple to use
  2. Bugs and security vulnerabilities shouldn’t be lost in the crowd of maintainability issues
  3. The presence of serious bugs or vulnerabilities in a project should raise a red flag
  4. Maintainability issues are still important and shouldn’t be ignored
  5. The calculation of remediation cost (the use of the SQALE analysis model) is still important and should still be done

To meet those criteria, we started by pulling Reliability and Security issues (bugs and vulnerabilities) out into their own categories. They’ll never be lost in the crowd again. Then we consolidated what was left into Maintainability issues, a.k.a. code smells. Now there are three simple categories, and prioritization is easy.

We gave bugs and vulnerabilities their own risk-based ratings, so the presence of a serious Security or Reliability issue in a project will raise that red flag we wanted. Then we renamed the SQALE rating to the Maintainability rating. It’s calculated based on the SQALE analysis model (technical debt) the same way it always was, except that it no longer includes the remediation time for bugs and vulnerabilities:
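As an illustration, the Maintainability rating can be derived from the technical-debt ratio: remediation cost divided by an estimated cost to develop the code from scratch. The thresholds below mirror what I understand to be SonarQube’s default rating grid, but treat the exact cut-offs as assumptions, since they are configurable on the server:

```python
def maintainability_rating(remediation_cost_min, dev_cost_min):
    """Rate maintainability from the technical-debt ratio.

    Thresholds (5%, 10%, 20%, 50%) are assumed defaults; they can be
    reconfigured on the SonarQube server.
    """
    ratio = remediation_cost_min / dev_cost_min
    for rating, ceiling in (("A", 0.05), ("B", 0.10), ("C", 0.20), ("D", 0.50)):
        if ratio <= ceiling:
            return rating
    return "E"

assert maintainability_rating(40, 1000) == "A"    # 4% debt ratio
assert maintainability_rating(300, 1000) == "D"   # 30% debt ratio
```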

To help enforce the new quality model, we updated the default Quality Gate:

  • 0 New Bugs
  • 0 New Vulnerabilities
  • New Code Maintainability rating = A
  • Coverage on New Code >= 80%

The end result is an understandable, actionable quality model you can master out of the box; quality model 2.0, if you will. Because managing code quality should be fun and simple.

Categories: Open Source

SonarLint 2.0 Is Now Available

Sonar - Wed, 05/25/2016 - 15:25

SonarLint is a pretty recent product that we released for the first time a few months ago for Eclipse, IntelliJ and Visual Studio. We have recently released version 2.0, which brings the ability to connect SonarLint to a SonarQube server and was eagerly awaited by the community. I think the addition of this new feature is a good chance to recap SonarLint’s features. But before I do, let me remind you of SonarLint’s mission: to help developers spot as many coding issues as possible in their IDE, while they code. It has to be instant, integrated into the IDE, and valuable.

Since SonarLint 1.0, you can install the product from the marketplace of each of the 3 IDEs we currently support: Eclipse Marketplace, JetBrains Plugin Repository or Visual Studio Gallery. Et voilà… You can continue coding as usual and you will start seeing SonarLint issues reported as you type. If you open a file, it will get decorated immediately with issues.

You also benefit from a nice panel containing a list of issues that have been detected. Each issue comes with a short message and if that is not enough you can open a more detailed description of the problem, with code snippets and references to well known coding standards.

As I am sure you have guessed already, all of this does not require any configuration. And this is actually the reason why version 2.0 was so eagerly awaited: people who have defined their quality profile in SonarQube want to be able to use the same profile in SonarLint. This is the main feature provided by SonarLint 2.0.

In order to have SonarLint use the same quality profile as SonarQube you have to bind your project in your IDE to the remote project in SonarQube. This is done in two steps:

  • Configure a connection to your SonarQube server (URL + credentials)
  • Bind your project with the remote one

Et voilà… again… SonarLint will fetch configuration from the SonarQube server and use it when inspecting code.

That’s it for today!

Categories: Open Source

SonarQube 5.5 in Screenshots

Sonar - Thu, 05/19/2016 - 14:37

The team is proud to announce the release of 5.5, which features simplified concepts for easier triage and management of issues:

  • New SonarQube Quality Model
  • New Measures project page
  • Increased vertical scalability, performance, and stability

New SonarQube Quality Model

With 5.5 we’ve boiled quality down to its essentials to make it easier to understand and work with. Now there are Reliability Issues, Vulnerability Issues and Maintainability Issues. Or, in plainer talk: bugs, vulnerabilities, and code smells. By drawing bugs and vulnerabilities out of the mass of issues, any operational risks in your projects are highlighted for expedited handling.

You’ll see the change wherever there are issues and technical debt, including the Rules and Issues domains:

We’ve also updated the default Quality Gate to match the importance of these newly isolated concepts:

New Measures project page

Also new in this version is a project-level Measures interface, which replaces the old metric drilldown. Click a metric from the project home page and you land at the new per-file metric value listing:

The Tree view offers a compact, per-directory aggregation, and the Treemap gives the familiar, colorful overview:

The metric domain page also offers a listing of metrics and project-level values, with a graphical presentation:

Increased vertical scalability, performance, and stability

The Compute Engine has been moved into a separate process, and the number of worker threads has been made configurable (SONARQUBE_HOME/conf/sonar.properties). As a result, users will no longer have to deal with a sluggish interface while analysis reports are being processed, and the memory requirements for report processing can be tuned separately from those of the web application:
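For reference, the relevant settings live in `SONARQUBE_HOME/conf/sonar.properties`. The property names below are assumptions based on versions of that era, so verify them against the commented defaults shipped with your SonarQube before relying on them:

```properties
# Compute Engine tuning in conf/sonar.properties (names are assumptions --
# check the commented defaults in your own sonar.properties).
sonar.ce.javaOpts=-Xmx1G -Xms128m
sonar.ce.workerCount=2
```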

That’s All, Folks!

Time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide first! Even if you’ve read it before, it may be worth taking another look at the upgrade guide; we changed it recently.

Categories: Open Source

What’s New in SonarEcosystem – April 2016

Sonar - Wed, 05/11/2016 - 10:52

SonarQube JavaScript 2.12 Released: symbolic execution and full support of @reactjs JSX https://t.co/yGPSamFqGH pic.twitter.com/JSEEhjfanC

— SonarQube (@SonarQube) April 21, 2016

SonarQube C/C++ 3.11 Released: with new path-sensitive data-flow analysis engine for C. See https://t.co/mycs42eEKJ pic.twitter.com/AMXks9OGwU

— SonarQube (@SonarQube) April 13, 2016

SonarQube Java 3.13 Released: 7 new rules and numerous improvements, see https://t.co/3bMZCU5Fi5 #java pic.twitter.com/EzeBuepZq4

— SonarQube (@SonarQube) April 13, 2016

New @SonarQube JavaScript rule just caught a bug in @Ionicframework https://t.co/FzRLcaaCkg pic.twitter.com/hsVdMfVzKM

— Elena Vilchik (@vilchik_elena) April 11, 2016

Categories: Open Source

The best bug tracker is no bug tracker

IceScrum - Tue, 04/19/2016 - 16:08
We are often told that iceScrum doesn’t do much to help manage bugs / defects, backed by the evidence that it doesn’t provide a separate bug system with its own workflow and that defects are seldom mentioned in the user interface. Actually, this is not a missing feature, it is a deliberate one! Well,…
Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. Working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–”I think I can move this field over there, but I’m not sure”–and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}
Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway, then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.

 

===============================================

I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
