
Open Source

Version 7 Beta 5

IceScrum - 19 hours 27 min ago
7.0.0-beta.5 A new iceScrum 7 beta is here: 7.0.0-beta.5. As planned, this version comes with Source Code Management and Continuous Integration (Git, SVN, Jenkins…). For the moment, they work like before (here is the old documentation for those who don’t know about these integrations: Git & SVN – Jenkins). That means that you can know…
Categories: Open Source

Version 7 Beta 4

IceScrum - Wed, 07/13/2016 - 16:40
7.0.0-beta.4 The fourth version of iceScrum 7 beta is now online: 7.0.0-beta.4. Improvements: Velocity, capacity & remaining time in Task board Medium post-its by default in backlogs and features view Bug fixes: Loading logo has no color on IE11 On IE11, when displaying details panel, details info are not displayed and post-it layout doesn’t adjust…
Categories: Open Source

Version 7 Beta 3

IceScrum - Wed, 07/06/2016 - 10:29
7.0.0-beta.3 Here comes a third version of iceScrum 7 beta: 7.0.0-beta.3. Unlike previous versions, this one doesn’t bring many visible changes. However, we worked a lot under the hood to provide more consistency and stability. Here are the main improvements and bug fixes: Improvements: Big overhaul of error management New order in story and task…
Categories: Open Source

The Tweets You Missed in June

Sonar - Wed, 07/06/2016 - 09:52

Here are the tweets you likely missed last month!

SonarQube 5.6 LTS is available! Enjoy!

— SonarQube (@SonarQube) June 8, 2016

Governance 1.0 gives a dedicated dashboard to get the big picture of application portfolios

— SonarQube (@SonarQube) June 14, 2016

Issue found in #git source code by the C rule in charge to detect unconditionally true/false conditions

— SonarQube (@SonarQube) June 15, 2016

Categories: Open Source

JavaScript Plugin Finds Tricky Bugs, Thanks to Execution Flow

Sonar - Wed, 06/29/2016 - 09:17

Over the last few months, the SonarAnalyzer for JavaScript has made major advances in bug detection. Until recently, it only caught rather simple bugs, like function calls passing extra arguments, which didn’t really need more than a correct identification of symbols. Things changed a lot when we made the analyzer aware of execution flow: in other words, it is now able to determine the precise order of execution inside a JavaScript function and detect bugs based on it.
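To make the “extra arguments” pattern concrete, here is a minimal invented example (the function name and values are hypothetical, not taken from an analyzed project). A correct identification of symbols is enough to see that the call site passes more arguments than the function declares:

```javascript
// A function declared with exactly two parameters...
function area(width, height) {
  return width * height;
}

// ...but called with three arguments. The third argument is silently
// ignored at runtime, which usually reveals a misunderstanding of the API.
const a = area(3, 4, 5);
```

No execution-flow knowledge is needed here: matching the call against the declaration is enough to raise an issue.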

The latest rule we implemented based on execution flow aims at no less than detecting when a property is accessed on a value which might be null or undefined. In such a case, a TypeError may be thrown at runtime, and the application may crash. That rule catches obvious bugs in poor-quality code, but it can also find more subtle issues, such as when a value is sometimes tested for nullability and sometimes not. That’s the case in the following code in the React project:
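The React snippet itself isn’t reproduced here, but the inconsistent-nullability pattern looks like this minimal sketch (the function is invented for illustration):

```javascript
function render(node) {
  // "node" is tested here, so the analyzer learns it may be null...
  if (node === null) {
    console.log("nothing to render");
  }
  // ...yet this access is unguarded: on the path where node is null,
  // a TypeError is thrown at runtime.
  return node.name;
}
```

The null test on one path tells the analyzer that `node` may be null; the unguarded `node.name` access reachable on that same path is then flagged as a potential TypeError.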

Conditions which are either always true or always false represent another bug pattern for which we have a relatively new rule. Sometimes, such a condition is simply redundant with the rest of the code, as in the following example in the Closure Library:

(In JavaScript every value has an inherent boolean value and is therefore either “truthy” or “falsy”. The values false, null, undefined, NaN, 0, and the empty string are “falsy”. Everything else is “truthy”.)
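As an invented illustration of a redundant, always-true condition (not the actual Closure Library code):

```javascript
function label(value) {
  if (!value) {
    return "empty";
  }
  // Every falsy value has already returned above, so "value" is
  // necessarily truthy here: this condition is always true, and the
  // analyzer flags it as redundant.
  if (value) {
    return "has value: " + value;
  }
}
```

Here the redundancy is harmless; the rule gets interesting when the always-false branch hides a block that can never execute.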

In other cases, a condition which is always true or false may be the visible part of a real bug, especially when it means that a full block of code will never be executed. Here’s an example from the Ionic framework which looks like a serious bug:

Detecting dead stores is another rule we added recently that is based on execution flow. A dead store is a useless assignment to a variable: the variable that’s assigned is never read after the assignment. Most often, this is not a bug, just useless and potentially confusing code. However, it’s so common that thousands of dead stores can be found in open-source projects. Here’s a very simple example in the AngularJS project:
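The AngularJS example isn’t reproduced here; a minimal invented dead store looks like this:

```javascript
function totalPrice(items) {
  let total = 0; // dead store: this 0 is never read before...
  // ...the variable is overwritten here, on every path.
  total = items.reduce(function (sum, item) { return sum + item.price; }, 0);
  return total;
}
```

The code works, but the initializer is useless: no path reads `total` between the first assignment and the second.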

Now, we’ve said all of these rules are based on execution flow, but some curious readers may ask: how do we describe execution flow? Except in the simplest cases, execution flow is rarely linear. As soon as a piece of code contains an if statement, its execution flow has to be described with alternative branches: either the condition of the if statement is true and we execute the associated block, or it’s false and we don’t execute the block. Moreover, thanks to loops, execution flow may go back to a point which was already executed. In the end, in order to represent all the possible paths, we use a graph structure which is known as a control flow graph (CFG).
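As a sketch of how a CFG decomposes a function (the function below is invented for illustration), each basic block and edge is noted in a comment:

```javascript
function sumTo(n) {
  let total = 0;                  // block A: entry
  if (n < 0) {                    // two outgoing CFG edges: true and false
    return 0;                     // block B: true branch, exits early
  }
  for (let i = 1; i <= n; i++) {  // block C: loop header, target of the back edge
    total += i;                   // block D: loop body, edge back to C
  }
  return total;                   // block E: exit
}
```

The if statement splits the graph into two branches, and the loop introduces a cycle (the back edge from D to C), which is exactly why a graph, not a list, is needed to represent all possible paths.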

Based on a control flow graph, it’s rather easy to identify dead stores by checking all paths which come out of an assignment. However, a CFG is certainly not enough to detect potential TypeErrors or conditions which are always true or false. To do that, we need symbolic execution. That is, we need to track the values which are referenced by variables. We can walk through the CFG and evaluate some parts of the code:

  • Based on assignments, we may know the precise value of a variable at a given point.
  • Based on conditions in if statements or loops, we may know which constraints are met by the value of a variable inside a given block.

Running symbolic execution means that we explore the possible execution paths based on the CFG and the possible constraints on variables.

  • When looking for possible TypeErrors, we raise an issue as soon as one of the execution paths leads to a property access on a value which is constrained to null or undefined.
  • When looking for conditions which are always true or false, we have to check all of the execution paths which go through a condition.
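Putting the two together, here is an invented sketch of how constraint tracking along CFG paths exposes a potential TypeError:

```javascript
function firstName(user) {
  let person = user.person;   // person: no constraint yet
  if (person == null) {
    person = undefined;       // on this path, person is constrained to undefined
  }
  // Symbolic execution explores both paths; on the path coming out of the
  // if block, person is undefined, so this property access is flagged
  // as a potential TypeError.
  return person.name;
}
```

On the other path, `person` is constrained to be neither null nor undefined, so the access is safe there; the issue is raised because at least one explored path leads to the dangerous dereference.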

Our symbolic execution engine is still in its early stages and can only evaluate simple constructs right now, but the current results of these new rules look very promising to us. As we improve our engine, the rules which are based on it will get more accurate. We’ve gotten this far by following the lead of the SonarAnalyzer for Java, which overshadowed FindBugs, turning it from a great tool to a great tool of the past. We hope we can bring as much value to JavaScript developers.

However, following Java’s lead is only part of the story. Because of the dynamic nature of JavaScript, symbolic execution is more crucial than for other languages like Java. For example, the type of a variable may not be the same for all branches of a given piece of code: many rules will therefore be improved as soon as we start to infer types based on symbolic execution. We have a lot to do, so stay tuned!

Categories: Open Source

Language Plugins Rock SonarQube Life!

Sonar - Thu, 06/23/2016 - 13:43

SonarAnalyzers are fundamental pillars of our ecosystem. The language analyzers play a central role, but the value they bring isn’t always obvious. The aim of this post is to highlight the ins and outs of SonarAnalyzers.

The basics

The goal of the SonarAnalyzers (packaged either as SonarQube plugins or in SonarLint) is to raise issues on problems detected in source code written in a given programming language. The detection of issues relies on the static analysis of source code and the analyzer’s rule implementations. Each programming language requires a specific SonarAnalyzer implementation.

The analyzer

The SonarAnalyzer’s static analysis engine is at the core of source code interpretation. The scope of the analysis engine is quite large. It goes from basic syntax parsing to the advanced determination of the potential states of a piece of code. At minimum, it provides the bare features required for the analysis: basic recognition of the language’s syntax. The better the analyzer is, the more advanced its analysis can be, and the trickier the bugs it can find.

Driven by the will to perform more and more advanced analyses, we continuously improve the analyzers. Each new ambition in terms of validation requires sustained development effort on the SonarAnalyzers. In addition, the analyzers need regular updates simply to keep up with each programming language’s evolution.

The rules

A rule’s genesis is the writing of its specification. The specification of each rule is an important step. The description should be clear and unequivocal in order to be explicit about what issue is being detected. Not only must the description of the rule be clear and accurate, but code snippets must also be supplied to demonstrate both the bad practice and its fix. The specification is available from each issue raised by the rule to help users understand why the issue was raised.

Rules also have tags. The issues raised by a rule inherit the rule’s tags, so that both rules and issues are more searchable in SonarQube.

Once the specification of a rule is complete, next comes the implementation. Based on the capabilities offered by the analyzer, rule implementations detect increasingly tricky patterns of maintainability issues, bugs, and security vulnerabilities.

Continuous Improvement

By default, SonarQube ships with three SonarAnalyzers: Java, PHP, and JavaScript.
The analysis of other languages can be enabled by the installation of additional SonarAnalyzer plugins.

The SonarQube community officially supports 24 language analyzers. Currently, about 3,500 rules are implemented across all SonarAnalyzers.

More than half of SonarSource developers work on SonarAnalyzers. Thanks to the efforts of our SonarAnalyzer developers, there are new SonarAnalyzer versions nearly every week.

Particular focus is currently placed on the Java, JavaScript, C#, and C/C++ plugins. The target is to deliver a new version of each one every month, and each delivery embeds new rules.

In 2015, we delivered a total of 61 new SonarAnalyzer releases, and so far this year, another 30 versions have been released.

What it means for you

You can easily benefit from the regular delivery of SonarAnalyzers. At each release, analyzer enhancements and new rules are provided. But you don’t need to upgrade SonarQube to upgrade your analysis: as a rule, new releases of each analyzer are compatible with the latest LTS.

When you update a SonarAnalyzer, the static analysis engine is replaced and new rules are made available. But at this step, you’re not yet benefiting from those new rules. During the update of your SonarAnalyzer, the quality profile remains unchanged. The rules executed during the analysis are the same ones you previously configured in your quality profile.
This means that if you want to benefit from new rules, you must update your quality profile to add them.

Categories: Open Source

Sonar ecosystem upgrades to Java 8

Sonar - Tue, 06/14/2016 - 17:55

With the release of SonarQube version 5.6, the entire Sonar ecosystem will drop support for Java 7. This means you won’t be able to run new versions of the SonarQube server, execute an analysis, or use SonarLint with a JVM < 8.

Why? Well, it’s been over two years since Java 8’s initial release, and a year since Oracle stopped supporting Java 7, so we figured it was time for us to stop too. Doing so allows us to simplify our development processes and begin using the spiffy new features in Java 8. Plus, performance is up to 20% better with Java 8!

Of course, we’ll still support running older versions of ecosystem products, e.g. SonarQube 4.5, with Java 7, and you’ll still be able to compile your project with a lower version of Java. You’ll just have to bump up the JVM version to run the analysis.

Categories: Open Source

Version 7 Beta 2

IceScrum - Mon, 06/13/2016 - 19:49
A week ago, we were glad to publish the first Beta of the version that embodies the future of iceScrum! iceScrum Version 7 Beta If you did not hear about it yet, you can read the blog post named “A bright future for iceScrum”. First, we would like to thank our early users for their…
Categories: Open Source

Agile is not magic

IceScrum - Thu, 06/09/2016 - 12:30
After years helping companies to adapt their practices and change their mindset toward agile values, a comment we hear very often is “Ok, we know that we don’t follow the recommendations… But our context is quite singular and Agile/Scrum is meant to adapt to everything, right? Thus, we adapt it to our context.”. Unfortunately, it…
Categories: Open Source

SonarQube 5.6 (LTS) in Screenshots:

Sonar - Wed, 06/08/2016 - 13:45

The wait is over! The new SonarQube Long Term Support (LTS) version is out, and it’s packed with new features to help you better manage your technical debt and operational security. It has been over a year and a half since the last Long Term Support (LTS) version was announced – a very busy year and a half. In that time, we’ve pursued three main themes:

  • Fixing the Leak
  • Adding More for Developers
  • Increasing Scalability and Security
Fixing the Leak

The Water Leak concept says you should fix new issues before bothering with old ones. After all, an issue in two-year-old code has been tested by time. It’s the one you added yesterday that should be fixed immediately, while the code is still fresh in your mind.

To that end, we’ve added a number of features to keep you focused on the leak. The first is a new, fixed project home page which puts the leak front and center (okay, front and right) by highlighting the metrics on new code:

And just to make sure it doesn’t slip from view, we’ve updated the default quality gate to focus on new code as well:

Of course, it’s best of all if new problems never hit the code base. In an effort to shorten the cycle we also added the ability to analyze pull requests. Now you no longer need to wait for your code to hit the SonarQube server to see what you need to fix. Instead, you can see new issues as comments on your GitHub pull request (PR):

This is enabled as a GitHub status check, so analysis is automatic with each new push to the PR and you get a tidy summary in the check list:

Adding More for Developers

As a company of developers, and our own first users and harshest critics, we’re always focused on making the platform more usable for developers. It should come as no surprise then, that there’s a lot for developers in this version!

I’ll start with the SonarQube Quality Model, which is an easy-to-understand, actionable model that takes the best from SQALE and adds what was missing. It draws bugs and security vulnerabilities out of the mass of maintainability issues to clearly highlight project risk, while retaining the calculation of technical debt.

Click through on any of these issue counts, and you land at the new issues page, which is available at both global and project levels. It features an easy-to-use search, totals by either count or technical debt, and super-easy keyboard (or mouse!) navigation:

On that issues page, you may notice the next developer-centric feature: precise issue location. Now we can highlight exactly, and only, the portion(s) of a line relevant to the issue:

Last but not least on the topic of Issue improvements is False Positive’s long-awaited sister: Won’t Fix:

We’ve also reworked the presentation of Metric details. The old drilldowns have been replaced by a new project Measures space, which offers a general overview:

A domain view:

A treemap, a list of files, a component tree, and of course a file listing.

Increasing Scalability and Security

Even though SonarSource is a developer-centric company, we didn’t forget devops. In fact, this new LTS makes great strides in that area.

The most significant change is that analyzers no longer talk to the database. This means you don’t have to hand out your DB credentials to every Joe who wants to run an analysis. Instead, scanners talk only to the web server, and the server takes it from there.

“But wait,” you’re thinking, “you still have to pass around the user credentials to submit an analysis.”

No you don’t. We’ve added the ability to generate user tokens, so you can run an analysis without exposing your password (or user name!).

Also Worth Noting

While it shouldn’t be major news, it’s also worth noting that the new LTS drops support for Java 7. It’s Java 8+ from here on out. Among other things, the change should make your SonarQube server even faster than before!

That’s all, Folks!

It’s time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

If you’ve already worked with the 5.x series, few of these things will come as a surprise. If you’re still on the previous LTS, you should fasten your seat belt. It’s gonna blow your socks off!

Categories: Open Source

Bugs and Vulnerabilities are 1st Class Citizens in SonarQube Quality Model along with Code Smells

Sonar - Thu, 06/02/2016 - 12:46

In SonarQube 5.5 we adopted an evolved quality model, the SonarQube Quality Model, that takes the best from SQALE and adds what was missing. In doing so, we’ve highlighted project risks while retaining technical debt.

Why? Well, SQALE is good as far as it goes, but it’s primarily about maintainability, with no concept of risk. For instance, if a new, blocker security issue cropped up in your application tomorrow, under a strict adherence to the SQALE methodology you’d have to ignore it until you fixed all the Testability, Reliability, Changeability, etc. issues. In reality, new issues (i.e. leak-period issues) of any type are more important than time-tested ones, and new bugs and security vulnerabilities are the most important of all.

Further, SQALE is primarily about maintainability, but the SQALE quality model also encompasses bugs and vulnerabilities. So those important issues get lost in the crowd. The result is that a project can have blocker-level bugs, but still get an A SQALE rating. For us, that was kinda like seeing a green light at the intersection while cross-traffic is still flowing. Yes, it’s recoverable if you’re paying attention, but still dangerous.

So for the SonarQube Quality Model, we took a step back to re-evaluate what’s important. For us it was these things:

  1. The quality model should be dead simple to use
  2. Bugs and security vulnerabilities shouldn’t be lost in the crowd of maintainability issues
  3. The presence of serious bugs or vulnerabilities in a project should raise a red flag
  4. Maintainability issues are still important and shouldn’t be ignored
  5. The calculation of remediation cost (the use of the SQALE analysis model) is still important and should still be done

To meet those criteria, we started by pulling Reliability and Security issues (bugs and vulnerabilities) out into their own categories. They’ll never be lost in the crowd again. Then we consolidated what was left into Maintainability issues, a.k.a. code smells. Now there are three simple categories, and prioritization is easy.

We gave bugs and vulnerabilities their own risk-based ratings, so the presence of a serious Security or Reliability issue in a project will raise that red flag we wanted. Then we renamed the SQALE rating to the Maintainability rating. It’s calculated based on the SQALE analysis model (technical debt) the same way it always was, except that it no longer includes the remediation time for bugs and vulnerabilities:
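As a rough illustration, a maintainability rating of this kind can be derived from the technical debt ratio (remediation cost divided by the estimated cost to develop the code). The thresholds below are a plausible sketch, not necessarily the exact defaults of your SonarQube version:

```javascript
// Hypothetical sketch: map a technical debt ratio (0.0 to 1.0)
// to a letter rating. The cut-off values are illustrative.
function maintainabilityRating(debtRatio) {
  if (debtRatio <= 0.05) return "A";
  if (debtRatio <= 0.10) return "B";
  if (debtRatio <= 0.20) return "C";
  if (debtRatio <= 0.50) return "D";
  return "E";
}
```

The key point is that the rating is relative to project size: a fixed amount of debt weighs less in a large codebase than in a small one.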

To help enforce the new quality model, we updated the default Quality Gate:

  • 0 New Bugs
  • 0 New Vulnerabilities
  • New Code Maintainability rating = A
  • Coverage on New Code >= 80%

The end result is an understandable, actionable quality model you can master out of the box; quality model 2.0, if you will. Because managing code quality should be fun and simple.

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession: how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain (“I think I can move this field over there, but I’m not sure”) and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


if (...choice 1 valid...) {
  // ...add widget 1...
}
if (...choice 2 valid...) {
  // ...add widget 2...
}


$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each) {
  if ($each->isValid()) {
    // ...add $each->widget()...
  }
}

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.


Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
