Open Source

Version R6#14.13

IceScrum - Fri, 05/12/2017 - 17:58
R6#14.13 Here is the new, and probably last, version of iceScrum R6. Its only purpose is to provide iceScrum Standalone R6 users with the ability to migrate their projects to iceScrum v7. If you use iceScrum Cloud, you will have to wait a little longer before the migration is available, but don’t worry, it’s our…
Categories: Open Source

Accelerate Product Development at SonarSource

Sonar - Wed, 05/10/2017 - 16:53

We founded SonarSource 8 years ago with a dream to one day provide every developer with the ability to measure the code quality of their projects. And we had a motto for this: “Democratize access to code quality tooling”. To make this dream come true, we invested all our time and energy into developing the SonarQube platform, hiring a great team, and building an open source business model to sustain the company’s growth and keep our freedom. We have also invested a lot in the relationship with our community, giving a lot and also getting back a lot.

Thanks to this approach, here are some examples of what we were able to deliver in the last few years:

  • on-the-fly feedback in the IDE with SonarLint
  • analysis of 18 languages
  • deep analysis to cover the reliability and security domains
  • high availability and multi-tenancy of the platform, in preparation for the upcoming launch of sonarqube.com

After 8 years of effort, we believe we have built great products along with an awesome 60-person company, a solid business and a great brand. We are very proud of these, but we do not think our dream has come true yet. Why? Because Continuous Code Quality still isn’t a commodity the way SCM, Continuous Integration, and artifact management are: every developer should benefit from the power of a path-sensitive, context-sensitive data flow analysis engine to detect the nastiest bugs and subtlest security vulnerabilities. sonarqube.com should be a no-brainer for anyone who uses GitHub.com, VSTS, Travis CI… In other words, everyone writing code should want to benefit from the best analyzers to make sure each line produced is secure, reliable and maintainable.

To take up this challenge, we have chosen to partner with Insight Venture Partners, one of the very best VCs in our domain. By leveraging their experience, we strongly believe we will make our dream come true… way sooner than in another 8 years!

Simon, Freddy & Olivier
The SonarSource Founders

Categories: Open Source

The Tweets You Missed in April

Sonar - Fri, 05/05/2017 - 14:58

Here are the tweets you likely missed last month!

SonarQube 6.3 released: read the news and see it in screenshots! https://t.co/5SejxD5Gfl https://t.co/itdHN7tu6p pic.twitter.com/dMjp6Zg2aS

— SonarQube (@SonarQube) April 12, 2017

SonarCFamily 4.7 Released: 4 new rules and a dataflow engine supporting precise values for integer literals https://t.co/SUL1QVr0nu pic.twitter.com/eVY69DRcsE

— SonarQube (@SonarQube) April 12, 2017

SonarCOBOL 3.4 Released: 8 new rules https://t.co/s7M1JLUq04 #cobolForEver pic.twitter.com/QyYx0p0ChT

— SonarQube (@SonarQube) April 13, 2017

SonarLint for IntelliJ 2.9 shows paths across method calls that lead to an exception https://t.co/0S8Ch2Wylt pic.twitter.com/Yd2dmR7OdB

— SonarLint (@SonarLint) April 6, 2017

SonarLint for Eclipse 3.0 detects tricky issues on Java and JavaScript thanks to an extended dataflow analysis https://t.co/LQ8WnW6A3E pic.twitter.com/niRpICPOef

— SonarLint (@SonarLint) April 18, 2017

Categories: Open Source

SonarJS 3.0: Being Lean and Mean in JavaScript

Sonar - Mon, 05/01/2017 - 12:40

All through 2016, SonarJS became richer and more powerful thanks to new rules and its new data flow engine, to the point of being able to find pretty interesting stuff like this:

[Image: cool-issue-annotated]

That’s cool, isn’t it? Yet, there’s such a thing as being blinded by coolness and, as Pirelli was fond of saying, power is nothing without control. What good is pointing out a very nasty and hidden bug if you have long since stopped listening to what SonarJS has to tell you?

There are two main reasons a developer stops listening to the analyzer:

  1. The analyzer is noisy, stacking issue on top of issue because you insist on having more than one statement per line.
  2. The analyzer says something that is really dumb, so, the developer presumes, the analyzer is dumb. Life is too short to listen to dumb tools.

Unless we tackled both these points, we risked having our oh-so-powerful analyser perceived as a “the end is nigh” lunatic.

[Image: futurama-the-end-is-nigh]

Kill the noise

We don’t want to spam the developer with potentially true but ultimately irrelevant messages. But what is relevant?

We do want to provide value out of the box, so all SonarSource analysers provide a default rule set, called the “Sonar way” profile, which represents what it means for us to write good <insert language here> code. This means that we don’t have the luxury of saying “the users will set up the profile with the rules they prefer”; we have to take a stance on which rules are activated by default.

Guess what? Nobody knew that defining what is good JavaScript could be so complicated!

We thus embarked on a deep review of our default “Sonar way” profile to see if we could indeed find meaningful, useful common ground. We knew we needed an external point of view, and we were very lucky to find a very knowledgeable and critical one: Alexander Kamushkin.

Alexander worked with us for a month and he did an amazing job, if a somewhat painful one for us, of pointing out which rules provided the most value regardless of team culture and idioms, which could become idiom-neutral with some work, and which were by definition optional conventions.

After the first few rounds of discussion he put everything in what we have come to refer to as “Alexander’s Report”, of which this is a very small excerpt:

[Image: excerpt from Alexander’s Report]

Of course this was not the end of it; we kept refining these findings, prioritizing and adding more all through the development of SonarJS 3.0, and we have more in the pipeline for later on.

We improved dozens of rules, split rules to separate the unarguable bug-generating cases from maintenance-related cases, and added heuristics to kill false positives; almost no rule previously part of “Sonar way” was left untouched. We also further evolved the data flow engine itself to make sure it was not making assumptions that might lead rules to be overconfident in reporting an issue.

We now feel that the default profile of SonarJS 3.0 is a carefully trimmed set of high-value/low-noise rules useful in almost any JS development context.

We also created a new profile: “Sonar way Recommended”. This profile includes all the rules found in “Sonar way”, plus rules which we have evolved to be high-value/low-noise for JS developments that mandate high code readability and long-term evolution.

Things we learned

The issue is in the eye of the beholder

Take for instance one excellent rule: “Dead stores should be removed”. This rule says that if you assign a value to a variable and then assign a new value without ever using the previous one, you have probably made a mistake.

let x = 3;
x = 10;
alert("Value : " + x);

We can hardly be more confident that something is wrong here: you probably wanted to do something with that “3”. But what if you are in the habit of initialising all your number variables to 0, or all your strings to the empty string?

function bye(p) {
  let answer = "";
  switch(p) {
    case "HELLO" : answer = "GOODBYE";
      break;
    case "HI" : answer = "BYE";
      break;
    default : answer = "HASTA LA VISTA BABY";
  }
  return answer;
}

Do we want to raise a dead-store issue on that first initialisation? If we did, we could excuse ourselves by saying that it is indeed a dead store, but since the developer did it on purpose, the analyser is at best perceived as pedantic.

After all, when raising issues we are not addressing machines but human beings. We want them to read and care about these issues; we cannot hide behind technical correctness. We must be correct and also try to guess when something is done on purpose and when it is a genuine mistake.

It’s not a bug until it is

Before SonarJS 3.0, every issue we detected which could potentially lead to a bug was classified as a bug.

This came out of a coherent approach to issue criticality, and it does draw a clear distinction between potential mistakes and readability- or maintenance-related code smells.

Still, there’s something very alarming in getting a report saying that your project contains 1,542 bugs.

A classic example of this is NaN. If you don’t expect NaN to be a possible value, you can introduce some very nasty bugs because, let’s not forget, NaN == NaN is false!

Still, nothing might happen, because you are careful in other ways and as such playing with NaNs is at worst suspicious, not a bug.
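
To make the pitfall concrete, here is a minimal sketch of our own (not taken from the rule documentation):

let input = parseInt("not a number", 10); // parseInt yields NaN here
if (input == NaN) {          // always false: NaN is not equal to anything,
  console.log("invalid");    // not even to itself, so this branch is dead
}
if (Number.isNaN(input)) {   // the reliable way to test for NaN
  console.log("invalid");
}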

Also, we found out that, as the analyser improved, many potentially dangerous things could be resolved into being either certainly bugs or certainly not bugs. There’s no need to scream about an undefined variable if we can track its value and only raise an issue when you try to access a property on that undefined variable.
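
For instance, in a sketch like the following (isLoggedIn and getUser are hypothetical stubs of our own invention), the only spot worth reporting is the property access:

function isLoggedIn() { return Math.random() > 0.5; } // hypothetical stub
function getUser() { return { name: "Ada" }; }        // hypothetical stub

let user;               // user starts out undefined
if (isLoggedIn()) {
  user = getUser();
}
console.log(user.name); // TypeError if user is still undefined here:
                        // this access, not the declaration, is the bug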

If you can’t analyse it, don’t make assumptions

The data flow analysis engine is pretty good, but it still cannot analyse everything. We learnt that if we cannot follow the whole life of a variable’s value, we are better off assuming no knowledge than partial knowledge.

let x;
if(p) x = 2;
if(isPositive(x)) {
  return 10/x;
}

Should we warn you of a possible NaN? It depends on whether we were able to resolve the declaration of isPositive and go through its implementation. If we don’t know what happens within isPositive, then even if we know that x may be undefined, we can’t be sure that x can be undefined when 10/x is executed. To avoid raising an issue based on our partial understanding, it’s safer not to presume we know anything about x.
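
For instance, if the analyser could resolve isPositive to a hypothetical implementation like the one below, it would know the guarded division is safe and stay silent:

function isPositive(n) {
  // false for undefined (typeof undefined is "undefined"), so the
  // guarded 10/x can never run with an undefined x
  return typeof n === "number" && n > 0;
}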

And more

There would be much more to say, but for the time being suffice to know that SonarJS 3.0 inaugurates a focus on minimalistic usefulness, or, as Marko Ramius would say: we have engaged the silent drive.

Categories: Open Source

Breaking the SonarQube Analysis with Jenkins Pipelines

Sonar - Wed, 04/19/2017 - 15:14

One of the most requested features regarding SonarQube scanners is the ability to fail the build when quality is not at the expected level. We have this built-in concept of a quality gate in SonarQube, and we used to have a BuildBreaker plugin for this exact use case. But starting from version 5.2, aggregation of metrics is done asynchronously on the SonarQube server side. This means the build/scanner process finishes successfully just after publishing raw data to the SonarQube server, without waiting for the aggregation to complete.

Some people tried to resurrect the BuildBreaker feature by implementing active polling at the end of the scanner execution. We never supported this solution, since it defeats one of the benefits of having asynchronous aggregation on the SonarQube server side: it means your CI executors/agents will be occupied “just” for a wait.

The cleanest pattern to achieve this is to release the CI executor and have the SonarQube server send a notification when aggregation is completed. The CI job is then resumed and takes the appropriate actions (not only marking the job as failed; it could also send email notifications, for example).

All of this is now possible thanks to the webhook feature introduced in SonarQube 6.2. We also take advantage of the Jenkins Pipeline feature, which allows part of a job’s logic to be executed without occupying an executor.

Let’s see it in action.

First, you need SonarQube server 6.2+. In your Jenkins instance, install the latest version of the SonarQube Scanner for Jenkins plugin (2.6.1+). And of course, configure the credentials to connect to the SonarQube server in the Jenkins administration section.

In your SonarQube server administration page, add a webhook entry:

https://<your Jenkins instance>/sonarqube-webhook/
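
Once the background task finishes, SonarQube will POST a JSON payload to that URL. Trimmed to the fields relevant here, and with purely illustrative values, it looks roughly like this (a sketch based on our understanding of the 6.2 payload, not the authoritative schema):

{
  "taskId": "AVh21JS2JepAEhwQ-b3u",
  "status": "SUCCESS",
  "analysedAt": "2017-04-19T15:14:00+0200",
  "project": {
    "key": "org.example:my-project",
    "name": "My Project"
  },
  "qualityGate": {
    "name": "SonarQube way",
    "status": "OK"
  }
}

The qualityGate.status value is what waitForQualityGate() will surface in the pipeline below.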


Now you can configure a pipeline job using the two SonarQube keywords ‘withSonarQubeEnv’ and ‘waitForQualityGate’.

The first one should wrap the execution of the scanner (which will occupy an executor), and the second one will ‘pause’ the pipeline in a very lightweight way, waiting for the webhook payload.

node {
  stage('SCM') {
    git 'https://github.com/foo/bar.git'
  }
  stage('build & SonarQube Scan') {
    withSonarQubeEnv('My SonarQube Server') {
      sh 'mvn clean package sonar:sonar'
    } // SonarQube taskId is automatically attached to the pipeline context
  }
}
 
// No need to occupy a node
stage("Quality Gate") {
  timeout(time: 1, unit: 'HOURS') { // Just in case something goes wrong, pipeline will be killed after a timeout
    def qg = waitForQualityGate() // Reuse taskId previously collected by withSonarQubeEnv
    if (qg.status != 'OK') {
      error "Pipeline aborted due to quality gate failure: ${qg.status}"
    }
  }
}

Here you are:


That’s all, folks!

Categories: Open Source

SonarQube 6.3 in Screenshots

Sonar - Wed, 04/12/2017 - 16:55

The SonarSource team is proud to announce the release of SonarQube 6.3, which brings both interface and analysis improvements.

  • Project “Activity” page
  • More languages on board by default
  • Global search improvements
  • Backdating issues raised by new rules on old code
  • The return of UI extension points

Project “Activity” page

This version introduces an Activity page at the project level. It replaces the History page found in previous versions, but unlike the History page, Activity can be seen by all users with project permissions, not just admins.

The activity list starts on the project home page, replacing the Events list:

On the project home page, only the most recent analyses are shown, but click through on “Show More” or the new “Activity” menu item, and you land at the full list:

Admins will find here the full list of editing options they’re used to, and users will be able to see the list of analyses on file for a project for the first time!

More languages on board by default

SonarQube 6.3 now embeds the latest versions of most SonarSource code analyzers: SonarC#, SonarFlex, SonarJava, SonarJS, SonarPHP, and SonarPython. That means less setup work on new installations and on upgrades:

Global search improvements

6.3 also brings several improvements to global search. First, it’s now backed by Elasticsearch, so it’s fast. Making that switch allowed us to improve not just the speed but the results as well. Now you can search by multiple terms, and your results will be ordered by relevance:

Backdating issues raised by new rules on old code

If you’re living by the Leak Period, you know the pain of adding new rules to your quality profile: suddenly code you haven’t touched in months or even years has “new” issues, valid issues you need to silence somehow, either by marking them Won’t Fix or by editing code you previously had no plan to touch. Because we dogfood new rules at SonarSource, we felt this pain acutely.

Well, help is here. Starting with 6.3, SonarQube backdates issues raised by newly activated rules on old code to the line’s last commit date. No longer will you be forced to excavate old code to clean up a specious leak. Instead, you can activate new rules with abandon, knowing that the only issues that show up in the leak period will be the ones that actually belong there.

The return of UI extension points

6.3 is the first version to reach the target architecture of a UI written completely in JavaScript. As a consequence, we’ve been able to re-introduce the ability to extend the UI at both the global and project levels. The docs give the details on how to go about that.

That’s all, folks!

It’s time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession: how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain (“I think I can move this field over there, but I’m not sure”) and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get into a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but then go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around: eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2 && $this->b->isChoice1Valid();
}

Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking; I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.

 

===============================================

I used the above as an exercise to try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”

Categories: Open Source
