
COBOL is… Alive!

Sonar - Wed, 01/14/2015 - 20:20

Most C, Java, C++, C#, JavaScript… developers reading this blog entry might think that COBOL is dead and that SonarSource would do better to focus its attention on more hyped languages like Scala, Go, Dart, and so on. But in 1997, the Gartner Group reported that 80 percent of the world’s business ran on COBOL, with more than 200 billion lines of code in existence and an estimated 5 billion lines of new code written annually. COBOL is mainly used in the banking and insurance markets, and from what we have seen in recent years, the erosion of the number of COBOL lines of code in production is pretty low. So not only is COBOL not dead YET, but it will take several decades for that death to actually happen. We released the first version of the COBOL plugin at the beginning of 2010, and this language plugin was in fact the first one to embed our own source code analysis technology, even before Java, C, C++, PL/SQL, … So at SonarSource, COBOL is a kind of leading technology :).

Multiple vendor extensions and lack of structure

The COBOL plugin embeds more than 130 rules, but before talking about those rules, let’s talk about the wide range of COBOL dialects the plugin supports. Indeed, since 1959 several specifications of the language and of the preprocessor behavior have been published, and most COBOL compilers have extended those specifications. So providing an accurate COBOL source code analyser means supporting most of those dialects: IBM Enterprise Cobol, HP Tandem, Bull GCos, IBM Cobol II, IBM Cobol 400, IBM ILE Cobol, Microfocus AcuCobol, OpenCobol, … which is the case for our plugin. Moreover, for those of you who are not familiar with COBOL source code: imagine a C source file containing 20,000 lines of code, no functions, and just some labels to group statements and make it possible to “emulate” the concept of a function. Put that way, I guess everyone can understand how easy it is to write unmaintainable and unreliable COBOL programs.

Need for tooling

Starting from this observation, managing a portfolio of thousands of COBOL programs, each containing thousands of lines of code, without any tooling to automatically detect quality defects and potential bugs is a bit risky. The SonarQube COBOL plugin makes it possible to continuously analyse millions of lines of COBOL code to detect such issues. Here are several examples of the rules provided by the plugin:

  • Detection of unused paragraphs, sections and data items.
  • Detection of incorrect PERFORM ... THRU ... control flow, where the starting procedure is located after the ending one in the source code, thus leading to unexpected behavior.
  • Tracking of GO TO statements that transfer control outside of the current module, leading to unstructured code.
  • Copy of a data item (variable) into another, smaller data item, which can lead to data loss.
  • Copy of an alphanumeric data item to a numeric one, which can also lead to data loss.
  • Tracking of EVALUATE statements not having the WHEN OTHER clause (similar to an if without an else).
  • Detection of files which are opened but never closed.
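To make one of these rules concrete, here is a deliberately naive Python sketch of what the EVALUATE-without-WHEN-OTHER check might look like. This is a toy regex-based scan for illustration only, nothing like the plugin’s real parser-based analysis, and the COBOL snippet is invented:

```python
import re

def missing_when_other(cobol_source):
    """Toy check: report line numbers of EVALUATE blocks lacking WHEN OTHER."""
    findings = []
    # Crudely split the source into EVALUATE ... END-EVALUATE blocks.
    for match in re.finditer(r"EVALUATE\b(.*?)END-EVALUATE",
                             cobol_source, re.DOTALL | re.IGNORECASE):
        block = match.group(1)
        if not re.search(r"\bWHEN\s+OTHER\b", block, re.IGNORECASE):
            # Report the line on which the EVALUATE starts.
            line = cobol_source[:match.start()].count("\n") + 1
            findings.append(line)
    return findings

src = """
    EVALUATE WS-CODE
        WHEN 1 PERFORM HANDLE-ONE
        WHEN 2 PERFORM HANDLE-TWO
    END-EVALUATE
"""
print(missing_when_other(src))  # [2] -- the EVALUATE on line 2 is flagged
```

A real implementation works on a parse tree of the dialect-specific grammar, which is precisely why supporting all those dialects is the hard part.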

And among those 130+ rules, 30+ target the SQL code which can be embedded in COBOL programs. One such rule tracks LIKE conditions starting with *. Another tracks the use of arithmetic expressions and scalar functions in WHERE conditions. And last but not least, here are some other key features of the SonarSource COBOL plugin:

  • Copybooks are analysed in the context of each COBOL program and issues are reported directly on those copybooks.
  • Remediation cost to fix issues is computed with the help of the SQALE method.
  • Even on big COBOL applications containing thousands of COBOL programs and so potentially millions of lines of code and thousands of issues, tracking only new issues on new or updated source code is easy.
  • Duplications in PROCEDURE DIVISION and among all COBOL programs can also be tracked easily.
  • To make sure that code complies with internal coding practices, a Java API allows the development of custom rules.

How hard is it to evaluate this COBOL plugin?

So YES, Cobol is alive, and the SonarSource COBOL plugin helps make it even more maintainable and reliable.

Categories: Open Source

SonarQube 5.x series: It just keeps getting better and better!

Sonar - Fri, 01/09/2015 - 15:03

We recently wrapped up the 4.x series of the SonarQube platform by announcing its Long Term Support version: 4.5.1. At the same time, we sat down to map out the themes for the 5.x series, and we think they’re pretty exciting.

In the 5.x series, we want the SonarQube platform to become:

  • Fully operational for developers: with easy management of the daily incoming technical debt, and “real” cross-source navigation features
  • Better tailored for big companies: with great performance and more scalability for large instances, and no more DB access from an analysis

Easy management of the daily incoming technical debt

A central Issues page

If you came home one day to find an ever-growing puddle of water on your floor, what’s the first thing you’d do? Grab a mop, or find and fix the source of the water? It’s the same with technical debt. The first thing you should care about is stopping the increase in debt (shutting off the leak) before fixing existing debt (grabbing a mop).

Until now, the platform has been great for finding where technical debt is located, but it hasn’t been a good place for developers to efficiently manage the incoming technical debt they add every day. Currently, you can subscribe to notifications of new issues, but that’s all. We think that’s a failing; developers should be able to rely on SonarQube to help them in this daily task.

To accomplish this, we’ll make the Issues page central. It will be redesigned to let users filter issues very efficiently thanks to “facets”. For instance, it will be almost effortless to see “all critical and blocker issues assigned to me on project Foo” with a distribution per rule. Or “all JavaScript critical issues on project Foo”.
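As an illustration of what a “facet” computes, here is a small Python sketch: apply the user’s filters to the issue list, then count the distribution per rule. The issue fields and rule keys are made up for the example; the real platform computes this server-side:

```python
from collections import Counter

# A handful of fake issues with the fields the facets filter on.
issues = [
    {"project": "Foo", "severity": "CRITICAL", "rule": "S1481", "assignee": "me"},
    {"project": "Foo", "severity": "BLOCKER",  "rule": "S2259", "assignee": "me"},
    {"project": "Foo", "severity": "CRITICAL", "rule": "S1481", "assignee": "me"},
    {"project": "Foo", "severity": "MINOR",    "rule": "S100",  "assignee": "me"},
]

# "All critical and blocker issues assigned to me on project Foo" ...
selected = [i for i in issues
            if i["project"] == "Foo"
            and i["assignee"] == "me"
            and i["severity"] in ("CRITICAL", "BLOCKER")]

# ... with a distribution per rule (the "rule" facet).
rule_facet = Counter(i["rule"] for i in selected)
print(rule_facet)  # Counter({'S1481': 2, 'S2259': 1})
```

Each facet is just such a count over the currently filtered set, recomputed as the user refines the query.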

With these new capabilities, the central Issues page will inevitably replace the old Issues drilldown page and eliminate its limitations (e.g. few filters, static counts that aren’t updated when issues are changed in the UI, …). In other words, when dealing with issues and technical debt, users will be redirected to the Issues space, and benefit from all those new features.

Issues will also get a tagging mechanism to help developers better classify pending technical debt. Each issue will inherit the tags on its associated rule, so it will be easy to find “security” issues, for instance. And users will be able to add or remove additional tags at will. This will help give a clearer vision of what the technical debt on a project is about: is it mainly bugs or just simple naming conventions? “legacy framework”-related issues or “new-stack-that-must-be-mastered” issues?
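The tagging model described above can be sketched in a few lines. This is a minimal illustration of the rule-tags-plus-user-tags idea, not SonarQube code:

```python
class Issue:
    """An issue inherits its rule's tags; users add or remove their own on top."""

    def __init__(self, rule_tags):
        self.rule_tags = set(rule_tags)   # inherited from the rule
        self.user_tags = set()            # managed at will by users

    def add_tag(self, tag):
        self.user_tags.add(tag)

    def remove_tag(self, tag):
        self.user_tags.discard(tag)

    @property
    def tags(self):
        # The effective tag set is the union of both sources.
        return self.rule_tags | self.user_tags

issue = Issue(rule_tags={"security", "cwe"})
issue.add_tag("legacy-framework")
print(sorted(issue.tags))  # ['cwe', 'legacy-framework', 'security']
```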

The developer at the center of technical debt management

Developing those great new features on the Issues page is almost useless if you, as a developer, always have to head to the Issues page and play with “facets” to find what you’re looking for. Instead, SonarQube must know what matters to you as a developer, i.e. it must be able to identify and report on “my” code. This is one reason the SCM Activity plugin will gently die, and come back to life as a core feature in SonarQube – with built-in support for Git and Subversion (other SCM providers will be supported as plugins). This will let SonarQube know which changes belong to which developer, and automatically assign new issues to the correct user. So you’ll no longer need to swim through all of the incoming debt each day to find your new issues.
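The auto-assignment idea boils down to joining SCM blame data with the new issues of an analysis. A hedged sketch, with invented function names and data shapes:

```python
def assign_new_issues(new_issues, blame):
    """Map each new issue to the author who last touched its line.

    new_issues: list of (file, line) pairs raised by the latest analysis.
    blame: {(file, line): author} as reported by git/svn blame.
    Issues on lines with no blame information stay unassigned.
    """
    return {issue: blame.get(issue, "unassigned") for issue in new_issues}

blame = {("Foo.java", 10): "alice", ("Foo.java", 42): "bob"}
new_issues = [("Foo.java", 10), ("Foo.java", 42), ("Bar.java", 7)]
print(assign_new_issues(new_issues, blame))
# {('Foo.java', 10): 'alice', ('Foo.java', 42): 'bob', ('Bar.java', 7): 'unassigned'}
```

The interesting engineering is in getting reliable blame data per line; once you have it, the assignment itself is a simple lookup.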

“Real” cross-source navigation features

For quite some time, SonarQube has been able to link files together in certain circumstances – through duplications (you can navigate to the files that have blocks in common with your code) or test methods (when coverage per test is activated, you can navigate to the test file that covers your code, and vice-versa). But, this navigation capability has been quite limited, and the workspace concept that goes with it is the best proof of that: it is restricted to the context of the component viewer.

With the great progress made on the language plugin side, SonarQube will be able to know that a given variable or function is defined outside of the current file, and take you to the definition. This new functionality can help developers understand issues more quickly and thoroughly, without the need to open an IDE. You no longer have to wonder where a not-supposed-to-be-null-but-is attribute is defined. You’ll be able to jump to the class definition right from the Web UI. And if you navigate far away from the initial location, SonarQube will help you remember your way, and give quick access to the files you browsed recently – wherever they were. In fact, we want SonarQube to become the central place to take a quick look at code without expending a lot of effort to do it (i.e. without the need to go to a workstation, open an IDE, pull the latest code from the SCM repository, probably build it, …).

Focus on scalability and performance

SonarQube started as a “small” application and gradually progressed to become an enterprise-ready application. Still, its Achilles’ heel is the underlying relational database. This is the bottleneck each time we want SonarQube to be more scalable and performant. What’s more, supporting 4 different database vendors multiplies the difficulty of writing complex SQL queries efficiently. So even though the database will remain the place where we ensure data integrity, updating that data with analysis results must be done through the server, and searching must use a stack designed for performant searches across large amounts of data. We’ve implemented this new stack using Elasticsearch (ES) technology.

Throughout the 5.x series, most domains will slowly get indexed in ES, giving a performance boost when requesting the data. This move will also open new doors to implementing features that were inaccessible with a relational database – like the “facets” used on the Rules or Issues pages. And because ES is designed to scale, SonarQube will benefit from its ability to give amazing performance while adding new features on large instances with millions of issues and lines of code.

Decoupling the SonarQube analyses from the DB

The highest-voted open ticket on JIRA is also one of the main issues when setting up SonarQube in large companies: why does project analysis make so many queries to the database? And actually, why does it even need a connection to the database at all? This causes big performance problems (when the analysis is run far away from the DB) and security problems (DB credentials must be known by the batch, and specific ports must be opened).

Along the way, the SonarQube 5.x releases will progressively cut dependencies on the database so that in the end, analysis simply generates a report and sends it to the server for processing. This will not only address the performance and security concerns, it will also greatly improve the design of the whole architecture, clearly carving it into different stacks with their own responsibilities. In the end, analysis will only call the analysers provided by the language plugins, making source code analysis blazing fast. Everything related to data aggregation or history computation (which once required so many database queries during analysis) will be handled by a dedicated “Compute Engine” stack on the server. Integration in the IDE will also benefit from this separation because only the language plugin analysers will be run – instead of the full process – opening up opportunities for “on-the-fly” analyses.

Enhanced authentication and authorization system

A couple of versions ago, we started an effort to break the initial coarse-grained permissions (mainly global ones) into smaller ones. The target is to be able to have more control over the different actions available in SonarQube, and to be able to define and customize the roles available on the platform. This is particularly important on the project side, where there are currently only 4 permissions, and they don’t allow a lot of flexibility over what users can or cannot do on a project.

On the authentication side, the focus will be providing a reference Single Sign-On (SSO) solution based on HTTP headers – which is a convenient and widespread way of implementing SSO in big companies. API token authentication should also come along to remove the need to pass user credentials over the wire for analysis or IDE configuration.
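Header-based SSO can be sketched as follows. The header name and function are assumptions for illustration only; the key point is that the header must only be trusted when the request actually comes through the authenticating reverse proxy:

```python
# Assumed header name, for illustration -- the reverse proxy authenticates
# the user and forwards the login in this header.
TRUSTED_PROXY_HEADER = "X-Forwarded-Login"

def authenticate(headers, from_trusted_proxy):
    """Return the authenticated login, or None.

    Only honour the header when the request came through the trusted proxy;
    otherwise anyone could impersonate a user by setting it themselves.
    """
    if not from_trusted_proxy:
        return None
    return headers.get(TRUSTED_PROXY_HEADER)

print(authenticate({"X-Forwarded-Login": "alice"}, from_trusted_proxy=True))    # alice
print(authenticate({"X-Forwarded-Login": "mallory"}, from_trusted_proxy=False)) # None
```

This is why header-based SSO is popular in big companies: the application stays simple, and all the authentication machinery (LDAP, Kerberos, smart cards, …) lives in the proxy.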

All this with other features along the way

These are the main themes we want to push forward for the 5.x series, but obviously lots of other “smaller” features will come along the way. At the time I’m writing this post, we’ve already started working on most of those big features and we are excited about seeing them come out in upcoming versions. I’m sure you share our enthusiasm!

Categories: Open Source

Walking the Tightrope: Balancing Agility and Stability

Sonar - Fri, 12/12/2014 - 14:22

About a year ago we declared a Long Term Support (LTS) version for the first time ever, and recently, we declared another one (version 4.5.1). But we never talked about what LTS means or why we did it.

Here’s the story:

SonarSource is an agile company. We believe deeply in the agile principles, including this one:

Deliver working software frequently, from a
couple of weeks to a couple of months, with a
preference to the shorter timescale.

So that’s why we deliver a new version about every two months. We know we need to get our changes out to the users so they can tell us what we’ve done well, and what we haven’t. (Feedback of the latter kind is more frequent, but both kinds are appreciated!) That way we can fix our goofs before they’re too deeply embedded under other layers of logic to fix easily.

That’s the agility part: release frequently and respond to customer feedback. But what about stability?

We know that many of our users are in large organizations. Since many of us came to SonarSource from large companies, we understand how well such organizations deal with frequent change: not well at all, sometimes. Instead, they need stability and reliability.

For a while, that left us with a conundrum: how could we be responsive to the customer need for stability, and still be agile?

(Drum roll, please!) Enter the LTS version.

An LTS version marks a significant milestone for us: it means that through incremental delivery, we’ve brought a feature set to completion and worked out the kinks. The vision that was established (a year ago, in this case) has been achieved, and it’s time to set out a new vision. (More on that soon!)

Once a version is marked LTS, we pledge to maintain it and fix any non-trivial bugs until the next LTS version is released. That way, users who need stability know that they’ll never be forced to upgrade to a non-LTS version just for a bug fix.

And if a bug fix is required for an LTS version, you know you can upgrade to it without any other change in behavior or features, i.e. it’s completely transparent to your users. Of course, we don’t mark a version LTS until we know it’s stable and we’ve fixed all the bugs in it that we’re aware of. So the chance that you’ll need to perform this kind of transparent upgrade is small.

Of course, there are trade-offs. We release frequently, and pack a lot of work into each release. By opting to stay with an LTS version, you trade the cool new features for stability (yes, I know that’s a trade worth making for many people).

But there’s another trade-off to be aware of. When you go from one LTS version to the next, you’ll see a lot of change all at once. This time the jump from one LTS to the next includes significant UI changes, plugin compatibility differences, and changes that impact project analysis (possibly requiring analysis job reconfigurations). There’s an overview in the docs.

On the whole, a jump from one LTS to the next one will be awesome, but you need to be aware that it may not feel as trivial as SonarQube upgrades usually do. Really, it’s just a question of how you want to rip off the Band-Aid®.

Categories: Open Source

New LTS Version Sums Impressive Array of New Features

Sonar - Thu, 12/04/2014 - 14:35

In November, SonarQube version 4.5.1 was announced as the new Long Term Support (LTS) release of the platform. It’s been nearly a year since the last LTS version was announced – a very busy, feature-packed year. Let’s take a look at the results.

Technical Debt moves into the core

If you’re upgrading directly from 3.7.4, the previous LTS, the change with the biggest impact is the addition of built-in support for Technical Debt calculation. The SonarQube platform is all about identifying and displaying technical debt, but previously there was no built-in way to calculate the cumulative amount.

Over the course of the last year, we’ve fixed that by moving the bulk of our SQALE implementation from the commercial SQALE plugin into the core platform. (Shameless plug: there’s still plenty of value left in the plugin.) Now you can see your technical debt in time (minutes, hours or days) without spending a penny. It’ll show up automatically in the re-vamped Issues and Technical Debt widget.

You can also see the technical debt ratio (time to fix the app versus an estimate of the time spent writing the current code base) and the SQALE rating. They show up in the Technical Debt Synopsis widget.

For more on Technical Debt, check out the docs.

Multi-language analysis debuts

This year also saw the fulfillment of our most-requested feature ever: the ability to analyze a project for multiple languages. Now you can analyze all the Java, JavaScript, HTML, etc. in your project with a single analysis, and see the consolidated results in SonarQube.

Note that if you previously relied on SonarQube’s default language setting (typically “Java”) rather than specifying sonar.language in the analysis properties, your first analysis after jumping to the new LTS will pick up more files than previously. Under multi-language, every language plugin that sees some of “its” files in your project will kick in automatically.
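The “every plugin that sees its files kicks in” behavior can be illustrated with a toy suffix-matching dispatch. The suffix lists and function names are invented for the example:

```python
# Each language plugin declares the file suffixes it handles (illustrative).
PLUGIN_SUFFIXES = {
    "java": [".java"],
    "javascript": [".js"],
    "web": [".html", ".xhtml"],
}

def plugins_for(files):
    """Group project files by the plugin that would analyse them."""
    triggered = {}
    for f in files:
        for plugin, suffixes in PLUGIN_SUFFIXES.items():
            if any(f.endswith(s) for s in suffixes):
                triggered.setdefault(plugin, []).append(f)
    return triggered

project = ["src/App.java", "src/app.js", "src/index.html"]
print(plugins_for(project))
# {'java': ['src/App.java'], 'javascript': ['src/app.js'], 'web': ['src/index.html']}
```

With the old single-language setting, only one of those buckets was analysed; under multi-language, all of them are, which is why your first analysis may pick up more files.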

Rule management and Quality Gates emerge from the shadow of Profiles

Another paradigm shift this year was the removal of Rule management and Quality Gates (previously known as Alerts) from Quality Profiles. Now Quality Profiles are simply collections of rules, and Quality Gates and Rule management stand on their own.

Previously, if you wanted to apply the same standards to all projects across all languages, you had to set those standards up as Alerts in each and every profile, i.e. you had to duplicate them. With the introduction of Quality Gates, those standards gain first-class citizenship and become applicable as a set across all projects.

Similarly, if you wanted to browse rules before, you could only do it in the context of a Quality Profile, and interacting with rules across multiple profiles was cumbersome at best. With the introduction of the Rules space, you can browse rules outside the context of a profile, and easily 1) see when a rule is active in multiple profiles, 2) activate or deactivate it in any profile in the instance. There are also impressive new search capabilities and a bevy of smaller, but nonetheless valuable, new features.

UI gains consistency

Several pages throughout the interface have been reworked to improve consistency and add functionality. The new Rules and Quality Gate spaces were implemented with an updated UI design, so the Measures and Issues pages have been updated as well to keep pace.

In addition, the component viewer has been completely re-imagined. Now it’s possible, for instance, to see the duplicated blocks and the issues in a file at the same time.

Widgets are added, updated

Several new widgets have been added for displaying Measure filters since the last LTS: donut chart (a pie with a hole in the middle ;-), bubble chart, and histogram. Word cloud has moved from a separate page to a reusable widget. And TreeMap has been completely rewritten to improve responsiveness and functionality.

Interactions outside the interface improve

The final two changes to mention are a little less showy, but no less important.

First, an incremental analysis mode has been added. It allows you to run a local analysis (the SonarQube database is not updated) on only the files that have changed locally since the last full analysis. With the two IDE plugins and the ability to run incremental analysis manually, there are three options for pre-commit analysis.
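The core of an incremental mode is deciding which files changed since the last full analysis. A minimal sketch, assuming content hashes recorded at the previous analysis (the data shapes are invented for illustration):

```python
import hashlib

def changed_files(current, last_analysis_hashes):
    """Return the files whose content differs from the last full analysis.

    current: {path: source text} for the working copy.
    last_analysis_hashes: {path: sha1 hex digest} recorded previously.
    New files (no recorded hash) count as changed.
    """
    changed = []
    for path, source in current.items():
        digest = hashlib.sha1(source.encode()).hexdigest()
        if last_analysis_hashes.get(path) != digest:
            changed.append(path)
    return changed

previous = {"A.java": hashlib.sha1(b"class A {}").hexdigest()}
working_copy = {"A.java": "class A {}", "B.java": "class B {}"}
print(changed_files(working_copy, previous))  # ['B.java']
```

Analysing only that subset, and skipping the database update, is what makes the mode fast enough for pre-commit use.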

The last major change is the addition of built-in web service documentation. You’ll find the link in the SonarQube footer.

It leads to a listing of every web service available on the instance, with documentation for the methods, and in some cases a sample response. Never again will you wonder if you’re looking at the right version of the docs!

Coming up

It always seems a long time to me from release to release, but looking back, it has been an amazingly short time to have packed in so much new functionality. If you’re upgrading from the previous LTS, it’s definitely worth taking a look at the migration guide.

Coming up next, I’ll talk about SonarSource’s LTS philosophy – why we do it and what it means – and in another post I’ll talk about all the features planned for the next LTS.

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. Working in safe steps it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–”I think I can move this field over there, but I’m not sure”–and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.


The original (ten guarded blocks in a row):

if (...choice 1 valid...) { ... }
if (...choice 2 valid...) { ... }
...

After extracting the Choice objects:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    ...

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}

A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Half way through my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I don’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in half way then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.


Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.



I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
