
The SonarQube COBOL Plugin Tracks Sneaky Bugs in Conditions

Sonar - Thu, 05/21/2015 - 13:28

Not long ago, I wrote that COBOL is not a dead language and there are still billions of lines of COBOL code in production today. At COBOL’s inception back in 1959, the goal was to provide something close to natural language so that even business analysts could read the code. As a side effect, the language is really, really verbose. Any Ruby, Python, or Scala developer who complains about the verbosity of Java, C#, or C++ should have a look at a COBOL program to see how much worse it could be :). Moreover, since there is no concept of a local variable in COBOL, the ability to factor common pieces of code out into PARAGRAPHS or SECTIONS is limited. In the end, the temptation to duplicate logic is strong. When you combine those two flaws, verbosity and duplicated logic, the consequence is predictable: it’s pretty easy in COBOL to inject bugs into conditions.

Let’s take some examples we’ve found in production code:

Inconsistencies between nesting and nested IF statements

In the following piece of code, the condition of the second, nested IF statement is always TRUE. When evaluation of the nested condition starts, we already know that the value of ZS-LPWN-IDC-WTR-EMP is ‘X’, so by definition this is TRUE: ZS-LPWN-IDC-WTR-EMP NOT EQUAL 'N' AND 'O'. What was the developer’s intent here? Who knows?
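The screenshot from the original post is not reproduced here, but a minimal COBOL sketch of the pattern described (only the data item name comes from the text; the surrounding logic is assumed) would look something like this:

```cobol
IF ZS-LPWN-IDC-WTR-EMP = 'X'
    ...
*>  Always TRUE: the outer IF already guarantees the value is 'X',
*>  which is neither 'N' nor 'O'.
    IF ZS-LPWN-IDC-WTR-EMP NOT EQUAL 'N' AND 'O'
        ...
    END-IF
END-IF
```

Note that in COBOL’s abbreviated combined relation conditions, `NOT EQUAL 'N' AND 'O'` expands to `NOT EQUAL 'N' AND NOT EQUAL 'O'`.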

And what about the next one? The second condition in this example is by definition always TRUE, since the nesting ELSE block is executed if and only if KST-RETCODE is not equal to ‘02’ and ‘13’:
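As a hypothetical reconstruction (only KST-RETCODE and the values ‘02’ and ‘13’ are given in the text), the shape of the problem is:

```cobol
IF KST-RETCODE = '02' OR '13'
    ...
ELSE
*>  Always TRUE: this branch is only reached when KST-RETCODE
*>  is neither '02' nor '13'.
    IF KST-RETCODE NOT = '02' AND '13'
        ...
    END-IF
END-IF
```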

Inconsistencies in the same condition

In the following piece of code, the same condition requires ZS-RPACCNT-NB-TRX to be equal to both 1 and 0. Obviously quantum theory does not apply in COBOL, and a data item can’t have two values at the same time.
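A sketch of that contradiction (the real condition was more complex; only the data item name is given in the text):

```cobol
*>  Always FALSE: a single data item cannot equal 1 and 0 at once.
IF ZS-RPACCNT-NB-TRX = 1 AND ZS-RPACCNT-NB-TRX = 0
    ...
END-IF
```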

The next example is pretty similar, except that here it is “just” a sub-part of the condition which is always TRUE: (ZS-BLTRS-KY-GRP NOT = 'IH' OR ZS-BLTRS-KY-GRP NOT = 'IN'). We can safely assume that this is not what the developer meant to write.
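Spelled out as a sketch, with an assumed guess at the intended fix:

```cobol
*>  The parenthesised sub-condition is always TRUE: whatever the value
*>  of ZS-BLTRS-KY-GRP, it differs from at least one of 'IH' and 'IN'.
IF ... AND (ZS-BLTRS-KY-GRP NOT = 'IH' OR ZS-BLTRS-KY-GRP NOT = 'IN')
    ...
END-IF

*>  The developer probably meant AND instead of OR:
*>  IF ... AND (ZS-BLTRS-KY-GRP NOT = 'IH' AND ZS-BLTRS-KY-GRP NOT = 'IN')
```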

Inconsistencies due to the format of data items

What’s the issue with the next very basic condition?

ZS-RB-TCM-WD-PDN cannot be greater than 9 since it’s declared as a single-digit number:
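As a sketch (the PIC clause is assumed from the text’s description of a single-digit declaration):

```cobol
01  ZS-RB-TCM-WD-PDN    PIC 9.
...
*>  Always FALSE: a PIC 9 item can only hold the values 0 through 9.
IF ZS-RB-TCM-WD-PDN > 9
    ...
END-IF
```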

With version 2.6 of the COBOL plugin, you can track all these kinds of bugs in your COBOL source code. So let’s start hunting them to make your COBOL application even more reliable!

Categories: Open Source

SonarQube User Conference in Paris

Sonar - Mon, 05/11/2015 - 12:12

We are very happy to announce that we are organizing our first Paris SonarQube User Conference, on June 10, 2015 from 9:30 a.m. until 2 p.m. at the Salons de l’Aéro-Club, 6 Rue Galilée in the 16th arrondissement.

This conference offers a great opportunity to meet other members of the community and compare notes on your experiences with the platform. We’ll also discuss new features and the platform roadmap. We’re expecting many members of the community, including speakers from our global customer base and partners, and of course the SonarSource team. We anticipate heated debate around the adoption of, and perspectives on, SonarQube, as well as some insights into the paradigm shift in managing Technical Debt!

The program will also feature talks from Jean-Louis Letouzey (www.sqale.org) on understanding and leveraging the SQALE methodology, and Jean-Marc Prieur from Microsoft (http://blogs.msdn.com/b/jmprieur/) on the integration with Visual Studio & TFS.

To register for the event, simply send an email to kristi.karu At sonarsource.com. We are looking forward to meeting you there!

Categories: Open Source

Announcing SonarQube integration with MSBuild and Team Build

Sonar - Wed, 04/29/2015 - 00:20

This is a cross-post from the Microsoft ALM website.

Technical debt is the set of problems in a development effort that make forward progress on customer value inefficient. Technical debt saps productivity by making code hard to understand, fragile, difficult to validate, and creates unplanned work that blocks progress. Technical debt is insidious. It starts small and grows over time through rushed changes, lack of context and lack of discipline. Organizations often find that more than 50% of their capacity is sapped by technical debt.

SonarQube is an open source platform that is the de facto solution for understanding and managing technical debt.

Customers have been telling us and SonarSource, the company behind SonarQube, that the SonarQube analysis of .NET apps and its integration with Microsoft build technologies needed to be considerably improved.

Over the past few months, we have been collaborating with our friends at SonarSource, and we are pleased to make available a set of integration components that let you configure a Team Foundation Server (TFS) build to connect to a SonarQube server and send it the following data, gathered during the build under the governance of the quality profiles and gates defined on the SonarQube server:

  • results of .NET and JavaScript code analysis
  • code clone analysis
  • code coverage data from tests
  • metrics for .NET and JavaScript

We have initially targeted TFS 2013 and above, so customers can try out these bits immediately with the code and build definitions they already have. We have tried using them with builds in Visual Studio Online (VSO) using an on-premises build agent, but we have uncovered a bug around the discovery of code coverage data, which we are working on resolving. When it is fixed, we’ll post an update on this blog. We are also working on integration with the next generation of build in VSO and TFS.

In addition, SonarSource has produced a set of .NET rules, written using the new Roslyn-based code analysis framework, and published them in two forms: a NuGet package and a VSIX. With this set of rules, the analysis that is done as part of the build can also be done live inside Visual Studio 2015, exploiting the new Visual Studio 2015 code analysis experience.

The source code for the above has been made available at https://github.com/SonarSource.

We are also grateful to our ever-supportive ALM Rangers, who have, in parallel, written a SonarQube Installation Guide, which explains how to set up a production-ready SonarQube installation to be used in conjunction with Team Foundation Server 2013 to analyse .NET apps. It includes references to the new integration components mentioned above.

This is only the start of our collaboration. We have lots of exciting ideas on our backlog, so watch this space.

As always, we’d appreciate your feedback on how you find the experience and ideas about how it could be improved to help you and your teams deliver higher quality and easier to maintain software more efficiently.

Categories: Open Source

SonarQube 5.1 in Screenshots

Sonar - Thu, 04/23/2015 - 07:32

The team is proud to announce the release of SonarQube 5.1, which includes many new features:

  • New issues page & improved issue management
  • New rules page
  • Improved layout and navigation
  • Simplified Component Viewer
  • All text files in a project imported
  • Preview analysis timezone issue solved

New Issues Page & Improved Issue Management

Vast improvements in issue handling have gone into this version. First, there’s the replacement of the Issues Drilldown with the full power of the Issues page, contextualized to the current project.

Next are the long-awaited issue tags! Issues inherit tags from their rules, but the list is user-editable per issue.

Issue tags also come with a new widget, to show the distribution of issues in a project by tag:

Also on the long-awaited list is the ability to mark an issue “Won’t fix”. Choose that option from the dropdown and the issue disappears from issue counts and technical debt calculations at the next analysis.

Another key improvement is the automatic assignment of new issues to the last modifiers of the relevant lines. SonarQube user accounts are matched automatically to committers when possible, but it’s also possible to make those associations manually.

And finally in the Issues Management area, the functionality of the Issues Report plugin has been moved into core, so you get those capabilities out of the box now.

New Rules Page

The Rules page has also made the final step in its transition. Its new page structure will be familiar from the Issues page, and Rules now features the same powerful and intuitive search facets.

When you’re in a Rule Profile context, inheritance is now clearly displayed in the results list, and it’s easy to toggle your search between what is and is not activated in the profile.

The rule detail has been enhanced too, most notably by the addition of linked issue counts for each rule.

Improved Layout and Navigation

The first thing you’ll notice is that you’ve got more horizontal space for content, because we’ve removed the blue navigation bar on the left.

Global navigation is in the top menu and a sub-menu has been added for navigation within a project:

The new top menu features a home icon on the left – the SonarQube logo by default – which can be customized with your own logo.

And by default, the search menu (keyboard shortcut: s) now starts with your recently-used items:

We’ve also made the help menu more obvious. You could see it before with the ‘?’ keyboard shortcut, but now there’s an icon too.

Simplified Component Viewer

The Component Viewer has been simplified in this version: there’s no more need to turn decorations on and off; everything is on by default.

And the “Show Details” option in the More Actions menu pulls up a display of all the file metrics.

All Text Files in a Project Imported

It’s now possible to import all the files in your project. This allows you to have a fuller view of your project in SonarQube and to create manual issues on those files.

Preview Analysis Timezone Issue Solved

And finally, the timezone problem that kept people in different timezones than their SonarQube servers from performing preview analysis has been fixed. There’s not much to show for this point, but it’s significant enough to many to deserve a mention here.

That’s All, Folks!

Time now to download the new version and try it out. But don’t forget that you’ll need Java 7 to run this version of the platform (you can still analyse Java 6 code), and be sure to read the installation or upgrade guide.

Categories: Open Source

SonarQube User Conference – U.S. West (Santa Clara, CA)

Sonar - Wed, 04/08/2015 - 22:01

We are very happy to announce that the second SonarQube user conference will take place on April 27th at the Santa Clara Convention Center, in Santa Clara, California.

Following our East Coast tour in October of last year and the good feedback we received from it, we have decided to host another event, this time on the West Coast. The format will be slightly different, with two distinct sessions:

  1. An executive session in the morning covering major topics on our roadmap, as well as strategy, adoption, and results. This session is reserved for our customers.
  2. A more technical session in the afternoon, during which we will demonstrate the latest features, such as new integrations with Visual Studio and TFS, new bug detection features, issue filtering through Elasticsearch… We will also discuss the challenges and future state of the SonarQube platform and Continuous Inspection as a practice.

We believe this is a unique chance for us to meet our users, and for you to provide feedback on the product and to hear where we are going, so we hope you’ll be able to join us.

There is still time to register, but don’t wait too long; the seats are filling up fast! To reserve your seat, simply send an email to kristi.karu At sonarsource.com.

And if you can’t make it to California this spring, don’t worry. We’re planning a similar event this fall in Europe. Stay tuned for more details.

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession: how to slice the work of a refactoring into safe steps, and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain (“I think I can move this field over there, but I’m not sure”), and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get into a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but then try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around: eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2... && $this->b->isChoice1Valid();
}

Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking; I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa), I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving on to the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.

 

===============================================

I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”

Categories: Open Source
