
Open Source

Welcome to Kagibox, our first physical product!

IceScrum - Fri, 11/25/2016 - 15:33
Today’s topic is quite different from what we usually talk about here. No version of iceScrum or new feature for your favorite tool (don’t worry you will get some very soon), but a presentation of a brand new product we just launched this week: Kagibox! For a long time, our team has been thinking of…
Categories: Open Source

SonarQube 6.x series: Focused and Efficient

Sonar - Thu, 11/03/2016 - 15:09

At the beginning of the summer, we announced the long-awaited new “Long Term Support” version, SonarQube 5.6. It comes packed with great features to highlight and help developers manage the leak, and to ensure the security and scalability of large instances.

Now we’re concentrating on the main themes for the 6.x series, and based on the discussions we have had during our City Tour 2016, we’re sure that you’ll be as excited by these new features as you were with the ones in 5.6 LTS.

Better leak management

Support of file move and renaming

SonarQube 5.6 LTS provides all the built-in features you need to monitor and fix the leak on your source code: a project home page that highlights activity on code that was added or modified recently, and a quality gate that turns red whenever bugs or vulnerabilities make their way into new code.

Unfortunately, SonarQube 5.6 doesn’t understand moving or renaming files. That means that if an old file is moved, all its existing issues are closed (including the ones marked False Positive or Won’t Fix), and new ones are (re)opened on the file at its new location. An ugly side effect is that old issues end up in the leak period even though the file wasn’t edited. The end result is noise for development teams who refactor frequently.

This limitation is fixed in SonarQube 6.0, and development teams at SonarSource have been enjoying it for a couple of months already.

Better understanding of bugs and vulnerabilities

Over the past two years, SonarSource’s analyzers have reached a level of maturity that allows them to detect not only “simple” maintenance issues, but also trickier bugs that can only be found by using “symbolic execution” to explore multiple execution paths through the code in depth. That’s why Bugs and Vulnerabilities debuted in the 5.x series as part of the new SonarQube Quality Model. As you can imagine, detecting a bug can be very complex when lots of different execution paths have to be explored, and without more help it is just as hard for a developer to understand why SonarQube is reporting this or that bug. A glance at “SonarAnalyzer for Java: Tricky Bugs are Running Scared” shows that we had to draw arrows and explanations on the screenshots to help users understand how we discovered each bug.
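
To make this concrete, here is a minimal, hypothetical Java example (it is not taken from that article): the dereference at the end of print() is only a bug on the execution path where order is null and verbose is true, so the analyzer has to follow that specific path, and the developer needs to see it spelled out to understand the finding.

class OrderPrinter {
  // Hypothetical code, for illustration only.
  void print(Order order, boolean verbose) {
    String label = null;
    if (order != null) {
      label = order.id();
    }
    if (verbose) {
      // Bug: on the path where order == null, label is still null here,
      // so this call throws a NullPointerException.
      System.out.println(label.toUpperCase());
    }
  }
}

interface Order {
  String id();
}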

The next LTS of SonarQube will provide this information out of the box in the web application. Not only will developers see where each bug is, but they’ll be able to display the execution paths (with explanations) that lead to it. This will be a nice improvement to help you fix the leak more easily!

Project Activity Stream

You’re already applying the right process to fix the leak, but sometimes it is hard to know exactly what causes the tiny drops that end up being the leak. The next LTS will keep track of the low-level activities in your project to help you find the source of your leak. For instance, are you facing unexpected new issues in the leak period? You will be able to see that they are due to the activation of a new rule in your quality profile. You want to find which exact commit(s) were not sufficiently tested and caused the quality gate to turn red because of insufficient coverage? You will see commit hashes to more easily link the problem with what happened in the source code repository.

Branching as a first class citizen

While SonarQube handles short-lived (feature) branches through its pull request analysis plugin, it currently does very little for long-lived (maintenance) branches, even though we all know that maintenance is a huge part of software development. The sonar.branch analysis parameter lets you analyze a branch alongside the trunk version of the code, but under the hood SonarQube treats the branch as a separate, completely unrelated project: configuration isn’t shared, metrics are artificially doubled (for instance the number of lines of code), issues are duplicated in each copy of the code with no link between them, and it’s impossible to know at what point in time a maintenance branch diverged from the main one. In the end, you manage the branch as a totally different project, even though it is really the same application.
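
For reference, a branch analysis with the current parameter looks roughly like the command below (assuming the SonarQube Scanner command line; the project key and branch name are made up for illustration). Every distinct value of sonar.branch effectively produces one of those separate, unrelated projects:

sonar-scanner -Dsonar.projectKey=my-app -Dsonar.branch=release-1.0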

The next LTS will address all those issues, making it simple to create maintenance branches on existing projects to track activity on the branches and make sure that even in branches, there’s no leak on the new code.

See what’s important to you!

User-needs oriented spaces

In the early days, SonarQube offered the possibility to inject and display any kind of information, mostly thanks to customizable dashboards and widgets. This led to widespread adoption, but at the cost of SonarQube being seen as a multi-purpose aggregation and reporting tool: one plugin would add information from a bug tracking system, another one would add documentation information, … The consequence was that the global and project dashboards became a crazy quilt of both useless and useful information, with everything mixed in together in a big mess.

In the 5.x series, project dashboards were replaced by hardcoded pages dedicated to the use cases SonarQube is meant for: seeing the overall quality of a project on its home page, quickly identifying whether the leak is fixed and the reasons why it might not be, and digging into the details to learn more about what’s going wrong. Following the same logic, the next LTS of SonarQube will get rid of global dashboards and widgets and provide pages designed to answer the needs of developers, technical leaders, project managers and executives – all this out of the box, without having to wonder what to configure.

Powerful project exploration with tags

When focusing on a given project, SonarQube offers everything you need to both get the big picture and dig into the details. But when it comes to exploring the whole set of projects available on a SonarQube instance, the only entry point is the ageing “Measures” page. This page currently goes into too much detail (allowing you to query for files, for instance), with difficult-to-use filtering criteria.

The next LTS will replace this page with a brand-new “Projects” page to query projects using advanced filtering similar to what’s on the Issues page. Ultimately, it will support tags on projects. It should help answer questions like: what’s the distribution of “strategic” projects regarding security and reliability ratings? how do “offshore” projects perform in terms of maintainability?

Always up-to-date portfolios

The Governance product allows you to manage application portfolios, usually by mapping the organisational structure of a company. The executive-oriented high level indicators produced by Governance are currently updated once in a while, when a refresh is triggered by some external system (usually a CI job), independent of project analyses. The consequence is that, depending on the frequency of this externally-triggered refresh task, those high-level indicators are imprecisely synchronized with the current status of the relevant projects.

The version of Governance compatible with the next LTS will get rid of the need to trigger this refresh, and update portfolio indicators as soon as one of the underlying projects has been updated. This way, there is no need to set up an external process to trigger portfolio calculation, and no wondering if what you are seeing in SonarQube is up to date or not.

Excellent support of huge instances

Horizontal scalability

One of the targets of the 5.x series was making sure SonarQube would scale vertically to house more projects on a single instance if given more space, more CPU, and more RAM. This was achieved thanks to the architectural changes which led to removing the DB connection from the Scanner side, and to adding Elasticsearch in front of the database. But vertical scalability necessarily has limits – namely those of the underlying hardware.

The next LTS will allow you to deploy SonarQube as a cluster of SonarQube nodes. You’ll be able to configure each node for one or more components of SonarQube (web server, compute engine and Elasticsearch node), based on your load. The first instance to benefit from this capability will be SonarQube.com, the SonarQube-based service operated by SonarSource.

Organizations

When talking about large instances, one topic that often comes up is how to efficiently and correctly handle permissions for large numbers of users and projects. Take the example of an IT department serving several independent business units: the business units might not share the same quality profiles (because they work with different technologies), and each one probably wants to define its own user groups or make specific configurations to suit its needs. There’s currently no good way to manage this scenario, but in the next LTS, organizations will provide a way to define umbrellas that isolate sets of users and projects to achieve these goals. As with the ability to set up a cluster, SonarQube.com will be the first instance to benefit, so that users can group their projects together and customize settings or quality profiles for them.

Webhooks for DevOps

Webhooks aren’t only for big instances, but they are aimed at the DevOps teams who operate complex ALM setups: they will increase your ability to integrate SonarQube with your existing infrastructure. For instance, freshly built binaries shouldn’t be deployed to production if they don’t pass the quality gate, right? With webhooks, you’ll be able to have SonarQube notify the build system of a project’s quality gate status so it can cancel or continue the delivery pipeline as appropriate.
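
As a rough sketch of how a build system could consume such a notification, here is a minimal Java HTTP endpoint. The URL path, the port, and the assumption that the payload is JSON carrying a quality gate status field are all illustrative; the real payload format will be whatever the webhooks feature ends up defining.

import com.sun.net.httpserver.HttpServer;

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class QualityGateWebhook {
    public static void main(String[] args) throws Exception {
        // Sketch only: assumes SonarQube will POST a JSON payload containing the
        // project's quality gate status; the path and port are arbitrary choices.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/sonarqube-webhook", exchange -> {
            String payload;
            try (InputStream in = exchange.getRequestBody()) {
                payload = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
            // Naive check on the assumed payload; a real receiver would parse the JSON.
            if (payload.contains("\"status\":\"ERROR\"")) {
                System.out.println("Quality gate failed: cancel the delivery pipeline.");
            } else {
                System.out.println("Quality gate passed: let the pipeline continue.");
            }
            exchange.sendResponseHeaders(204, -1); // acknowledge, no response body
            exchange.close();
        });
        server.start();
    }
}

In a real pipeline, the failing branch would of course fail the deployment job rather than just print a message.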

Target is mid-2017!

That’s all folks! The estimated time of arrival for the next SonarQube 6.x LTS is mid-2017. Expect other small but useful features to make their way in alongside those big themes!

Categories: Open Source

SonarQube Embraces the .NET Ecosystem

Sonar - Fri, 10/28/2016 - 15:05

In the last couple of months, we have worked on further improving our already-good support for the .NET ecosystem. In this blog post, I’ll summarize the changes and the product updates you’re about to see.

C# plugin version 5.4

We moved all functionality previously based on our own tokenizer/parser over to Roslyn. This lets us do colorization more accurately and will allow future improvements with less effort. Also, we’re happy to announce the following new features:

  • Added symbol reference highlighting, which has been available for Java source code for a long time.
  • Improved issue reporting with exact issue locations.
  • Added the missing complexity metrics: “complexity in classes” and “complexity in functions”.
  • Finally, we also updated the rule engine (C# analyzer) to the latest version, so you can benefit from the rules already available through SonarLint for Visual Studio.

With these changes you should have the same great user experience in SonarQube for C# that is already available for Java.

VB.NET plugin version 3.0

The VB.NET plugin 2.4 also relied on our own parser implementation, which meant that it didn’t support the VB.NET language features added by the Roslyn team, such as string interpolation and null-conditional operators. This deficit resulted in parsing errors on all new constructs, and on some existing ones too, such as async/await and labels followed by statements on the same line. The obvious solution to all these problems was to use Roslyn internally. Over the last couple of months we made the necessary changes, and the VB.NET plugin now uses the same architecture as the C# plugin. Beyond eliminating the parsing errors, this has many additional benefits, enabling the following new features in this version of the VB.NET plugin:

  • Exact issue location
  • Symbol reference highlighting
  • Colorization based on Roslyn
  • Copy-paste detection based on Roslyn
  • Computation of the missing complexity metrics
  • Support for all the coverage and testing tools already available for C#

Additionally, we removed the dependency between the VB.NET and C# plugins, so if you only do VB.NET development, you don’t have to install the C# plugin any more.

While we were at it, we added a few useful new rules to the plugin: S1764, S1871, S1656, and S1862. These rules even turned up an issue in Roslyn itself.

Scanner for MSBuild version 2.2

Some of the features mentioned above couldn’t be added just by modifying the plugins. We had to improve the Scanner for MSBuild to make the changes possible. At the same time, we fixed many of the small annoyances and a few bugs. Finally, we upgraded the embedded SonarQube Scanner to the latest version, 2.8, so you’ll benefit from all changes made there too (v2.7 changelog, v2.8 changelog).

Additionally, when you use MSBuild14 to build your solution, we no longer need to compute metrics, copy-paste token information, code colorization information, etc. in the Scanner for MSBuild “end step”, so you’ll see a performance improvement there. These computations were moved to the build phase where they can be done more efficiently, so that step will be a little slower, but the overall performance should still be better.
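
For context, a Scanner for MSBuild analysis is a three-command sequence, and the “end step” mentioned above is the last of them (the project key, name and version below are placeholders):

MSBuild.SonarQube.Runner.exe begin /k:"my-project" /n:"My Project" /v:"1.0"
msbuild /t:Rebuild
MSBuild.SonarQube.Runner.exe end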

FxCop plugin version 1.0

A final change worth mentioning is that we extracted FxCop analysis from the C# plugin into a dedicated community plugin. This move seems to align with Microsoft’s own direction: FxCop is no longer being developed, and its replacement will come in the form of Roslyn analyzers.

Note that we not only extracted the functionality into a dedicated plugin, but also fixed a problem with issues being reported on excluded files (see here).

Summary

That’s it: huge architectural changes and many new features, all driven by our main goal of supporting .NET languages to the same extent as we support Java, JavaScript, and C/C++.

Categories: Open Source

SonarQube 6.1 in Screenshots

Sonar - Tue, 10/25/2016 - 14:40

The SonarSource team is proud to announce the release of SonarQube 6.1, which brings an improved interface and the first baby steps toward SonarQube clusters.

  • More Actionable Project Page
  • Redesigned Settings Pages
  • First Steps Toward Clustering

More Actionable Project Page

SonarQube 6.1 enhances the project front page to make duplications in the leak period useful and actionable.

Previously, we only tracked the change in the duplication percentage against the global code base. So a very large project with only 100 new lines – all of them duplicated – still showed a very small duplication percentage in the leak period. In other words, the true magnitude of new duplications was lost in the crowd. Now we calculate duplication over the code touched in the leak period, so those 100 new duplicated lines get the attention they deserve.
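
To put illustrative numbers on it: in a 1,000,000-line project whose leak period contains 100 new lines, all of them duplicated, the old approach showed a change of only about 0.01 percentage points in the overall duplication figure, whereas the new approach reports 100% duplicated lines on new code, because every one of the 100 lines touched in the leak period is a duplicate.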

Redesigned Settings Pages

The global and project settings pages have been redesigned for better clarity and ease of use in the new version.

Among the improvements the new pages bring is a clearer presentation of just what the default settings are.

First Steps Toward Clustering

There’s not a lot to show here, but it’s still worth mentioning that 6.1 takes the first steps down the road to a fully clusterizable architecture. You can still run everything on a single node if you want, but folks with large instances will be glad to know that we’re on the way to letting them distribute the load. Nothing’s configurable yet, but the planned capabilities are already starting to show up in the System Info portion of the UI.

That’s all, folks!

It’s now time to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

The Tweets You Missed in September

Sonar - Wed, 10/05/2016 - 10:21

Here are the tweets you likely missed last month!

No barriers on https://t.co/DvXKhNM443! Sign up & start analyzing your OSS projects today! https://t.co/0QqXk1EAVO pic.twitter.com/Cl54pquct4

— SonarQube (@SonarQube) September 1, 2016

SonarLint for @VisualStudio 2.7 Released: with 30 new rules targeting https://t.co/nJ3w5PQ9Xy https://t.co/h60GhyaZEp pic.twitter.com/1s9WCTeaax

— SonarLint (@SonarLint) September 23, 2016

SonarQube #JavaScript plugin 2.16 Released : https://t.co/VbXwmrgd5n pic.twitter.com/oNLvSvHBiX

— SonarQube (@SonarQube) September 9, 2016

SonarQube ABAP 3.3 Released: Five new rules https://t.co/ucoO6OM0rj @SAPdevs #abap @SAP pic.twitter.com/Xbvcgb3oXF

— SonarQube (@SonarQube) September 7, 2016

SonarQube Scanner for Gradle 2.1 natively supports Android projects, and brings other improvements. https://t.co/xCQ9NHBYUn pic.twitter.com/QHMthAwHBX

— SonarQube (@SonarQube) September 26, 2016

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–”I think I can move this field over there, but I’m not sure”–and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2 && $this->b->isChoice1Valid();
}

Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.

 

===============================================

I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
