
Open Source

Water Leak Changes the Game for Technical Debt Management

Sonar - Fri, 07/03/2015 - 09:07

A few months ago, at the end of a customer presentation about “The Code Quality Paradigm Change”, I was approached by an attendee who said, “I have been following SonarQube & SonarSource for the last 4-5 years and I am wondering how I could have missed the stuff you just presented. Where do you publish this kind of information?”. I told him that it was all on our blog and wiki and that I would send him the links. Well…

When I checked a few days later, I realized that actually there wasn’t much available, only bits and pieces such as the 2011 announcement of SonarQube 2.5, the 2013 discussion of how to use the differential dashboard, the 2013 whitepaper on Continuous Inspection, and last year’s announcement of SonarQube 4.3. Well (again)… for a concept that is at the center of the SonarQube 4.x series, that we have presented to every customer and at every conference in the last 3 years, and that we use on a daily basis to support our development at SonarSource, those few mentions aren’t much.

Let me elaborate on this and explain how you can sustainably manage your technical debt, with no pain, no added complexity, no endless battles, and pretty much no cost. Does it sound appealing? Let’s go!

First, why do we need a new paradigm? We need a new paradigm to manage code quality/technical debt because the traditional approach is too painful and has generally failed for many years now. What I call a traditional approach is one in which code quality is periodically reviewed by a QA team or similar, typically just before release, producing findings the developers are expected to act on before shipping. This approach might work in the short term, especially with strong management backing, but it consistently fails in the mid to long run, because:

  • The code review comes too late in the process: at that point no stakeholder is keen to get the problems fixed; everyone wants the new version to ship
  • Developers typically push back because an external team, which does not know the context of the project, is making recommendations on their code. And by the time the findings arrive, the code is already obsolete
  • There is a clear lack of ownership for code quality with this approach. Who owns quality? No one!
  • What gets reviewed is the entire application just before it goes to production, and it is obviously not possible to apply the same criteria to every application. A negotiation happens for each project, which drains all credibility from the process

All of this makes it pretty much impossible to enforce a Quality Gate, i.e. a list of criteria for a go/no-go decision to ship an application to production.

For someone trying to improve quality with such an approach, it translates into something like: "The total amount of our technical debt is depressing; can we have a budget to fix it?" After asking "why is it wrong in the first place?", the business might say yes. But then there's another problem: how do you fix technical debt without injecting functional regressions? This is really no fun…

At SonarSource, we think several parameters in this equation must be changed:

  • First and most importantly, the developers should own quality and be ultimately responsible for it
  • The feedback loop should be much shorter and developers should be notified of quality defects as soon as they are injected
  • The Quality Gate should be unified for all applications
  • The cost of implementing such an approach should be insignificant, and should not require the validation of someone outside the team

Even with those parameters changed, code review is still required, but I believe it can and should be more fun! How do we achieve this?

[Image: water leak]

When you have a water leak at home, what do you do first: plug the leak, or mop the floor? The answer is simple and intuitive: you plug the leak. Why? Because you know that any other action will be useless, and that it is only a matter of time before the same amount of water is back on the floor.

So why do we tend to behave differently with code quality? When we analyze an application with SonarQube and find out that it has a lot of technical debt, generally the first thing we want to do is start mopping/remediating – either that or put together a remediation plan. Why is it that we don’t apply the simple logic we use at home to the way we manage our code quality? I don’t know why, but I do know that the remediation-first approach is terribly wrong and leads to all the challenges enumerated above.

Fixing the leak means putting the focus on the “new” code, i.e. the code that was added or changed since the last release. Things then get much easier:

  • The Quality Gate can be run every day, and passing it is achievable. There is no surprise at release time
  • It is pretty difficult for a developer to push back on problems he introduced the previous day. And by the way, I think he will generally be very happy for the chance to fix the problems while the code is still fresh
  • There is a clear ownership of code quality
  • The criteria for go/no-go are consistent across applications, and are shared among teams. Indeed, new code is new code, regardless of the application it is written in
  • The cost is insignificant because it is part of the development process

As a bonus, the code that gets changed the most has the highest maintainability, and the code that does not get changed has the lowest, which makes a lot of sense.

I am sure you are wondering: and then what? Then nothing! Because of the nature of software and the fact that we keep making changes to it (SonarSource customers generally claim that about 20% of their code base gets changed each year), the debt will naturally be reduced. As a rough illustration: if each year's changes touched a random fifth of the code, roughly two thirds of the code base would have passed through the Quality Gate within five years. And where the debt isn't reduced is where it does not need to be.

Categories: Open Source

New iceScrum products available!

IceScrum - Thu, 06/18/2015 - 17:56
Hello everybody, We are back this week to announce the launch of not one, but two new iceScrum products! iceScrum users asked us for improvements to our iceScrum Pro Standalone offers, to make them more flexible and adapt them to our customers' needs. That is why we created those two…
Categories: Open Source

Quality Gates Work – If You Let Them

Sonar - Thu, 06/11/2015 - 21:20

Some people see rules – standards – requirements – as a way to hem in the unruly, limit bad behavior, and restrict rowdiness. But others see reasonable rules as a framework within which to excel, a scaffolding for striving, an armature upon which to build excellence.

Shortly after its inception, the C-Family plugin (C, C++ and Objective-C) had 90%+ test coverage. The developers were proud of that, but the Quality Gate requirement was only 80% on new code, so week after week, version by version, coverage slowly dropped. The project still passed the Quality Gate with flying colors each time, but coverage was moving in the wrong direction. In December, overall coverage dropped into the 80s with the version 3.3 release: 89.7%, to be exact.

In January, the developers on the project decided they’d had enough, and asked for a different Quality Gate. A stricter one. They asked to be held to a standard of 90% coverage on new code.

Generally at SonarSource, we advocate holding everyone to the same standards – no special Quality Profiles for legacy projects, for instance. But this request was swiftly granted, and version 3.4, released in February, had 90.1% coverage overall, not just on new code. Today the project’s overall coverage stands at 92.7%.

Language Team Technical Lead Evgeny Mandrikov said the new standard didn’t just inspire the team to write new tests. The need to make code more testable “motivated us to refactor APIs, and led to a decrease of technical debt. Now coverage goes constantly up.”

Not only that, but since the team had made the request publicly, others quickly jumped on the bandwagon. Six language plugins are now assigned to that quality gate. The standard is set for coverage on new code, but most of the projects meet it for overall coverage, and the ones that don’t are working on it.

What I’ve seen in my career is that good developers hold themselves to high standards, and believe that it’s reasonable for their managers to do the same. Quality Gates allow us to set – and meet – those standards publicly. Quality Gates and quality statistics confer bragging rights, and set up healthy competition among teams.

Looking back, I can’t think of a better way for the C-Family team to have kicked off the new year.

Categories: Open Source

[UPDATE] Version R6#14.1

IceScrum - Thu, 06/11/2015 - 12:47
Hello everybody, Here comes a new version of iceScrum and iceScrum Pro! This version brings many changes; we hope that you will like them! This post lists all the changes brought by iceScrum R6#14. It is updated when minor versions are released to fix bugs and add small improvements on top of this major version.…
Categories: Open Source

New team & project management

IceScrum - Tue, 06/09/2015 - 10:29
Hello Everybody, We believe that teams are the cornerstone of agile project management and that they deserved more power in iceScrum. Cloud users have had this for a long time, but it is now available for everybody: a team can work on several projects. This change has major consequences: – teams now have a…
Categories: Open Source

Welcome to iceScrum.com!

IceScrum - Thu, 06/04/2015 - 12:24
Hello everybody, Our new website is the perfect opportunity to give this blog a new start, so welcome to www.icescrum.com! Until now, and for historical reasons, we had two websites: – www.icescrum.org, dedicated to the open source version of iceScrum, – www.kagilum.com, dedicated to the professional features and services we have been developing for…
Categories: Open Source

The SonarQube COBOL Plugin Tracks Sneaky Bugs in Conditions

Sonar - Thu, 05/21/2015 - 13:28

Not long ago, I wrote that COBOL is not a dead language and that there are still billions of lines of COBOL code in production today. At COBOL’s inception back in 1959, the goal was to provide something close to natural language so that even business analysts could read the code. As a side effect, the language is really, really verbose. Each time a Ruby, Python or Scala developer complains about the verbosity of Java, C# or C++, he should have a look at a COBOL program to see how much worse it could be :). Moreover, since there is no concept of a local variable in COBOL, the ability to factor out common pieces of code into PARAGRAPHS or SECTIONS is limited. In the end, the temptation to duplicate logic is strong. When you combine those two flaws, verbosity and duplicated logic, the consequence is easy to guess: it’s pretty easy in COBOL to inject bugs in conditions.

Let’s take some examples we’ve found in production code:

Inconsistencies between nesting and nested IF statements

In the following piece of code, the condition of the second nested IF statement is always TRUE. Indeed, when starting evaluation of the nested condition, we already know that the value of ZS-LPWN-IDC-WTR-EMP is 'X', so by definition this is TRUE: ZS-LPWN-IDC-WTR-EMP NOT EQUAL 'N' AND 'O'. What was the intent of the developer here? Who knows?
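Since the original screenshot is not reproduced here, here is a minimal sketch of the pattern (identifiers from the text, surrounding structure assumed):

IF ZS-LPWN-IDC-WTR-EMP = 'X'
   ...
   *> ZS-LPWN-IDC-WTR-EMP is already known to be 'X' here, so this
   *> abbreviated condition (NOT EQUAL 'N' AND NOT EQUAL 'O') is always TRUE
   IF ZS-LPWN-IDC-WTR-EMP NOT EQUAL 'N' AND 'O'
      ...
   END-IF
END-IF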

And what about the next one? The second condition in this example is by definition always TRUE, since the nesting ELSE block is executed if and only if KST-RETCODE is not equal to '02' and '13':
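Again as a sketch, with the surrounding structure assumed:

IF KST-RETCODE = '02' OR '13'
   ...
ELSE
   *> This branch runs only when KST-RETCODE is neither '02' nor '13',
   *> so the nested test is always TRUE
   IF KST-RETCODE NOT = '02' AND '13'
      ...
   END-IF
END-IF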

Inconsistencies in the same condition

In the following piece of code, within a single condition, we’re asking ZS-RPACCNT-NB-TRX to be equal to both 1 and 0. Obviously Quantum Theory is not relevant in COBOL, and a data item can’t have two values at the same time.
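A sketch of the shape of that condition:

*> Always FALSE: ZS-RPACCNT-NB-TRX cannot be 1 and 0 at the same time
IF ZS-RPACCNT-NB-TRX = 1 AND ZS-RPACCNT-NB-TRX = 0
   ...
END-IF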

The next example is pretty similar, except that here it is “just” a sub-part of the condition which is always TRUE: (ZS-BLTRS-KY-GRP NOT = 'IH' OR ZS-BLTRS-KY-GRP NOT = 'IN'). We can probably assume that this was not what the developer wanted to code.
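Sketched, with the rest of the condition elided:

*> Whatever value ZS-BLTRS-KY-GRP holds, at least one side of the OR is true,
*> so the parenthesized sub-part is always TRUE
IF ... AND (ZS-BLTRS-KY-GRP NOT = 'IH' OR ZS-BLTRS-KY-GRP NOT = 'IN')
   ...
END-IF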

Inconsistencies due to the format of data items

What’s the issue with the next very basic condition?

ZS-RB-TCM-WD-PDN cannot be greater than 9 since it’s declared as a single-digit number:
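A sketch of the declaration and the condition (surrounding code assumed):

01 ZS-RB-TCM-WD-PDN PIC 9.
...
*> Always FALSE: a PIC 9 item holds a single digit, 0 through 9
IF ZS-RB-TCM-WD-PDN > 9
   ...
END-IF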

With version 2.6 of the COBOL plugin, you can track all these kinds of bugs in your COBOL source code. So let’s start hunting them to make your COBOL application even more reliable!

Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession–how to slice the work of a refactoring into safe steps and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. Working in safe steps it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain–”I think I can move this field over there, but I’m not sure”–and going down a dead-end at high speed is not actually efficient.

Inexperienced responsive designers can get in a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but go try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around–eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2 && $this->b->isChoice1Valid();
}

Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking, I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.

 

===============================================

I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
