
Open Source

Call for Papers is Open for Geneva SonarQube Conference

Sonar - Fri, 07/31/2015 - 16:05

A few weeks ago, I announced a free SonarQube User Conference in Geneva on the 23rd and 24th of September. More than 100 people have already registered.

We want this 2-day conference to be as valuable as possible for participants, and thus want to have a variety of speakers. That’s why we are opening a call for papers, and invite anyone who wants to talk about SonarQube to submit a presentation. The CFP will close on Friday the 14th of August.

To submit a talk, simply send an email to marketing@sonarsource.com with the title, abstract and duration of the presentation. We are eagerly awaiting your proposal!

Categories: Open Source

Version R6#14.2

IceScrum - Wed, 07/29/2015 - 20:08
Will be available soon.
Categories: Open Source

New support to start with iceScrum

IceScrum - Mon, 07/27/2015 - 11:57
Hello everybody, In this blog post we are going to present two new features we have made available to help you start using iceScrum the right way and understand it better. Starting from scratch with iceScrum can be difficult if you are not at all familiar with the Scrum methodology. That is why we added…
Categories: Open Source

SonarLint brings SonarQube rules to Visual Studio

Sonar - Fri, 07/24/2015 - 14:33

We are happy to announce the release of SonarLint for Visual Studio version 1.0. SonarLint is a Visual Studio 2015 extension that provides on-the-fly feedback to developers on any new bug or quality issue injected into C# code. The extension is based on and benefits from the .NET Compiler Platform (“Roslyn”) and its code analysis API to provide a fully-integrated user experience in Visual Studio 2015.

Features

There are lots of great rules in the tool. We won’t list all 76 of the rules we’ve implemented so far, but here are a few that show what you can expect from the product:

  • Defensive programming is a good practice, but in some cases you shouldn’t simply check whether an argument is null or not. For example, value types (such as structs) can never be null, so comparing an unconstrained generic type parameter to null may not make sense (S2955): the comparison always returns false for structs.
  • Did you know that a static field of a generic class is not shared among instances of different closed constructed types (S2743)? So how many instances of a static field such as DefaultInnerComparer do you think get created? The field is static, so you might have guessed one, but there is actually one instance per type parameter used to instantiate the class (see the sketch after this list).
  • We’ve been using SonarLint internally for a while now, and are running it against a few open source libraries too. We’ve already found bugs in both Roslyn and NuGet with the “Identical expressions used on both sides of a binary operator” rule (S1764).
  • Also, some cases of null pointer dereferencing can be detected (S1697); an example appears in the sketch after this list.
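
To make these rules concrete, here is a small, self-contained C# sketch. It is illustrative only: the Cache class, its DefaultInnerComparer field, and the helper methods are hypothetical stand-ins for the snippets shown in the original post, not SonarLint’s own code.

using System;

public class Cache<T>
{
    // S2743: a static field of a generic class is not shared across closed
    // constructed types; Cache<int> and Cache<string> each get their own copy.
    private static readonly object DefaultInnerComparer = new object();

    public static void Describe()
    {
        Console.WriteLine("{0}: {1}", typeof(T).Name, DefaultInnerComparer.GetHashCode());
    }
}

public static class Program
{
    // S2955: with an unconstrained type parameter, 'value == null' is always
    // false when T is a value type such as a struct.
    public static bool IsNull<T>(T value)
    {
        return value == null;
    }

    // S1697: when 'str' is null, the second operand dereferences it and throws;
    // the operator was almost certainly meant to be ||.
    public static bool IsNullOrEmpty(string str)
    {
        return str == null && str.Length == 0;
    }

    public static void Main()
    {
        // Prints one line per closed type: two DefaultInnerComparer instances
        // are created here, not one.
        Cache<int>.Describe();
        Cache<string>.Describe();
    }
}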

This is just a small selection of the implemented rules. To find out more, go and check out the product.

How to get it?

SonarLint is packaged in two ways:

  • Visual Studio Extension
  • Nuget package

To install the Visual Studio extension, download the VSIX file from the Visual Studio Gallery. Optionally, you can download the complete source code and build the extension yourself. Oh, and you might have already realized: this product is open source (under the LGPLv3 license), so you can contribute if you’d like.
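
If you go the NuGet route instead, installation is a single command in the Package Manager Console. This is a sketch that assumes the package id is simply SonarLint; check the gallery listing for the exact id:

PM> Install-Package SonarLint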

By the way, the SonarQube C# plugin uses the same code analyzers internally, so if you are already using the SonarQube platform for C# projects, you can now also get the issues directly in the IDE.

What’s next?

In the coming months we’ll increase the number of supported rules, and as with all our SonarQube plugins, we are moving towards bug-detection rules. Alongside this effort, we’re continuously adding code fixes to the Visual Studio extension, so that issues identified by the tool can be fixed automatically inside Visual Studio. In the longer run, we aim to bring the same analyzers we’ve implemented for C# to VB.NET as well. Updates come frequently, so keep an eye on the Visual Studio Extension Update window.

SonarLint for Visual Studio is one piece of a puzzle we’ve been working on since the beginning of the year: providing the .NET community with a tight, easy, and native integration of the SonarQube ecosystem into the Microsoft ALM suite. This 1.0 release of SonarLint is a good opportunity to once again warmly thank Jean-Marc Prieur, Duncan Pocklington and Bogdan Gavril from Microsoft. They have contributed to this effort on a daily basis to make SonarQube a central piece of any .NET development environment.

For more information on the product go to http://vs.sonarlint.org or follow us on Twitter.

Categories: Open Source

Geneva SonarQube Conference – Sept. 23 & 24.

Sonar - Fri, 07/17/2015 - 10:13

We are very happy to announce the first SonarQube European User Conference! This two-day free event will take place in Geneva on September 23rd and 24th, right in the heart of the city at Côté Cour Côté Jardin – Rue de la Chapelle 8, 1207 Geneva.

Following the success of the conferences in the US and Paris earlier this year, the Geneva conference targets a broader and more international audience. It will provide a forum for users to meet, network, and share experiences and best practices around the SonarQube platform. Attendees will also be able to learn directly from the SonarSource Team about the platform roadmap and vision, as well as what was accomplished with 4.x and what is being accomplished with the 5.x series.

The program will highlight some of the exciting topics we’ve talked about these past weeks, like the Water Leak paradigm, the new SonarQube GitHub Plugin, the collaboration with Microsoft to integrate with the .NET ALM Suite, and the coverage of the security and vulnerability markets. We plan to make the conference as exciting as these recent topics and even more so!

There will also be more informal sessions, such as presentations and feedback from customers and partners on large-scale implementations of SonarQube, and hands-on workshops on technical aspects such as platform monitoring and custom rule development.

We hope you will be able to make it, and are looking forward to meeting you there! To register, please click here.

Categories: Open Source

SonarQube Swift Plugin Offers Mature Functionality for Young Language

Sonar - Fri, 07/10/2015 - 10:13

The Swift programming language is only a year old, but the SonarQube plugin for code written in this “green” language has been out for six months and already offers a mature set of features.

The SonarQube Swift plugin is remarkably easy to use. All you need to do is specify the name of the project and the folder containing the source files. As the analysis output, you get a wealth of metrics (lines of code, complexity, etc.), code duplication detection, and of course the most important and interesting thing: the issues raised by your code.
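
For example, with the standalone analyzer this boils down to a small sonar-project.properties file. The sketch below uses the standard scanner property names; the key, project name, and source folder are placeholders for your own project:

# sonar-project.properties
sonar.projectKey=org.example:my-swift-app
sonar.projectName=My Swift App
sonar.projectVersion=1.0
sonar.sources=Sources
sonar.language=swift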

The Swift language introduces a lot of new features, some of which developers have long been waiting for (e.g. easy-to-use optional values), while others are a bit more controversial (operator overloading, custom operators). Love the new features or hate them, no developer is indifferent.

Because the Swift language is so new, our team has made an effort to create a solid set of rules to help developers take proper advantage of its unique features. For instance, for those who are used to ending each switch case with a break, we have this rule: “break” should be the only statement in a “case”. For those who are addicted to custom operators, we have rules limiting the risks of using this feature.

And of course, the Swift plugin provides the standard types of rules: naming convention rules for all the usual categories, a set of rules detecting overly complex code, and other highly useful bug detection rules.

The Swift language is developing rapidly, with new versions bringing new features and syntax on a regular basis. At the recent WWDC 2015, Swift 2.0 was announced: it introduces error-handling mechanisms, defer statements, guard statements, and a lot of other things, all of which are already supported by version 1.4 of the Swift plugin!

So if you are interested in developing high quality Swift code quickly, take a look at Nemo to see what the SonarQube Swift plugin offers, and then try it out for yourself.

Categories: Open Source

GitHub pull request analysis helps fix the leak

Sonar - Wed, 07/08/2015 - 12:15

If you follow SonarSource, you are probably aware of a simple yet powerful paradigm that we use internally: the water leak concept. This is how we have worked on a daily basis at SonarSource for a couple of years now, using various features of SonarQube such as “New Issues” notifications, the “since previous version” differential period, and quality gates. These features allow us to make sure that no technical debt is introduced in new code. More recently, we have developed a brand new plugin to go even further in this direction: the SonarQube GitHub Plugin.

Analysing GitHub pull requests to detect new issues

At SonarSource, we use GitHub to manage our codebase. Every bug fix, improvement, and new feature is developed in a Git branch and managed through a pull request on GitHub. Each pull request must be reviewed by someone else on the team before it can be merged into the master branch. Previously, it was only after the merge and the next analysis (every master branch is analysed on our internal SonarQube instance several times a day) that SonarQube feedback was available, possibly leading to another pull request-review cycle. “Wouldn’t it be great” we thought, “if the pull request could be reviewed not only by a teammate, but also by SonarQube itself before being merged?” That way, developers would have the opportunity to fix potential issues before they could be injected into the master branch (and reported on the SonarQube server).

This is what we achieved with the new SonarQube GitHub Plugin. Basically, every time a pull request is submitted by a member of the team, the continuous integration system launches a SonarQube preview analysis with the parameters that activate the GitHub plugin (a sample invocation is sketched after this list), so that:

  1. When the SonarQube analysis starts, the GitHub plugin updates the status of the pull request to mention that there’s a pending analysis
  2. Then SonarQube executes all the required language plugins
  3. And at the end, the GitHub plugin:
    • adds an inline comment for each new issue,
    • adds a global comment with a summary of the analysis,
    • and updates the status of the pull request, setting it to “failed” if at least one new critical or blocker issue was found.
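
For illustration, the CI job might trigger such a preview analysis with a command along these lines. This is a sketch, not a definitive setup: the sonar.github.* property names are taken from the plugin’s documentation, while the repository, pull request number, and OAuth token are placeholders:

mvn sonar:sonar \
  -Dsonar.analysis.mode=preview \
  -Dsonar.github.pullRequest=$PULL_REQUEST_NUMBER \
  -Dsonar.github.repository=myOrganisation/myProject \
  -Dsonar.github.oauth=$GITHUB_OAUTH_TOKEN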

Here’s what such a pull request looks like:

Pull Request Analysis with SonarQube GitHub Plugin

Thanks to the GitHub plugin, developers get quick feedback as a natural, integrated part of their normal workflow. When a GitHub analysis shows new issues, developers can choose to fix the issues and push a new commit – thus launching a new SonarQube analysis. But in the end, it is up to the developer whether or not to merge the branch into the master, whatever the status of the pull request after the analysis. The SonarQube GitHub plugin provides feedback, but the power remains where it belongs – in the hands of the developers.

What’s next?

Now that the integration with GitHub has proven to be really useful, we feel that building a similar plugin for Atlassian Stash would be valuable, and writing it should be quite straightforward.

Also, analysing pull requests on GitHub is a great step forward, because it gives early feedback on incoming technical debt. But obviously, developers would like to have this feedback even earlier: in their IDEs. This is why in the upcoming months, we will actively work on the Eclipse and IntelliJ plugins to make sure they allow developers to efficiently fix issues before commit and adopt the “water leak” approach wholesale. To achieve this target, we’ll update the plugins to run SonarQube analyses in the blink of an eye for instantaneous feedback on the code you are developing.


Categories: Open Source

Water Leak Changes the Game for Technical Debt Management

Sonar - Fri, 07/03/2015 - 09:07

A few months ago, at the end of a customer presentation about “The Code Quality Paradigm Change”, I was approached by an attendee who said, “I have been following SonarQube & SonarSource for the last 4-5 years and I am wondering how I could have missed the stuff you just presented. Where do you publish this kind of information?”. I told him that it was all on our blog and wiki and that I would send him the links. Well…

When I checked a few days later, I realized that actually there wasn’t much available, only bits and pieces such as the 2011 announcement of SonarQube 2.5, the 2013 discussion of how to use the differential dashboard, the 2013 whitepaper on Continuous Inspection, and last year’s announcement of SonarQube 4.3. Well (again)… for a concept that is at the center of the SonarQube 4.x series, that we have presented to every customer and at every conference in the last 3 years, and that we use on a daily basis to support our development at SonarSource, those few mentions aren’t much.

Let me elaborate on this and explain how you can sustainably manage your technical debt, with no pain, no added complexity, no endless battles, and pretty much no cost. Does it sound appealing? Let’s go!

First, why do we need a new paradigm? We need a new paradigm to manage code quality/technical debt because the traditional approach is too painful and has generally failed for many years now. What I call a traditional approach is one where code quality is periodically reviewed by a QA team or similar, typically just before release, resulting in findings the developers should act on before releasing. This approach might work in the short term, especially with strong management backing, but it consistently fails in the mid to long run, because:

  • The code review comes too late in the process, and no stakeholder is keen to get the problems fixed; everyone wants the new version to ship
  • Developers typically push back because an external team makes recommendations on their code without knowing the context of the project. And by the way, the code is already obsolete
  • There is a clear lack of ownership for code quality with this approach. Who owns quality? No one!
  • What gets reviewed is the entire application before it goes to production, and it is obviously not possible to apply the same criteria to all applications. A negotiation will happen for each project, which will drain all credibility from the process

All of this makes it pretty much impossible to enforce a Quality Gate, i.e. a list of criteria for a go/no-go decision to ship an application to production.

For someone trying to improve quality with such an approach, it translates into something like: the total amount of our technical debt is depressing, can we have a budget to fix it? After asking “why is it wrong in the first place?”, the business might say yes. But then there’s another problem: how to fix technical debt without injecting functional regressions? This is really no fun…

At SonarSource, we think several parameters in this equation must be changed:

  • First and most importantly, the developers should own quality and be ultimately responsible for it
  • The feedback loop should be much shorter and developers should be notified of quality defects as soon as they are injected
  • The Quality Gate should be unified for all applications
  • The cost of implementing such an approach should be insignificant, and should not require the validation of someone outside the team

Even changing those parameters, code review is still required, but I believe it can and should be more fun! How do we achieve this?

water leak

When you have a water leak at home, what do you do first? Plug the leak, or mop the floor? The answer is very simple and intuitive: you plug the leak. Why? Because you know that any other action will be useless, and that it is only a matter of time before the same amount of water is back on the floor.

So why do we tend to behave differently with code quality? When we analyze an application with SonarQube and find out that it has a lot of technical debt, generally the first thing we want to do is start mopping/remediating – either that or put together a remediation plan. Why is it that we don’t apply the simple logic we use at home to the way we manage our code quality? I don’t know why, but I do know that the remediation-first approach is terribly wrong and leads to all the challenges enumerated above.

Fixing the leak means putting the focus on the “new” code, i.e. the code that was added or changed since the last release. Things then get much easier:

  • The Quality Gate can be run every day, and passing it is achievable. There is no surprise at release time
  • It is pretty difficult for a developer to push back on problems he introduced the previous day. And by the way, I think he will generally be very happy for the chance to fix the problems while the code is still fresh
  • There is a clear ownership of code quality
  • The criteria for go/no-go are consistent across applications, and are shared among teams. Indeed, new code is new code, regardless of which application it is written in
  • The cost is insignificant because it is part of the development process

As a bonus, the code that gets changed the most has the highest maintainability, and the code that does not get changed has the lowest, which makes a lot of sense.

I am sure you are wondering: and then what? Then nothing! Because of the nature of software and the fact that we keep changing it (SonarSource customers generally report that 20% of their code base changes each year), the debt will naturally be reduced. And where it isn’t is where it doesn’t need to be.

Categories: Open Source

New iceScrum products available!

IceScrum - Thu, 06/18/2015 - 17:56
Hello everybody, We are back this week to announce the launch of not one, but two new iceScrum products! iceScrum users asked us for improvements regarding our iceScrum Pro Standalone offers, so that we can be more flexible and adapt to our customers’ needs. That is why we created those two…
Categories: Open Source

Quality Gates Work – If You Let Them

Sonar - Thu, 06/11/2015 - 21:20

Some people see rules – standards – requirements – as a way to hem in the unruly, limit bad behavior, and restrict rowdiness. But others see reasonable rules as a framework within which to excel, a scaffolding for striving, an armature upon which to build excellence.

Shortly after its inception, the C-Family plugin (C, C++ and Objective-C) had 90%+ test coverage. The developers were proud of that, but the Quality Gate requirement was only 80% on new code, so week after week, version by version, coverage slowly dropped. The project still passed the Quality Gate with flying colors each time, but coverage was moving in the wrong direction. In December, overall coverage dropped into the 80s with the version 3.3 release: 89.7%, to be exact.

In January, the developers on the project decided they’d had enough, and asked for a different Quality Gate. A stricter one. They asked to be held to a standard of 90% coverage on new code.

Generally at SonarSource, we advocate holding everyone to the same standards – no special Quality Profiles for legacy projects, for instance. But this request was swiftly granted, and version 3.4, released in February, had 90.1% coverage overall, not just on new code. Today the project’s overall coverage stands at 92.7%.

Language Team Technical Lead Evgeny Mandrikov said the new standard didn’t just inspire the team to write new tests. The need to make code more testable “motivated us to refactor APIs, and led to a decrease of technical debt. Now coverage goes constantly up.”

Not only that, but since the team had made the request publicly, others quickly jumped on the bandwagon. Six language plugins are now assigned to that quality gate. The standard is set for coverage on new code, but most of the projects meet it for overall coverage, and the ones that don’t are working on it.

What I’ve seen in my career is that good developers hold themselves to high standards, and believe that it’s reasonable for their managers to do the same. Quality Gates allow us to set – and meet – those standards publicly. Quality Gates and quality statistics confer bragging rights, and set up healthy competition among teams.

Looking back, I can’t think of a better way for the C-Family team to have kicked off the new year.

Categories: Open Source

[UPDATE] Version R6#14.2

IceScrum - Thu, 06/11/2015 - 12:47
Hello everybody, Here comes a new version of iceScrum and iceScrum Pro! This version brings many changes, and we hope you will like them! This post lists all the changes brought by iceScrum R6#14. It is updated when minor versions are released to fix bugs and add small improvements on top of this major version.…
Categories: Open Source

New team & project management

IceScrum - Tue, 06/09/2015 - 10:29
Hello Everybody, We believe that teams are the cornerstone of agile project management and that they deserve more power in iceScrum. Cloud users have had this for a long time, but it is now available for everybody: a team can work on several projects. This change has major consequences: – teams now have a…
Categories: Open Source

Don’t Cross the Beams: Avoiding Interference Between Horizontal and Vertical Refactorings

JUnit Max - Kent Beck - Tue, 09/20/2011 - 03:32

As many of my pair programming partners could tell you, I have the annoying habit of saying “Stop thinking” during refactoring. I’ve always known this isn’t exactly what I meant, because I can’t mean it literally, but I’ve never had a better explanation of what I meant until now. So, apologies y’all, here’s what I wished I had said.

One of the challenges of refactoring is succession: how to slice the work of a refactoring into safe steps, and how to order those steps. The two factors complicating succession in refactoring are efficiency and uncertainty. When working in safe steps, it’s imperative to take those steps as quickly as possible to achieve overall efficiency. At the same time, refactorings are frequently uncertain (“I think I can move this field over there, but I’m not sure”) and going down a dead end at high speed is not actually efficient.

Inexperienced responsive designers can get into a state where they try to move quickly on refactorings that are unlikely to work out, get burned, then move slowly and cautiously on refactorings that are sure to pay off. Sometimes they will make real progress, but then try a risky refactoring before reaching a stable-but-incomplete state. Thinking of refactorings as horizontal and vertical is a heuristic for turning this situation around: eliminating risk quickly and exploiting proven opportunities efficiently.

The other day I was in the middle of a big refactoring when I recognized the difference between horizontal and vertical refactorings and realized that the code we were working on would make a good example (good examples are by far the hardest part of explaining design). The code in question selected a subset of menu items for inclusion in a user interface. The original code was ten if statements in a row. Some of the conditions were similar, but none were identical. Our first step was to extract 10 Choice objects, each of which had an isValid method and a widget method.

before:

if (...choice 1 valid...) {
  add($widget1);
}
if (...choice 2 valid...) {
  add($widget2);
}
... 

after:

$choices = array(new Choice1(), new Choice2(), ...);
foreach ($choices as $each)
  if ($each->isValid())
    add($each->widget());

After we had done this, we noticed that the isValid methods had feature envy. Each of them extracted data from an A and a B and used that data to determine whether the choice would be added.

Choice pulls data from A and B

Choice1 isValid() {
  $data1 = $this->a->data1;
  $data2 = $this->a->data2;
  $data3 = $this->a->b->data3;
  $data4 = $this->a->b->data4;
  return ...some expression of data1-4...;
}

We wanted to move the logic to the data.

Choice calls A which calls B

Choice1 isValid() {
  return $this->a->isChoice1Valid();
}
A isChoice1Valid() {
  return ...some expression of data1-2 && $this->b->isChoice1Valid();
}

Succession

Which Choice should we work on first? Should we move logic to A first and then B, or B first and then A? How much do we work on one Choice before moving to the next? What about other refactoring opportunities we see as we go along? These are the kinds of succession questions that make refactoring an art.

Since we only suspected that it would be possible to move the isValid methods to A, it didn’t matter much which Choice we started with. The first question to answer was, “Can we move logic to A?” We picked Choice1. The refactoring worked, so we had code that looked like:

Choice calls A which gets data from B

A isChoice1Valid() {
  $data3 = $this->b->data3;
  $data4 = $this->b->data4;
  return ...some expression of data1-4...;
}

Again we had a succession decision. Do we move part of the logic along to B or do we go on to the next Choice? I pushed for a change of direction, to go on to the next Choice. I had a couple of reasons:

  • The code was already clearly cleaner and I wanted to realize that value if possible by refactoring all of the Choices.
  • One of the other Choices might still be a problem, and the further we went with our current line of refactoring, the more time we would waste if we hit a dead end and had to backtrack.

The first refactoring (move a method to A) is a vertical refactoring. I think of it as moving a method or field up or down the call stack, hence the “vertical” tag. The phase of refactoring where we repeat our success with a bunch of siblings is horizontal, by contrast, because there is no clear ordering between, in our case, the different Choices.

Because we knew that moving the method into A could work, while we were refactoring the other Choices we paid attention to optimization. We tried to come up with creative ways to accomplish the same refactoring safely, but with fewer steps by composing various smaller refactorings in different ways. By putting our heads down and getting through the other nine Choices, we got them done quickly and validated that none of them contained hidden complexities that would invalidate our plan.

Doing the same thing ten times in a row is boring. Halfway through, my partner started getting good ideas about how to move some of the functionality to B. That’s when I told him to stop thinking. I didn’t actually want him to stop thinking; I just wanted him to stay focused on what we were doing. There’s no sense pounding a piton in halfway and then stopping because you see where you want to pound the next one in.

As it turned out, by the time we were done moving logic to A, we were tired enough that resting was our most productive activity. However, we had code in a consistent state (all the implementations of isValid simply delegated to A) and we knew exactly what we wanted to do next.

Conclusion

Not all refactorings require horizontal phases. If you have one big ugly method, you create a Method Object for it, and break the method into tidy shiny pieces, you may be working vertically the whole time. However, when you have multiple callers to refactor or multiple implementors to refactor, it’s time to begin paying attention to going back and forth between vertical and horizontal, keeping the two separate, and staying aware of how deep to push the vertical refactorings.

Keeping an index card next to my computer helps me stay focused. When I see the opportunity for a vertical refactoring in the midst of a horizontal phase (or vice versa) I jot the idea down on the card and get back to what I was doing. This allows me to efficiently finish one job before moving onto the next, while at the same time not losing any good ideas. At its best, this process feels like meditation, where you stay aware of your breath and don’t get caught in the spiral of your own thoughts.

Categories: Open Source

My Ideal Job Description

JUnit Max - Kent Beck - Mon, 08/29/2011 - 21:30

September 2014

To Whom It May Concern,

I am writing this letter of recommendation on behalf of Kent Beck. He has been here for three years in a complicated role and we have been satisfied with his performance, so I will take a moment to describe what he has done and what he has done for us.

The basic constraint we faced three years ago was that exploding business opportunities demanded more engineering capacity than we could easily provide through hiring. We brought Kent on board with the premise that he would help our existing and new engineers be more effective as a team. He has enhanced our ability to grow and prosper while hiring at a sane pace.

Kent began by working on product features. This established credibility with the engineers and gave him a solid understanding of our codebase. He wasn’t able to work independently on our most complicated code, but he found small features that contributed and worked with teams on bigger features. He has continued working on features off and on the whole time he has been here.

Over time he shifted much of his programming to tool building. The tools he started have become an integral part of how we work. We also grew comfortable moving him to “hot spot” teams that had performance, reliability, or teamwork problems. He was generally successful at helping these teams get back on track.

At first we weren’t sure about his work-from-home policy. In the end it clearly kept him from getting as much done as he would have had he been on site every day, but it wasn’t an insurmountable problem. He visited HQ frequently enough to maintain key relationships and meet new engineers.

When he asked that research & publication on software design be part of his official duties, we were frankly skeptical. His research has turned into one of the most valuable of his activities. Our engineers have had early access to revolutionary design ideas and design-savvy recruits have been attracted by our public sponsorship of Kent’s blog, video series, and recently-published book. His research also drove much of the tool building I mentioned earlier.

Kent is not always the easiest employee to manage. His short attention span means that sometimes you will need to remind him to finish tasks. If he suddenly stops communicating, he has almost certainly gone down a rat hole and would benefit from a firm reminder to stay connected with the goals of the company. His compensation didn’t really fit into our existing structure, but he was flexible about making that part of the relationship work.

The biggest impact of Kent’s presence has been his personal relationships with individual engineers. Kent has spent thousands of hours pair programming remotely. Engineers he pairs with regularly show a marked improvement in programming skill, engineering intuition, and sometimes interpersonal skills. I am a good example. I came here full of ideas and energy but frustrated that no one would listen to me. From working with Kent I learned leadership skills, patience, and empathy, culminating in my recent promotion to director of development.

I understand Kent’s desire to move on, and I wish him well. If you are building an engineering culture focused on skill, responsibility and accountability, I recommend that you consider him for a position.


===============================================

I used the above as an exercise to help try to understand the connection between what I would like to do and what others might see as valuable. My needs are:

  • Predictability. After 15 years as a consultant, I am willing to trade some freedom for a more predictable employer and income. I don’t mind (actually I prefer) that the work itself be varied, but the stress of variability has been amplified by having two kids in college at the same time (& for several more years).
  • Belonging. I have really appreciated feeling part of a team for the last eight months & didn’t know how much I missed it as a consultant.
  • Purpose. I’ve been working since I was 18 to improve the work of programmers, but I also crave a larger sense of purpose. I’d like to be able to answer the question, “Improved programming toward what social goal?”
Categories: Open Source
