
Feed aggregator

Do Great Work & Don’t Ship Defects


Teams are often pressured by business stakeholders to “go faster” and deliver new features quickly at the expense of quality. This pressure leads to technical debt unless the team stands strong. Engineers and developers, you have a responsibility to your teams, your profession, and to yourselves to uphold high standards. Yes, learn and incorporate techniques that enable you to deliver frequently, but also take care to ensure that your code meets or exceeds your definition of done.

Categories: Blogs

Water Leak Changes the Game for Technical Debt Management

Sonar - Fri, 07/03/2015 - 09:07

A few months ago, at the end of a customer presentation about “The Code Quality Paradigm Change”, I was approached by an attendee who said, “I have been following SonarQube & SonarSource for the last 4-5 years and I am wondering how I could have missed the stuff you just presented. Where do you publish this kind of information?”. I told him that it was all on our blog and wiki and that I would send him the links. Well…

When I checked a few days later, I realized that actually there wasn’t much available, only bits and pieces such as the 2011 announcement of SonarQube 2.5, the 2013 discussion of how to use the differential dashboard, the 2013 whitepaper on Continuous Inspection, and last year’s announcement of SonarQube 4.3. Well (again)… for a concept that is at the center of the SonarQube 4.x series, that we have presented to every customer and at every conference in the last 3 years, and that we use on a daily basis to support our development at SonarSource, those few mentions aren’t much.

Let me elaborate on this and explain how you can sustainably manage your technical debt, with no pain, no added complexity, no endless battles, and pretty much no cost. Does it sound appealing? Let’s go!

First, why do we need a new paradigm? We need a new paradigm to manage code quality/technical debt because the traditional approach is too painful, and has generally failed for many years now. What I call a traditional approach is an approach where code quality is periodically reviewed by a QA team or similar, typically just before release, that results in findings the developers should act on before releasing. This approach might work in the short term, especially with strong management backing, but it consistently fails in the mid to long run, because:

  • The code review comes too late in the process, and no stakeholder is keen to get the problems fixed; everyone wants the new version to ship
  • Developers typically push back because an external team makes recommendations on their code, not knowing the context of the project. And by the way the code is obsolete already
  • There is a clear lack of ownership for code quality with this approach. Who owns quality? No one!
  • What gets reviewed is the entire application before it goes to production and it is obviously not possible to apply the same criteria to all applications. A negotiation will happen for each project, which will drain all credibility from the process

All of this makes it pretty much impossible to enforce a Quality Gate, i.e. a list of criteria for a go/no-go decision to ship an application to production.

For someone trying to improve quality with such an approach, it translates into something like: the total amount of our technical debt is depressing, can we have a budget to fix it? After asking “why is it wrong in the first place?”, the business might say yes. But then there’s another problem: how to fix technical debt without injecting functional regressions? This is really no fun…

At SonarSource, we think several parameters in this equation must be changed:

  • First and most importantly, the developers should own quality and be ultimately responsible for it
  • The feedback loop should be much shorter and developers should be notified of quality defects as soon as they are injected
  • The Quality Gate should be unified for all applications
  • The cost of implementing such an approach should be insignificant, and should not require the validation of someone outside the team

Even with those parameters changed, code review is still required, but I believe it can and should be more fun! How do we achieve this?


When you have a water leak at home, what do you do first? Plug the leak, or mop the floor? The answer is very simple and intuitive: you plug the leak. Why? Because you know that any other action will be useless and that it is only a matter of time before the same amount of water will be back on the floor.

So why do we tend to behave differently with code quality? When we analyze an application with SonarQube and find out that it has a lot of technical debt, generally the first thing we want to do is start mopping/remediating – either that or put together a remediation plan. Why is it that we don’t apply the simple logic we use at home to the way we manage our code quality? I don’t know why, but I do know that the remediation-first approach is terribly wrong and leads to all the challenges enumerated above.

Fixing the leak means putting the focus on the “new” code, i.e. the code that was added or changed since the last release. Things then get much easier:

  • The Quality Gate can be run every day, and passing it is achievable. There is no surprise at release time
  • It is pretty difficult for a developer to push back on problems he introduced the previous day. And by the way, I think he will generally be very happy for the chance to fix the problems while the code is still fresh
  • There is a clear ownership of code quality
  • The criteria for go/no-go are consistent across applications, and are shared among teams. Indeed, new code is new code, regardless of which application it is written in
  • The cost is insignificant because it is part of the development process

As a bonus, the code that gets changed the most has the highest maintainability, and the code that does not get changed has the lowest, which makes a lot of sense.

I am sure you are wondering: and then what? Then nothing! Because of the nature of software and the fact that we keep making changes to it (SonarSource customers generally claim that 20% of their code base gets changed each year), the debt will naturally be reduced. And where it isn’t is where it does not need to be.

Categories: Open Source

R: Calculating the difference between ordered factor variables

Mark Needham - Fri, 07/03/2015 - 00:55

In my continued exploration of Wimbledon data I wanted to work out whether a player had done as well as their seeding suggested they should.

I therefore wanted to work out the difference between the round they reached and the round they were expected to reach. A ’round’ in the dataset is an ordered factor variable.

These are all the possible values:

rounds = c("Did not enter", "Round of 128", "Round of 64", "Round of 32", "Round of 16", "Quarter-Finals", "Semi-Finals", "Finals", "Winner")

And if we want to factorise a couple of strings into this factor we would do it like this:

round = factor("Finals", levels = rounds, ordered = TRUE)
expected = factor("Winner", levels = rounds, ordered = TRUE)  
 
> round
[1] Finals
9 Levels: Did not enter < Round of 128 < Round of 64 < Round of 32 < Round of 16 < Quarter-Finals < ... < Winner
 
> expected
[1] Winner
9 Levels: Did not enter < Round of 128 < Round of 64 < Round of 32 < Round of 16 < Quarter-Finals < ... < Winner

In this case the difference between the actual round and expected round should be -1 – the player was expected to win the tournament but lost in the final. We can calculate that difference by calling the unclass function on each variable:

 
> unclass(round) - unclass(expected)
[1] -1
attr(,"levels")
[1] "Did not enter"  "Round of 128"   "Round of 64"    "Round of 32"    "Round of 16"    "Quarter-Finals"
[7] "Semi-Finals"    "Finals"         "Winner"

That still seems to have some remnants of the factor variable so to get rid of that we can cast it to a numeric value:

> as.numeric(unclass(round) - unclass(expected))
[1] -1

And that’s it! We can now go and apply this calculation to all seeds to see how they got on.
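
As a rough sketch of that last step (the seeds data frame and its column names below are invented for illustration, not taken from the actual Wimbledon dataset), the same calculation can be applied across a whole data frame with dplyr:

library(dplyr)
 
# hypothetical data frame of seeded players; the real dataset's columns may differ
seeds = data.frame(name = c("Player A", "Player B"),
                   round = factor(c("Finals", "Semi-Finals"), levels = rounds, ordered = TRUE),
                   expected = factor(c("Winner", "Finals"), levels = rounds, ordered = TRUE))
 
# difference between the round reached and the round expected, per player
seeds = seeds %>% mutate(difference = as.numeric(unclass(round) - unclass(expected)))
 
> seeds$difference
[1] -1 -1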

Categories: Blogs

NDC talk on SOLID in slices not layers video online

Jimmy Bogard - Thu, 07/02/2015 - 20:21

The talk I gave at NDC Oslo 2015 is up on SOLID architecture in slices not layers:

https://vimeo.com/131633177

In it, I talk about flipping this style of architecture:

To one that focuses on vertical deliverable features:

Enjoy!


Categories: Blogs

Agency Product Owner Training Starts in August

Johanna Rothman - Thu, 07/02/2015 - 16:42

We have an interesting problem in some projects. Agencies, consulting organizations, and consultants help their clients understand what the client needs in a product. Often, these people and their organizations then implement what the client and agency develop as ideas.

As the project continues, the agency manager continues to help the client identify and update the requirements. Because this is a limited-time contract, the client doesn’t have a product manager or product owner. The agency person—often the owner—acts as a product owner.

This is why Marcus Blankenship and I have teamed up to offer Product Owner Training for Agencies.

If you are an agency, a consultant, or otherwise outside your client’s organization and you act as a product owner, this training is for you. It’s based on my workshop Agile and Lean Product Ownership. We won’t do everything in that workshop. Because it’s an online workshop, you’ll work on your projects/programs in between our meetings.

If you are not part of an organization and you find yourself acting as a product owner, this training is for you. See Product Owner Training for Agencies.

Categories: Blogs

6 Awesome Resources to Help You Be a Better Executive Sponsor

Agile Management Blog - VersionOne - Thu, 07/02/2015 - 14:30


74% of projects are successful at companies where sponsors have expert or advanced project management knowledge, yet 62% of companies do not provide executive sponsor education.

Make sure your next project is successful by reviewing these six resources for executive sponsors.

Executive Sponsor Engagement: Top Driver of Project and Program Success

An in-depth research report from the Project Management Institute uncovering the three primary factors that can limit or inhibit executive sponsors’ ability to be effective.

How to Be an Effective Executive Sponsor

This Harvard Business Review article describes the keys to communication between the executive sponsor and project manager.

A Sponsor’s 12 “P’s” to Deliver Successful Strategic Execution

In this article, Jon Hughes, Head of Program Management Consulting at Cognizant, describes the twelve personal traits of a successful executive sponsor.

7 Tips for Communicating with Executive Sponsors and Stakeholders

This blog post highlights seven tips for communicating with executive sponsors and stakeholders that will help your team engage with the key leaders of the organization.

How an Executive Sponsor Contributes to Project Management

This article discusses what an executive sponsor can do to ensure the success of projects.

How to Innovate with an Executive Sponsor

This Harvard Business Review article explains how to garner sponsorship while avoiding being steered toward experimenting in a large, public fashion where failure results in shuttering the innovation effort.

What are some other good resources you would recommend?

Categories: Companies

SAFe 4.0 Sneak Peek 2!

Agile Product Owner - Wed, 07/01/2015 - 21:39

Hi Everyone,

Since our last update, the team has been working on the next revision of the Big Picture, V4.0. In so doing, we have taken every opportunity to test assumptions and learn new insights from our practitioners in the field: SPCs, Agile coaches, and leaders who deal with different aspects of scaling development within large enterprises. Multiple SPC testing sessions in Boulder, CO, Herndon, VA, and Sydney, Australia are the latest examples. These sessions helped us finalize the current draft of the BP for SAFe 4.0. Thanks to all who contributed!

As a result, we have developed a more modular – and more expandable – version of the Big Picture. And as many of you know from our prior releases of SAFe LSE, the Expanded view  incorporates the LSE-developed content and provides guidance for teams building larger cyber-physical systems and really big software solutions. The Expanded view is below:

[Image: SAFe 4.0 Big Picture – Expanded view]

The Collapsed view provides guidance for cases when the value streams can be delivered by a single ART (50–125 people).

[Image: SAFe 4.0 Big Picture – Collapsed view]

Obviously, there are other changes as well. These include:

  • New treatment and inclusion of Kanban systems at Portfolio, Value Stream and Program levels. Designed to be somewhat self-explanatory, but time will tell
  • The addition of the Enterprise icon, highlighting the connection of the program portfolio to the enterprise strategy
  • The inclusion of the Customer and Solution Context
  • Program and Solution epics
  • A new treatment for Solution Intent that now shows the dynamic nature of fixed and variable aspects. It will also serve as a launching pad for discussions of adaptive requirements and design, set-based development, and agile contracts
  • Value Stream Coordinator – a VS equivalent of the RTE
  • Solution Architect – multiple ART value streams typically require architectural governance and guidance
  • Update to Engineering and Quality Practices, including the “XP”-inspired attribution
  • Communities of Practice – this important construct, orthogonal to cross-functional teams, has been used in SAFe, but now has its own object for better articulation
  • “Release Any Time” – better wording (we hope) to explain the fact that Releases do not have to occur on the cadence
  • Explicit representation of the value stream, solution and customer in the collapsed view. Obviously, value streams always exist, regardless of the size of the solution.
  • The new “spanning palette”, which illustrates, for example, how Metrics, Vision, Roadmap and Milestones (new object) occur at multiple levels
  • The newer third dimension (grey background), which will be used to navigate to Lean Values, SAFe Principles, and Implementation content. Lean-Agile Leaders have been repositioned there as well.
  • New softer color palette

We plan to release a Preview version of SAFe 4.0 in August. General Availability is scheduled for November, 2015. As you can see from the BP, SAFe 4.0 is largely a superset of 3.0. Therefore, starting this fall, our SPC certifications will certify in both SAFe 4.0 and SAFe 3.0. SAFe 3.0, and all the associated courseware, will be supported throughout 2016.

In the meantime, we appreciate comments and thoughts on the current version. With every increment, we evolve one step forward, together.

Thank you and stay tuned for new updates!

—The SAFe Framework Team: Dean, Alex, Richard, Inbar

Categories: Blogs

CAST Highlight Improves Technical Debt Estimates

Scrum Expert - Wed, 07/01/2015 - 18:28
CAST has launched a new version of CAST Highlight that rapidly and affordably analyzes even the most complex portfolios of enterprise applications to identify areas where CIOs can focus their efforts and achieve the most “bang for their buck.” CAST’s cloud-based Highlight is a new weapon in the CIO’s armory, using advanced benchmarking to assess software risk, complexity and size across large IT portfolios. Using Highlight, CIOs can more accurately and rapidly prioritize projects and programs, based on tangible data. Previously, strategic IT initiatives were notoriously difficult to rank; a lack ...
Categories: Communities

End-to-end Hypermedia: Building a React Client

Jimmy Bogard - Wed, 07/01/2015 - 18:06

In the last post, I walked through what is to me the most interesting part of REST – the client. It’s easy to build a server API, but no API is complete without someone actually using that API. This is where most REST examples fall down for me – they show all sorts of pretty pictures of hypermedia-rich JSON from the server, but no real examples of how to consume that API.

I walked through some jQuery code in the last post, but why stop with jQuery? That’s so 2010. Instead, I want to build around React. React is perfect for hypermedia because of its component-oriented nature. A resource’s representation can be broken down into its components, and React components then matched accordingly. But before we get into the client, I’ll need to modify my sample to consume React.

Installing React

As a shortcut, I’m just going to use ReactJS.Net to build React into my existing MVC app. I install the ReactJS.Net NuGet package, and add a script reference to my downloaded react.js library. Normally, I’d go through the whole Bower/npm path, but this seemed like the simplest path to integrate into my sample.

I’m going to create just a blank JSX file for all my React components for this page, and slim down my Index view to the basics:

<h2>Instructors</h2>
<div id="content"></div>
@section scripts{
    <script src="@Url.Content("~/Scripts/react-0.13.3.js")"></script>
    <script src="@Url.Content("~/Scripts/InstructorInfo.jsx")"></script>
    @{
        var href = Url.Action("Index", "Instructor", new {httproute = ""});
    }
    <script>
        React.render(
            React.createElement(InstructorsInfo, {href: '@href'}),
            document.getElementById("content")
        );
    </script>
}

All of the div placeholders are removed except one, for content. I pull in the React library and my custom React components. The ReactJS.Net package takes my JSX file and transpiles it into Javascript (as well as builds the needed files for in-browser debugging). Finally, I render my base React component, passing in the root URL for kicking off the initial request for instructors, and the DOM element in which to render the React component.

Once I’ve got the basic React library up and running, it’s time to figure out how we would like to componentize our page.

Slicing our Page

If we look at the page we want to create, we need to take this page and create React components from the parts we find. Here’s our page from before:

Looking at this, I see three individual tables populated with collection+json data. I’m thinking I create one overall component composed of three individual items. Inside the table, I can break things up into the table, rows, header, cells and links:

I might need a few more, but this is a good start. Next, we can start building our React components.

React Components

First up is our overall component that contains our three tables of collection+json data. Since I have an understanding of what’s getting returned on the server side, I’m going to make an assumption that I’m building out three tables, and I can navigate links to drill down to more. Additionally, this component will be responsible for making the initial AJAX call and keeping the overall state. State is important in React, and I’ve decided to keep the parent component responsible for the resource state rather than each table. My InstructorsInfo component is:

class InstructorsInfo extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      instructors: { },
      courses: { },
      students: { }
    };
    this._handleSelect = this._handleSelect.bind(this);
  }
  componentDidMount() {
    $.getJSON(this.props.href)
      .done(data => this.setState({ instructors: data }));
  }
  _handleSelect(e) {
    $.getJSON(e.href)
      .done(data => {
        var state = e.rel === "courses"
          ? { students: {}}
          : {};

        state[e.rel] = data;

        this.setState(state);
      });
  }
  render() {
    return (
      <div>
        <CollectionJsonTable data={this.state.instructors}
          onSelect={this._handleSelect} />
        <CollectionJsonTable data={this.state.courses}
          onSelect={this._handleSelect} />
        <CollectionJsonTable data={this.state.students}
          onSelect={this._handleSelect} />
      </div>
    )
  }
}

I’m using ES6 here, which makes building React components a bit nicer to work with. I first declare my React component, extending from React.Component. Next, in my constructor, I set up the initial state, an object with empty values for the instructors/courses/students state. Finally, I set up the binding for a callback function to bind to the React component as opposed to the function itself.

In the componentDidMount function, I perform the initial AJAX call and set the instructors collection state based on the data that gets back. The URL I use to make the initial call is based on the “href” of my components properties.

The _handleSelect function is the callback for a clicked link way down in one of the tables. I wanted to have the parent component manage fetching new collections instead of a child component figuring out what to do. That method makes the AJAX call based on the “href” passed in from the collection+json data, gets the data back, and updates the relevant state based on the “rel” of the link. To make things easy, I matched up the state’s property names to the rel’s I knew about.

Finally, the render function just has a div with my three CollectionJsonTable components, binding up the data and select functions. Let’s look at that component next:

class CollectionJsonTable extends React.Component {
  render() {
    if (!this.props.data.collection) {
      return <div></div>;
    }
    if (!this.props.data.collection.items.length){
      return <p>No items found.</p>;
    }

    var containsLinks = _(this.props.data.collection.items)
      .some(item => item.links && item.links.length);

    var rows = _(this.props.data.collection.items)
      .map((item, idx) => <CollectionJsonTableRow
        item={item}
        containsLinks={containsLinks}
        onSelect={this.props.onSelect}
        key={idx}
        />)
      .value();

    return (
      <table className="table">
        <CollectionJsonTableHeader
          data={this.props.data.collection.items}
          containsLinks={containsLinks} />
        <tbody>
          {rows}
        </tbody>
      </table>
    );
  }
}

This one is not quite as interesting. It only has the render method, and the first part is just to manage either no data or empty data. Since my data can conditionally have links, I found it easier to inform child components whether or not links exist (through the lodash code), rather than every component having to re-figure this out.

To build up each row, I map the collection+json items to CollectionJsonTableRow components, setting up the necessary props (the item, containsLinks, onSelect and key items). In React, there’s no event aggregator so I have to pass down a callback function to the lowest component via properties all the way down. Finally, since I’m building a collection of components, it’s best practice to put some sort of key on these items so that React knows how to re-render correctly.

The final rendered component is a table with a CollectionJsonTableHeader and the rows. Let’s look at that header next:

class CollectionJsonTableHeader extends React.Component {
  render() {
    var headerCells = _(this.props.data[0].data)
      .map((datum, idx) => <th key={idx}>{datum.prompt}</th>)
      .value();

    if (this.props.containsLinks) {
      headerCells.push(<th key="links"></th>);
    }

    return (
      <thead>
        <tr>
          {headerCells}
        </tr>
      </thead>
    );
  }
}

This component also only has a render method. I map the data items from the first item in the collection, producing header cells based on the prompt from the collection+json data. If the collection contains links, I’ll add an empty header cell on the end. Finally, I render the header with the header cells in a row.

With the header done, I can circle back to the CollectionJsonTableRow:

class CollectionJsonTableRow extends React.Component {
  render() {
    var dataCells = _(this.props.item.data)
      .map((datum, idx) => <td key={idx}>{datum.value}</td>)
      .value();

    if (this.props.containsLinks) {
      dataCells.push(<CollectionJsonTableLinkCell
        key="links"
        links={this.props.item.links}
        onSelect={this.props.onSelect} />);
    }

    return (
      <tr>
        {dataCells}
      </tr>
    );
  }
}

The row’s responsibility is just to build up the collection of cells, plus the optional CollectionJsonTableLinkCell. As before, I have to pass down the callback for the link clicks. Similar to the header cells, I fill in the data value (instead of the prompt). Next up is our link cell:

class CollectionJsonTableLinkCell extends React.Component {
  render() {
    var links = _(this.props.links)
      .map((link, idx) => <CollectionJsonTableLink
        key={idx}
        link={link}
        onSelect={this.props.onSelect} />)
      .value();

    return (
      <td>{links}</td>
    );
  }
}

This one isn’t so interesting; it just loops through the links, building out a CollectionJsonTableLink component, filling in the link object, key, and callback. Finally, our CollectionJsonTableLink component:

class CollectionJsonTableLink extends React.Component {
  constructor(props) {
    super(props);
    this._handleClick = this._handleClick.bind(this);
  }
  _handleClick(e) {
    e.preventDefault();
    this.props.onSelect({
      href : this.props.link.href,
      rel: this.props.link.rel}
    );
  }
  render() {
    return (
      <a href='#' rel={this.props.link.rel} onClick={this._handleClick}>
        {this.props.link.prompt}
      </a>
    );
  }
}
CollectionJsonTableLink.propTypes = {
  onSelect: React.PropTypes.func.isRequired
};

The link clicks are the most interesting part here. I didn’t want my link itself to have the behavior of what to do on click, so I call my “onSelect” prop in the click event from my link. The _handleClick method calls the onSelect method, passing in the href/rel from the collection+json link object. In my render method, I just output a normal anchor tag, with the rel and prompt from the link object, and the onClick event bound to the _handleClick method. Finally, I indicate that the onSelect prop is required, so that I don’t have to check for its existence when the link is clicked.

With all these components, I’ve got a working example:

I found working with hypermedia and React to be a far nicer experience than just raw jQuery. I could reason about individual components at the same level as the hypermedia controls, matching what I was building much more effectively to the resource representation returned. I still have to have some sort of knowledge of how I’m going to navigate the links and what to do, but that logic is all encapsulated in my topmost component.

The sub-components aren’t tied to my overall logic and can be re-used as much as I want across my application, allowing me to use collection+json extensively and not worry about having to parse the result again and again. I’ve got a component that can effectively render a nice table based on a collection+json representation.

Next, we’ll kick things up a notch and build out a React.Native implementation, pushing the limit of hypermedia with a dynamic native mobile client.


Categories: Blogs

Pitfall of Scrum: Excessive Preparation/Planning


Regular big up-front planning is not necessary with Scrum. Instead, a team can just get started and use constant feedback in the Sprint Review to adjust its plans. Even the Product Backlog can be created after the first Sprint has started. All that is really necessary to get started is a Scrum Team, a product vision, and a decision on Sprint length. In this extreme case, the Scrum Team itself would decide what to build in its first Sprint and use the time of the Sprint to also prepare some initial Product Backlog Items. Then, the first Sprint Review would allow stakeholders to provide feedback and further develop the Product Backlog. The empirical nature of Scrum could even allow the Product Owner to emerge from the business stakeholders, rather than being assigned to the team right from the start.

Starting a Sprint without a Product Backlog is not easy, but it can be done. The team has to know at least a little about the business, and there should be some (possibly informal) project or product charter that they are aware of. The team uses this super basic information and decides on their own what to build in their first Sprint. Again, the focus should be on getting something that can be demoed (and potentially shippable). The team is likely to build some good stuff and some things that are completely wrong… but the point is to get the Inspect and Adapt cycle started as quickly as possible. Which means of course that they need to have stakeholders (customers, users) actually attend the demo at the end of the Sprint. The Product Owner may or may not even be involved in this first Sprint.

One important reason this is sometimes a good approach is the culture of “analysis paralysis” that exists in some organizations. In this situation, an organization is unable to do anything because they are so concerned about getting things right. Scrum is a framework for inspect and adapt and that can (and does) include the Product Backlog. Is it better for a team to sit idle while someone tries to do sufficient preparation? Or is it better to get started and inspect and adapt? This is actually a philosophical question (as well as a practical question). The mindset and philosophy of the Agile Manifesto and Scrum is that trying to produce valuable software is more important than documentation… that individuals and how they work together is more important than rigidly following a process or tool. I will agree that in many cases it is acceptable to do some up-front work, but it should be minimized, particularly when it is preventing people from starting to deliver value. The case of a team getting started without a product backlog is rare… but it can be a great way for a team to help an organization overcome analysis paralysis.

The Agile Manifesto is very clear: “The BEST architectures, requirements and designs emerge out of self-organizing teams.” [Emphasis added.]

Hugely memorable for me is the story that Ken Schwaber told in the CSM course that I took from him in 2003.  This is a paraphrase of that story:

I [Ken Schwaber] was talking to the CIO of a large IT organization.  The CIO told me that his projects last twelve to eighteen months and at the end, he doesn’t get what he needs.  I told him, “Scrum can give you what you don’t need in a month.”

I experienced this myself in a profound way just a couple of years into my career as an Agile coach and trainer.  I was working with a department of a large technology organization.  They had over one hundred people who had been working on Agile pilot projects.  The department was responsible for a major product and executive management had approved a complete re-write.  The product managers and Product Owners had done a lot of work to prepare a product backlog (about 400 items!) that represented all the existing functionality of the product that needed to be re-written.  But, the big question, “what new technology platform do we use for the re-write?” had not yet been resolved.  The small team of architects was tasked with making this decision.  But they got stuck.  They got stuck for three months.  Finally, the director of the department, who had learned to trust my advice in other circumstances, asked me, “does Scrum have any techniques for making these kinds of architectural decisions?”

I said, “yes, but you probably won’t like what Scrum recommends!”

She said, “actually, we’re pretty desperate.  I’ve got over a hundred people effectively sitting idle for the last three months.  What does Scrum recommend?”

“Just start.  Let the teams figure out the platform as they try to implement functionality.”

She thought for a few seconds.  Finally she said, “okay.  Come by this Monday and help me launch our first Sprint.”

The amazing thing was that the teams didn’t lynch me when on Monday she announced that “our Agile consultant says we don’t need to know our platform in order to get started.”

The first Sprint (two weeks long) was pretty chaotic.  But, with some coaching and active support of management, they actually delivered a working increment of their product.  And decided on the platform to use for the rest of the two-year project.

You must trust your team.

If your organization is spending more than a few days preparing for the start of a project, it is probably suffering from this pitfall.  This is the source of great waste and lost opportunity.  Use Scrum to rapidly converge on the correct solutions to your business problems instead of wasting person-years of time on analysis and planning.  We can help with training and coaching to give you the tools to start fast using Scrum and to fix your Scrum implementation.

This article is a follow-up article to the 24 Common Scrum Pitfalls written back in 2011.

Categories: Blogs

On Running LeadingAgile

Leading Agile - Mike Cottmeyer - Wed, 07/01/2015 - 03:16

There is so much I’ve been wanting to write the past year or so about the business of LeadingAgile. Those of you following our blog for a while will know that it wasn’t all that long ago that I was working at VersionOne, left for Pillar, and then started out as an independent consultant and formed LeadingAgile.

I got really busy really fast and quickly started selling more work than I could do alone. So I began growing the team. Over the past few years we’ve built a really awesome group of consultants and an equally awesome group of support staff to help us run our operations. There are about 40 of us now and we are still growing.

I’ve always felt that our approach was unique enough that we needed a team of dedicated folks that were totally bought into our system, could grow with us as we evolved our understanding of our models, and were readily available when we needed them. For those reasons, we’ve always hired rather than using subcontractors.

People ask me all the time how we hire at LeadingAgile given that many of our contracts are short term and we never quite know where the work is going to come from. Lots of companies use subcontractors to mitigate that risk, but since we’ve decided to absorb that risk internally, we have to have a plan.

That’s a little of what I want to share with you today.

Starting LeadingAgile

We started LeadingAgile with effectively no money in the bank. There was no venture funding, no nest egg to draw on, not even a credit card to live off of when I got this started. Risk management went something like this… how much do I need to pay my bills and where do I think I can go to get that work.

I was fortunate to have friends in the industry that said they could subcontract work to me if I needed it… and that was my safety net. By that time I was pretty well networked here in Atlanta, so I took a chance. Fortunately for my wife and kids… everything worked out. We didn’t go bankrupt.

I started earning more money than we were spending, so we used the extra to get out of debt and build working capital. After about a year and a half, we used some of that working capital to hire our first consultant. Having someone to share the workload allowed me to write more, speak more, and sell more.

As our client list grew, we began hiring more consultants and building our staff. Over time we’ve gotten LeadingAgile to the size it is today, and we’ve built a foundation that can support doubling or tripling our consulting team as demand continues to rise. We’ve learned a lot doing it.

Running LeadingAgile

Over time, I’ve learned that we have four major risk areas… variables maybe… that we have to constantly manage in order to successfully run our company.

1. Working Capital – at any given time we need to have sufficient working capital available to weather a storm or to invest in new opportunities. As we make hiring and growth decisions, we never know which of those new opportunities are going to play out. Understanding how long we can maintain the company without having to let anyone go, or adjust our strategy, is a key factor in decision making.

2. Utilization – We always have the choice to deliver work or to hold off. We can often work with our clients to go faster or slower depending on what they are trying to accomplish and the availability of our team. Understanding the rate we want to burn down our backlog of work is critical. Consultants on the bench are like spoiled inventory. Once a day has passed, you can never bill for it.

3. Sales – We are constantly marketing LeadingAgile and our ideas. We are constantly building relationships with people that might buy from us now or someday in the future. We can attempt to accelerate our sales pipeline or slow it down. Sometimes we don’t want a deal to close because we can’t effectively service it. Other times we need deals to close because we have people on the bench.

4. Growth – If we sell more business than we have staff to deliver, then we have to think about growing our team. We can’t grow our team just in time, so we have to invest, bringing people in early, getting them up to speed with our approach, and having them shadow our current consultants, so they are fully prepared to be part of a transformation team if and when the new business gets signed.

If you graph these variables, it looks something like this…


The green line represents the maximum revenue we could realize in a given month. The red line represents our break even revenue for the month. The blue line shows what we are actually expecting to deliver in any given month. The general theory is that if we stay in between the maximum and minimum every month, we are in a good place and can sustain the company.

But here is the deal… we know our fixed cost, we know our variable costs, we sometimes can predict utilization in a given month, but more than a month or two out is a crap shoot. Even within a given month lots of things can happen. We never know exactly what deals are going to sell, what deals are going to continue, or what deals are going to end. The uncertainty at times can be mind-numbing.

One of my folks said yesterday, it feels like we are always on the verge of having 10 people on the bench or hiring 10 people, we just don’t know which.

So here is how we think about managing the company in the face of this kind of extreme uncertainty.

Our job is to make sure that we have as many options available as possible at all times to maximize the chances of good stuff happening. We always want to have a fall back plan if something bad happens.

For us that means the following…

1. Working Capital – We always want to have enough working capital to survive if our revenue falls below the break even threshold in a given month. Ideally, I’d like to be able to survive 6-12 months running at a loss. Now granted, if that happened there would be other fundamentals to work on, and maybe we’d have to acknowledge we were spending too hot or had a bad strategy, but having financial safety prevents you from making rash decisions.

2. Utilization – We try to create options with our clients so we have some room on existing engagements in case one client slows down and another client needs some extra help. We encourage and support our consultants bringing other team members into their engagements whenever possible, even if it’s not in a billable capacity, so that there is greater familiarity with each other and we all know how each other work. The better we work together, the more portable we are between accounts.

3. Sales – We try to maintain about four times the level of active business development than we think we could ever win, or that we think we could even do, because you never know when something is going to fall through. You can never directly influence the buying cycle of a client, so having lots of opportunities in the pipeline at any one time increases the chances that something will work out when you need it. We spend a ton of time educating folks on our approach before the sale, so managing the pipe is a lot of work.

4. Growth – And when we are really good at all this, firing on all cylinders, and everything closes at once, we might find ourselves with more work than we can actually do. We maintain relationships with a ton of great consultants out in the industry, and are always on the lookout for more great people to add to the team. When we need to hire, it’s important that we have a list of folks that are ready to come work for us when we need them. Our people (and our approach) are fundamentally what we sell.

What this basically means is that we are constantly forecasting forward, assessing the probability that utilization is going to be high, that clients are going to continue, or that new deals are going to close. We assess our level of working capital and regularly place bets on future outcomes. We bet on deals, we bet on people, we bet on cash flow. The goal is to manage risk such that we make more good bets than bad.

Our experience has been that if we do the right things with our money, we are constantly doing the right things for our clients, we are super diligent about managing the sales process and making sure we are talking to new companies all the time, and we always have a solid pipeline of great people to come work for us… we have a shot at making all this work. So far, we’ve had a good run.

The key point in all of this is that nothing in anything we do is actually knowable. We can model it. We can make informed predictions. But we can never know it for sure. We live in a world of creating options, managing risk, making good decisions, and having safety in the bank if things don’t play out like we’ve planned. Having more options than we need is the key to sustaining and growing our company.

In Conclusion

As I take a step back and look at where we’ve been over the past five years, it’s a really fascinating journey. Applying concepts like Lean Startup and Real Options when you’ve got the livelihood of forty families, and the trust of a hundred or more clients, riding on your success is pretty sobering. For me it’s a testament to how powerful these concepts are when you strive to authentically apply them to your work.

 


Categories: Blogs

R: write.csv – unimplemented type ‘list’ in ‘EncodeElement’

Mark Needham - Wed, 07/01/2015 - 00:26

Every now and then I want to serialise an R data frame to a CSV file so I can easily load it up again if my R environment crashes, without having to recalculate everything. Recently I ran into the following error:

> write.csv(foo, "/tmp/foo.csv", row.names = FALSE)
Error in .External2(C_writetable, x, file, nrow(x), p, rnames, sep, eol,  : 
  unimplemented type 'list' in 'EncodeElement'

If we take a closer look at the data frame in question it looks ok:

> foo
  col1 col2
1    1    a
2    2    b
3    3    c

However, one of the columns contains a list in each cell and we need to find out which one it is. I’ve found the quickest way is to run the typeof function over each column:

> typeof(foo$col1)
[1] "double"
 
> typeof(foo$col2)
[1] "list"

So ‘col2’ is the problem one, which isn’t surprising if you consider the way I created ‘foo’:

library(dplyr)
foo = data.frame(col1 = c(1,2,3)) %>% mutate(col2 = list("a", "b", "c"))

If we do have a list that we want to add to the data frame we need to convert it to a vector first so we don’t run into this type of problem:

foo = data.frame(col1 = c(1,2,3)) %>% mutate(col2 = list("a", "b", "c") %>% unlist())

And now we can write to the CSV file:

write.csv(foo, "/tmp/foo.csv", row.names = FALSE)
$ cat /tmp/foo.csv
"col1","col2"
1,"a"
2,"b"
3,"c"

And that’s it!

Categories: Blogs

Should I Patch Built-In Objects / Prototypes? (Hint: NO!)

Derick Bailey - new ThoughtStream - Tue, 06/30/2015 - 15:49

A question was asked via twitter:

@derickbailey @rauschma What do you think about adding methods on built-in prototypes (e.g. String.prototype)?

— Boris Kozorovitzky (@zbzzn) June 30, 2015

So, I built a simple flow chart to answer the question (created w/ draw.io)


All joking aside, there’s only one situation where you should patch a built-in object or prototype: if you are building a polyfill to accurately reproduce new JavaScript features for browsers and runtimes that do not support the new feature yet.

Any other patch of a built-in object is cause for serious discussion of the problems that will ensue.

And if you think you should be writing a polyfill, stop. Go find a community supported and battle-tested polyfill library that already provides the feature you need.

Want some examples of the problems?

Imagine this: you have a global variable in a browser, called “config”. Do you think anyone else has ever accidentally or purposely created a “config” variable? What happens when you run in to this situation, and your code is clobbered by someone else’s code because they use the same variable name?

Now imagine this being done on built-in objects and methods, where behaviors are expected to be consistent and stable. If I patch a “format” method on to the String.prototype, and then you load a library that patches it with different behavior, which code will continue working? How will I know why my format function is now failing? What happens when you bring in a new developer and you forget to educate them on the patched and hacked built-in objects in your system?

Go read up on “monkey-patching” in the Ruby community. They learned these lessons the hard way, YEARS ago. You will find countless horror stories and problems caused by this practice.

Here are some examples, to get you started.

But, what if …

NO! There is always a better way to get the feature you need. Decorator / wrapper objects are a good place to start, as in the sketch below. Hiding the implementation behind your API layer, where you actually need the behavior, is also a good place to be.
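
As a rough sketch of the wrapper idea (the formatString helper and its placeholder syntax are invented for illustration – they are not from this post or from any library):

// keep the behavior in a plain function in your own module / API layer
// instead of patching String.prototype.format
function formatString(template, values) {
  // replace {name} placeholders with matching properties from `values`
  return template.replace(/\{(\w+)\}/g, function(match, key) {
    return values.hasOwnProperty(key) ? values[key] : match;
  });
}
 
// usage: no built-in object is touched
var greeting = formatString("Hello, {name}!", { name: "world" });
console.log(greeting); // "Hello, world!"

The behavior lives in code you own, so a new developer can find it, and a third-party library that also wants a “format” on strings cannot silently clobber yours.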

The point is…

DO NOT PATCH THE BUILT-IN OBJECTS OR PROTOTYPES

Ever.

Your code, your team and your sanity will thank you.

Categories: Blogs

5 Best Practices of Successful Executive Sponsors

Agile Management Blog - VersionOne - Tue, 06/30/2015 - 14:30

 


It is well known that executive sponsors can help a project to be successful, but not all projects with an executive sponsor succeed.

Why don’t they?

It is because there isn’t necessarily a training manual for how to be an executive sponsor or what pitfalls one must avoid.

So, how do you become a successful executive sponsor?

Build Trust & Communication

While the project manager is responsible for ensuring that the necessary work is being done so that a project will be successful, an executive sponsor’s role is to ensure the project is successful. While those may sound like the same thing, they are vastly different.

The project manager must focus on the day-to-day execution, while the executive sponsor should focus on the bigger picture, ensuring that the project stays aligned to the strategic goal and is being supported by other stakeholders and removing roadblocks.

In order to do this, the executive sponsor and project manager must have a candid relationship built on trust. Too often projects fail because people tend to hope for the best-case scenario and rely too much on best-case status updates. The communication between project manager and executive sponsor should be about openly discussing risks that the executive sponsor can help the team navigate.

Make Realistic Commitments

It goes without saying that commitment is a key component of being an executive sponsor, yet countless projects that have executive sponsors fail nevertheless. This isn’t to say that the failure is necessarily due to the executive sponsor, but as obvious as the importance of commitment is, there are many cases where the executive sponsor had an unrealistic expectation of their commitment. According to PMI’s annual Pulse of the Profession survey, one-third of projects fail because executive sponsors are unengaged.

Sometimes this has less to do with the individual and more to do with the organization. As more and more studies come out showing how executive sponsors increase the success of projects, companies want more executive sponsorship of projects. This has led to many executives being overextended across too many projects.

Before taking on a new project, sit down and determine the required time commitment and whether you have the bandwidth to meet that commitment. Your organization may be pressuring you to step up and take another project, but it won’t do them or you any good if the project fails.

Avoid Getting Overextended

We already discussed that the success of having an executive sponsor has led to many organizations overextending their executives. An in-depth study by the Project Management Institute found that executives sponsor three projects on average at any one time and they report spending an average of 13 hours per week per project, on top of their normal work.

Obviously, this isn’t sustainable and isn’t a recipe for success. The same study found several negative impacts from executive sponsors being overextended.


The solution here is simple; you have to learn how to say no. That is, of course, easier said than done when you’re being pressured to take on a new project, but again, it won’t do them or you any good if the project fails.

Develop Project Management Knowledge

According to a PMI study, 74% of projects are successful at companies where sponsors have expert or advanced project management knowledge. Unfortunately, only 62% of companies provide executive sponsor education and development. Not every executive has necessarily been a project manager or gone through project management training.

The results speak for themselves; having advanced project management knowledge makes it far more likely that you will be successful. If your organization doesn’t provide executive sponsor development, take it upon yourself to become a project management expert. It will help your team, company and self. The Boston Consulting Group has found that successful executive sponsors focus on improving their skills in change leadership, influencing stakeholders and issue resolution.

Conclusion

I hope this has inspired you to develop your executive sponsor skills. While it may be difficult to find the time, the payoff will be well worth it for you, your team and your company!

What are some other important keys to being a successful executive sponsor?

Categories: Companies

Story Splitting: Where Do I Start?

Leading Agile - Mike Cottmeyer - Tue, 06/30/2015 - 14:16

I don’t always follow the same story splitting approach when I need to split a story. It has become intuitive for me, so I might not be able to write about everything I do, what goes through my mind, or how I know. But I can put here what comes to mind at the moment:

Look at your acceptance criteria. There is often some aspect of business value in each acceptance criterion that can be split out into a separate story that is valuable to the Product Owner.

Consider the tasks that need to be done. Can any of them be deferred (to a later sprint)? (And  no, testing is not a task that can be deferred to a later sprint.) If so, then consider whether any of them are separately valuable to the Product Owner. If so, perhaps that would be a good story to split out.

If there are lots of unknowns, if it’s a 13 point story because of unanswered questions, make a list of the questions and uncertainties. For each, ask whether it’s a Business Analyst (BA) to-do or a Tech to-do. Also ask for each whether it’s easy and should be considered “grooming”. If it’s significant enough and technical, maybe you should split that out as a Research Spike. Then make an assumption about the likely outcome of the spike, or the desired outcome of the spike, note the assumption in the original story, and reestimate the original story given the assumption.

Look in the story description for conjunctions, since and’s and or’s are a clue that the story may be doing too much. Consider whether you can split the story along the lines of the conjunctions.

Other Story Splitting ideas:
  • Workflow steps: Identify specific steps that a user takes to accomplish the specific workflow, and then implement the work flow in incremental stages
  • Business Rule Variations
  • Happy path versus error paths
  • Simple approach versus more and more complex approaches
  • Variations in data entry methods or sources
  • Support simple data 1st, then more complex data later
  • Variations in output formatting, simple first, then complex
  • Defer some system quality (an “ility”). Estimate or interpolate first, do real-time later. Support a speedier response later.
  • Split out parts of CRUD. Do you really really really need to be able to Delete if you can Update or Deactivate? Do you really really really need to Update if you can Create and Delete? Sure, you may need those functions, but you don’t have to have them all in the same sprint or in the same story.

Some of the phrases in the above list may be direct quotes or paraphrases from Dean Leffingwell’s book “Agile Software Requirements”.


Categories: Blogs

Product Owner Camp

Growing Agile - Tue, 06/30/2015 - 14:14
We recently attended the PO Camp in Switzerland (#POCam […]
Categories: Companies

How to create the smallest possible docker container of any image

Xebia Blog - Tue, 06/30/2015 - 11:46

Once you start to do some serious work with Docker, you soon find that downloading images from the registry is a real bottleneck in starting applications. In this blog post we show you how you can reduce the size of any docker image to just a few percent of the original. So if your image is too fat, try stripping your Docker image! The strip-docker-image utility demonstrated in this blog makes your containers faster and safer at the same time!


We are working quite intensively on our High Available Docker Container Platform using CoreOS and Consul, which consists of a number of containers (NGiNX, HAProxy, the Registrator and Consul). These containers run on each of the nodes in our CoreOS cluster, and when the cluster boots, more than 600Mb is downloaded by the 3 nodes in the cluster. This is quite time-consuming.

cargonauts/consul-http-router      latest              7b9a6e858751        7 days ago          153 MB
cargonauts/progrium-consul         latest              32253bc8752d        7 weeks ago         60.75 MB
progrium/registrator               latest              6084f839101b        4 months ago        13.75 MB

The size of the images is not only detrimental to the boot time of our platform, it also increases the attack surface of the container. With 153Mb of utilities in the NGiNX-based consul-http-router, there is a lot of stuff in the container that you can use once you get inside. As we were thinking of running this router in a DMZ, we wanted to minimise the amount of tools lying around for a potential hacker.

From our colleague Adriaan de Jonge we already learned how to create the smallest possible Docker container  for a Go program. Could we repeat this by just extracting the NGiNX executable from the official distribution and copying it onto a scratch image?  And it turns out we can!

finding the necessary files

Using the utility dpkg we can list all the files that are installed by NGiNX

docker run nginx dpkg -L nginx
...
/.
/usr
/usr/sbin
/usr/sbin/nginx
/usr/share
/usr/share/doc
/usr/share/doc/nginx
...
/etc/init.d/nginx
Locating dependent shared libraries

So we have the list of files in the package, but we do not have the shared libraries that are referenced by the executable. Fortunately, these can be retrieved using the ldd utility.

docker run nginx ldd /usr/sbin/nginx
...
	linux-vdso.so.1 (0x00007fff561d6000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd8f17cf000)
	libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fd8f1598000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fd8f1329000)
	libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fd8f10c9000)
	libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fd8f0cce000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fd8f0ab2000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd8f0709000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fd8f19f0000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd8f0505000)
Following and including symbolic links

Now that we have the executable and the referenced shared libraries, there is one more wrinkle: ldd normally reports the symbolic link, not the actual file name of the shared library.

docker run nginx ls -l /lib/x86_64-linux-gnu/libcrypt.so.1
...
lrwxrwxrwx 1 root root 16 Apr 15 00:01 /lib/x86_64-linux-gnu/libcrypt.so.1 -> libcrypt-2.19.so

By resolving the symbolic links and including both the link and the file, we are ready to export the bare essentials from the container!
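
A minimal sketch of this step, assuming the Debian-based official nginx image (which has readlink available inside it), could look like the following; it prints both the path ldd reports and the real file behind it:

# For each shared library ldd reports, print the reported path (often a
# symlink) and the actual file it resolves to, deduplicated.
docker run nginx sh -c '
  for lib in $(ldd /usr/sbin/nginx | grep -o "/[^ ]*"); do
    echo "$lib"            # path as reported by ldd
    readlink -f "$lib"     # the real library file behind the symlink
  done | sort -u'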

getpwnam does not work

But after copying all essential files to a scratch image, NGiNX did not start. It appeared that NGiNX tried to resolve the user 'nginx' and failed to do so.

docker run -P  --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;"
...
2015/06/29 21:29:08 [emerg] 1#1: getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2
nginx: [emerg] getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2

It turned out that the name service switch (NSS) libraries that read /etc/passwd and /etc/group are loaded at runtime rather than linked in, so they do not show up in the ldd output. By adding these shared libraries (/lib/*/libnss*) to the container, NGiNX worked!
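
To see which NSS libraries the image actually ships (and therefore which files need to be copied along), a quick look inside the official image is enough; this is just a sketch against the Debian-based nginx image:

# Typically lists libnss_files, libnss_dns, libnss_compat and friends,
# none of which appear in the ldd output because they are loaded at runtime.
docker run nginx sh -c 'ls /lib/*/libnss*'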

strip-docker-image example

So now, the strip-docker-image utility is here for you to use!

    strip-docker-image  -i image-name
                        -t target-image-name
                        [-p package]
                        [-f file]
                        [-x expose-port]
                        [-v]

The options are explained below:

-i image-name           the image to strip
-t target-image-name    the name of the stripped target image
-p package              package to include from the image; multiple -p options allowed
-f file                 file to include from the image; multiple -f options allowed
-x port                 port to expose
-v                      verbose output

The following example creates a new nginx image, named stripped-nginx, based on the official Docker image:

strip-docker-image -i nginx -t stripped-nginx  \
                           -x 80 \
                           -p nginx  \
                           -f /etc/passwd \
                           -f /etc/group \
                           -f '/lib/*/libnss*' \
                           -f /bin/ls \
                           -f /bin/cat \
                           -f /bin/sh \
                           -f /bin/mkdir \
                           -f /bin/ps \
                           -f /var/run \
                           -f /var/log/nginx \
                           -f /var/cache/nginx

Aside from the nginx package, we add the files /etc/passwd and /etc/group and the /lib/*/libnss* shared libraries. The directories /var/run, /var/log/nginx, and /var/cache/nginx are required for NGiNX to operate. In addition, we add /bin/sh and a few handy utilities, just to be able to snoop around a little bit.
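
For readers curious about what such a utility roughly does under the hood, here is a conceptual sketch only, not the actual strip-docker-image implementation (the real script is more careful about symlinks, permissions, and package resolution): tar the selected files out of the fat image and import the tarball as a new image.

# Conceptual sketch, assuming the Debian-based official nginx image.
FILES="/usr/sbin/nginx /etc/nginx /etc/passwd /etc/group /var/run /var/log/nginx /var/cache/nginx"
LIBS=$(docker run nginx sh -c 'ldd /usr/sbin/nginx | grep -o "/[^ ]*"; ls /lib/*/libnss*')
docker run nginx tar -chf - $FILES $LIBS | docker import - sketch-stripped-nginx

An image created this way carries no metadata, so it still needs an explicit --entrypoint at run time, just like the stripped-nginx image started further below.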

The stripped image has shrunk from the original 132.8 MB to just 7.3 MB, an incredible 5.4% of its former size, and it is still fully operational!

docker images | grep nginx
...
stripped-nginx                     latest              d61912afaf16        21 seconds ago      7.297 MB
nginx                              1.9.2               319d2015d149        12 days ago         132.8 MB

And it works!

ID=$(docker run -P -d --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;")
docker run --link $ID:stripped cargonauts/toolbox-networking curl -s -D - http://stripped
...
HTTP/1.1 200 OK

For HAProxy, check out the examples directory.

Conclusion

It is possible to take the official images that are maintained and distributed by Docker and strip them down to their bare essentials, ready for use! This speeds up load times and reduces the attack surface of that specific container.

Check out the GitHub repository for the script and the manual page.

Please send me your examples of incredibly shrunk Docker images!

Categories: Companies

AgileCymru. Cardiff Bay, UK, July 7-8 2015

Scrum Expert - Tue, 06/30/2015 - 09:35
AgileCymru is a two-day Agile conference that takes place in Wales. It offers practical advice, techniques, and lessons from practitioners, experts, and beginners in the field of Agile software development and project management with Scrum. In the agenda of AgileCymru you can find topics like “How to fail a software project fast and efficiently?”, “Game of Scrums: Tribal behaviors with Agile at Scale”, “Dreaming – how business intent drives your Agile initiatives”, “From Agile projects to an Agile organization – The Journey”, “User Needs Driven Development – The Evolution of ...
Categories: Communities

Dev Lunch as a Power Tool

At my current company I’ve been going out to lunch pretty much every day I’m in the office. I know a lot of developers bring their lunches and like to eat alone in silence, but I’m probably less on the introvert side. I’ve always made it a point to have lunches with developers on a regular basis, and my current standing lunches have been an evolution of that.

Breaking bread and catching up on stories, families, and so on is a great way to bond with developers you don’t work with on a regular basis. Most of the engineers I go to lunch with work on another team, so I’m constantly keeping up to date with their current work and letting them know about what my team is doing. As a bonus I get out of the office and eat some hot food, though not necessarily the healthiest fare. Sharing and communication between teams really helps, even in a flat startup organization like ours. I know some companies have meals catered in the office, which works too, but a regular lunch spot outside the office serves the same purpose.

As a suggestion, if you tend to eat lunch alone or at your desk, try to make a habit of eating out with some other devs, or even other employees, at least once a week.

Categories: Blogs

Do What Works… Even If It’s Not Agile.

Leading Agile - Mike Cottmeyer - Mon, 06/29/2015 - 22:19

I think I’ve come to the conclusion that ‘agile’ as we know it isn’t the best starting place for everyone who wants to adopt agile. Some folks, sure… everyone, probably not.

For many companies something closer to a ‘team-based, iterative and incremental delivery approach, using some agile tools and techniques, wrapped within a highly-governed Lean/Kanban based program and portfolio management framework’ is actually a better place to start.

Why?

Well, many organizations really struggle with forming complete cross-functional teams, building backlogs, producing working, tested software at regular intervals, and breaking dependencies. In the absence of these, agile is pretty much impossible.

Scrum isn’t impossible, mind you.

Agile is impossible.

So how does a ‘team-based, iterative and incremental delivery approach, using some agile tools and techniques, wrapped within a highly-governed Lean/Kanban based program and portfolio management framework’ actually work?

Let me explain.

First, I want to form Scrum-like teams around as much of the delivery organization as I can. I’ll form teams around shared components, feature teams, services teams, etc. Ideally, I’d like to form teams around objects that would end up being real Scrum teams in some future state.

These Scrum-like teams operate under the same rules as a normal Scrum team: they are complete and cross-functional, internally self-organizing, and have the same roles, ceremonies, and artifacts, but with a much higher focus on stabilizing velocity and less on adaptation.

Why ‘Scrum-like’ teams?

Dependencies. #dependenciesareevil

These teams have business process and requirements dependencies all around them. They have architectural dependencies between them. They have organizational dependencies due to the current management structure and likely some degree of matrixing.

Until those dependencies are broken, it’s tough to operate as a small, independent Scrum team that can inspect and adapt its way into success. Those dependencies have to be managed until they can be broken. We can’t pretend they aren’t there.

How do I manage dependencies?

This is where the ‘Lean/Kanban’ based program and portfolio governance comes in. Explicitly model the value stream from requirements identification all the way through delivery. Anyone who can’t be on a Scrum team gets modeled in this value stream.

We like to form small, dedicated, cross-functional teams (explicitly not Scrum teams) to decompose requirements, deal with cross-cutting concerns, flow work into the Scrum-like teams, manage batch size, and handle the upstream and downstream coordination.

Early on, we might be doing 3-, 6-, 9-, or even 12-18 month roadmapping, creating 3-6 month feature-level release plans, and building fine-grained, risk-adjusted release backlogs at the story level. The goal is to nail quarterly commitments and to start driving visibility for longer-term planning.

Not agile?

I don’t really care; this is a great first step toward untangling a legacy organization that is struggling to get a foothold adopting agile. For many companies we work with, this is agile enough, but ideally it is only the first step toward greater levels of agile maturity.

How do I increase maturity?

Goal #1 was to stabilize the system and build trust with the organization. This isn’t us against them, it’s not management against the people, it’s working within the constraints of the existing system to get better business results… and fast.

Over time, you want to continue working to reduce batch size at the enterprise level, you want to progressively reduce dependencies between teams, you want to start funding teams and business capabilities rather than projects, you want to invest to learn.

Lofty goals, huh?

That said, those are lofty goals for an organization that can’t form a team, build a backlog, or produce working tested software every few weeks. Those are lofty goals for an organization that is so mired in dependencies they can’t move, let alone self-organize.

Our belief is that we are past the early adopters. We are past the small project teams in big companies. We are past simply telling people to self-organize and inspect and adapt. We need a way to crack companies apart, to systematically refactor them into agile organizations.

Once we have the foundational structures, principles, guidelines in place, and a sufficient threshold of people is bought into the new system and understands how it operates, then we can start letting go, deprecating control structures, and really living the promise of agile.

The post Do What Works… Even If It’s Not Agile. appeared first on LeadingAgile.

Categories: Blogs

Knowledge Sharing


SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP, and Kanban.