
Feed aggregator

What is Customer eXperience (CX)?

Manage Well - Tathagat Varma - Sun, 11/06/2016 - 05:53
(This originally appeared as an interview/blog at http://www.zykrr.com/blog/2016/09/11/Tathagat-interview.html) How will you define CX to a layman? CX to me is that unequivocally superior experience for the specific purpose of a given product or a service every single time – even when sometimes I have to pay more for it. Let’s take an example of something that […]
Categories: Blogs

Retrospectives: Sometimes dreaded, sometimes loved but always essential

Learn more about transforming people, process and culture with the Real Agility Program

Among all the components of Scrum, retrospectives are one of my favourites. A properly planned, efficiently executed, and regularly run retrospective can be like the glue that holds a team together.

My first experience running a retrospective had surprising results. We were working in a team of five, but only two were present in the retrospective. Not only that, but neither of the two could decide who should run it. To be clear, this was not a Scrum team, but one using some Agile methods to deliver a product once a week, retrospectives being one of those methods. So without a clear ScrumMaster to facilitate, the retrospective was, let's say, a little messy.

Despite all this, there were some positive results. The team had successfully released a product every three weeks. The retrospective in the third week revealed challenges and progress, obstacles and opportunities.

The method used was the Talking Stick Circle format, in which one person holds the floor and shares their reflections while the others listen without interrupting; then the next person speaks, and so on.

The major learning was that decisions needed to be made about who was doing which task at what time; in the end, the direction was clear. Enthusiasm was high and the path forward was laid. The retrospective was a success.

The most remarkable part of the experience was hearing what was meaningful to others. When both people could share what they valued, hoped for, and aspired to with the project, it was easy to see what could be done next using the skills, capacities and talents of team members.

For more resources on agile retrospectives, check out this link.

 

Learn more about our Scrum and Agile training sessions on WorldMindware.com.

The post Retrospectives: Sometimes dreaded, sometimes loved but always essential appeared first on Agile Advice.

Categories: Blogs

Building on the shoulders of giants: microservices as a redesign strategy

Xebia Blog - Fri, 11/04/2016 - 21:10
With the rise of new-IT-backed companies in almost every segment, from retail to financial institutions, more traditional companies are often forced into change-or-perish strategies. The business strengths of newer competitors are often reinforced by strong, serial startup developers, able to integrate the experience of previous failures into completely new stacks. Older companies’
Categories: Companies

New Agile Planning Game For Parents & Children


Challenge: As children age, they want to have more say in how they spend their time. Sometimes they don’t know how to express what is important to them or they can’t prioritize their time.

Solution: Parents can easily include their children in decision-making by using an Agile playing card method.  Here’s how it works.

TO PLAY THE GAME – 

You will need:

A pack of playing cards

A stack of post-it notes

Enough pens for everyone playing

Steps to play:

SET UP

  1. Distribute post-its and pens to each player.
  2. Set a timer for 2 minutes.
  3. Each person writes down things they want to do in the pre-determined time frame (on a weekend, throughout a week, or in an evening).
  4. There is no limit to how much you can write.

PLAY

  1. Decide who goes first.
  2. Take turns placing one post-it on the table.
  3. Each person decides the value of that activity (0-8).
  4. When everyone has decided, play the cards.
  5. Notice what everyone chose.
  6. If someone played a 1 or a 0, it is nice to listen to why they rated it so low.
  7. Write the sum of the two numbers on the corner of the post-it.
  8. Continue back and forth, putting the activities in a sequence with the higher numbers at the top.

DISCUSS

  1. When all the post-it activities have been gone through, look at the top 5 or 6 items. These will likely be the ones you have time for.
  2. Discuss whether there are any overlaps and shift the list accordingly.
  3. Discuss what makes the most sense and what everyone would like to do. Chances are good that the items at the top of the list are there because both/all people rated them highly.
  4. It should be relatively easy to find a way to do the top 3-5 activities with little effort.

PLAN for the day and ENJOY!!!
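For those who like to tinker, the scoring in the PLAY and DISCUSS steps above can be sketched in a few lines of Python; the activity names and votes below are made-up examples:

```python
# Hypothetical sketch of the game's scoring: each player votes 0-8
# per activity; activities are ranked by the sum of the votes.
activities = {
    "bike ride": [8, 5],     # [parent's vote, child's vote]
    "board game": [3, 8],
    "tidy garage": [5, 1],
    "movie night": [2, 2],
}

# Sum the votes for each post-it, then sort highest first,
# as in PLAY step 7 and 8.
ranked = sorted(activities.items(), key=lambda kv: sum(kv[1]), reverse=True)

for name, votes in ranked:
    print(f"{sum(votes):2d}  {name}")
```

With these sample votes, "bike ride" (13) and "board game" (11) float to the top of the list, exactly because both players rated them highly.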


The post New Agile Planning Game For Parents & Children appeared first on Agile Advice.

Categories: Blogs

Sponsor Profile – Scrum Alliance

Agile Ottawa - Fri, 11/04/2016 - 13:47
Who is Manuel Gonzalez? Manuel Gonzalez is the Chief Executive Officer (CEO) of Scrum Alliance. With more than 37 years of experience transforming organizations, Manuel “Manny” Gonzalez has proven himself to be a successful global strategic leader in both the … Continue reading →
Categories: Communities

The second impediment, or why should you care about engineering practices?

Scrum Breakfast - Fri, 11/04/2016 - 13:12
Sometimes I think Switzerland is a land of product owners. Thanks to our strong economy, there is much more development to do than we have capacity for. So we off-shore and near-shore quite a bit. And technical topics seem to produce a yawn among managers and product owners. "Not really my problem," they seem to be saying. I'd like to challenge that assumption!

I don't often change the main messages of my Scrum courses. For years, I have been talking about "Inspect and Adapt", "Multitasking is Evil" and "Spillover is Evil." Recently I have added a new message:

Bugs are Evil.

Why this change? While researching my Scrum Gathering workshop on Code Dojos, I found a paper by Alistair Cockburn from XP Sardinia. He wrote that in 1000 hours (i.e. in one month), a team can write 50'000 lines of code. And they will send 3'500 bugs to quality assurance.

Doing the math based on industry-standard assumptions, I found that that team will create 12'000 hours of effort for themselves, quality assurance, operations, and customer support. A year of waste, produced for no good reason!
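To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python; the per-bug cost is an assumed figure, chosen only so the totals line up with those quoted above:

```python
# Rough reconstruction of the article's arithmetic. The ~3.4 hours
# of downstream effort per bug is an assumption picked to show how
# 3,500 bugs can add up to roughly 12,000 hours of extra work.
dev_hours = 1000          # one team-month of development
lines_of_code = 50_000    # lines written in that time
bugs_shipped = 3_500      # defects sent to QA (Cockburn's figure)
hours_per_bug = 3.4       # assumed average cost across QA, ops, support

waste_hours = bugs_shipped * hours_per_bug
print(f"{waste_hours:,.0f} hours of downstream effort")
print(f"about {waste_hours / dev_hours:.0f}x the original development time")
```

Under these assumptions, one month of coding generates roughly a year's worth of cleanup effort, which is the point of the article.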

Is this really true? Well, at my last Scrum Master course, I met someone whose teams spend roughly two-thirds of their time fixing bugs, much of it in emergency mode. Technical debt is a real danger! Imagine if you were paying two-thirds of your income to pay the rent and plug holes in the ceiling of your apartment! That product is on the verge of bankruptcy!

Technical topics often generate a yawn among product owners and managers. But it's your money and your team's capacity which is being wasted!

So I'd like to encourage you to pay attention to engineering practices. Bugs are evil! Remember that, and make sure everyone in your organization knows it too. As a leader, you are best positioned to ask the question, "How can we have fewer bugs?"

P.S. This is the topic for Monday's Scrum Breakfast Club, "Why should you care about engineering practices?" Check out the event, and come to my Manager and Product Owner friendly introduction to Pair Programming and Test Driven Development.
Categories: Blogs

Cultivating Disruptive Innovation in the Enterprise

BigVisible Solutions :: An Agile Company - Thu, 11/03/2016 - 20:16


Innovation is a word that lots of businesses are throwing around today, but let’s unpack exactly what we mean by it here. There are at least two major categories of innovation: “sustaining” and “disruptive”.

“Sustaining innovations” are built on top of existing products, where a viable business model is already known. They are designed to compete with rival companies and typically manifest as incremental enhancements to product offerings and operational cost reductions. For example, iPhones have gotten faster and thinner and are equipped with higher resolution screens with every major new release.

“Disruptive innovations” create a new value network that either gains a foothold in previously ignored markets or creates entirely new markets. Disruptive innovation usually takes time to gain traction, and the disruptor is commonly overlooked by current market leaders, but eventually the disruptor displaces the established market leaders and their existing product dominance. Blockbuster being completely destroyed by Netflix is a good example. Netflix’s initial product offering only appealed to a few customer segments: movie buffs who didn’t care about new releases, early adopters of DVD players, and online shoppers. However, as new technologies allowed Netflix to shift to streaming video over the internet, it became appealing to Blockbuster’s core customer base as well. This is a classic case of market disruption.

Where Should My Enterprise Invest?

Sustaining innovations help reinforce dominance in an enterprise’s existing market. So it only makes sense that a large majority of investments in an innovation portfolio be allocated to generating innovations that have a near-term payoff and make today’s cash cow healthier and more productive. This is where large-scale enterprise Lean and Agile adoptions have demonstrated impressive results.

But don’t let the word “sustaining” give the impression that these incremental advancements can sustain your business indefinitely. In spite of your market dominance, your enterprise may fall victim to the next Netflix, Uber, or Tesla, even with sustaining innovations in place. It’s not just B2C corporations being disrupted, either: it’s happening at an unprecedented rate to B2B corporations as well, across nearly every industry.

Unfortunately, many enterprises don’t invest in any kind of innovation, focusing all hands on supporting current product offerings. Meanwhile, other enterprises invest in structures like innovation labs or “intrapreneurship” programs, which are ineffective as a means to create disruptive innovation. Innovation labs tend to focus on idea generation, not necessarily on generating new viable businesses. “Intrapreneurship” programs, for their part, commonly equate to 10% of employee time, which asks employees to divide their focus between urgent, current business and long-term prospects, and the urgent work always wins.

Now that we’re clear on the two types of innovation, let’s focus on effectively generating disruptive innovation. At the risk of oversimplifying, I’d recommend that 5-15% of an innovation portfolio be allocated to generating “disruptive innovations”.

How Are Disruptors Innovating?

Historically, disruptive innovations tend to be created by outsiders and entrepreneurs, not market-leading companies. In recent years, startups have been disrupting enterprises at an ever-increasing rate. Consider how Marriott, Hilton, and other hotel chains are being disrupted by AirBnB, for example. Below is a short list of what makes it possible for startups to reign in the disruptive innovation arena.

Access to Enabling Technologies

Today, technologies like 3D printing, cloud-based hosting, and even Platforms as a Service have lowered the barrier to entry into most markets. Previously, only enterprises and very large organizations could afford these resources and services and provide them to their people. The playing field has now been leveled, and startups are on equal footing.

Urgency Drives Speed

The constraint of a limited financial runway, otherwise known as “the burn rate” or, more dramatically, “the startup death clock”, is one of the greatest motivators for entrepreneurs in a startup. They worry about having to lay off employees, shut the doors, and have it all end badly. Those fears cause them to hustle with everything they’ve got. It creates a bias toward action and ensures experiments generate new learning in the most efficient way possible. Enterprises have the luxury of putting off to the future the very things that make startups such effective innovators. The false sense of comfort enterprise product managers have in time and funding often makes them sluggish and complacent.

No Bureaucratic Antibodies

Startups are able to fail fast, so they learn fast and then pivot based on new learnings without major friction. They have no committees to put decisions in front of, no one has to open and answer an IT ticket to deploy changes, and there’s no marketing department to debate whether the company image will be tarnished by risky public experiments. Startup decisions are rapid yet data-driven. Startups can hypothesize and run enough experiments to invalidate five new business models and pivot to something more viable in the time it takes enterprise decision makers to schedule and hold one meeting. They take action at the speed of a Tesla accelerating in “Ludicrous Mode” at a green light.

Fail Fast, Succeed Faster

Finally, the greatest advantage the startup has is the rate at which it is able to learn and the ease with which it can act on that learning. Disruptive innovations are discovered under conditions of extreme uncertainty, where the customers and the solutions to their problems are often unknown. The Lean Startup applies the scientific method to the challenge of understanding what customers really want, not just what they say they want or what you think they should want. Many of the most successful startups today are applying the Lean Startup methodology to quickly generate hard evidence, often invalidating assumptions about the problem they’re trying to solve and the customer they’re trying to solve it for. For every major successful product-market fit, they will have failed through 99 bad ideas based on false assumptions. Learning through experimentation at the rate needed to achieve a new product-market fit is untenable in most enterprise organizations, because it would simply take too long.

In comparison to the lumbering enterprise giant, a healthy Lean Startup is egoless, unattached to their solutions, and nimble in the face of change. The startup understands that its mission is to continuously search for viable new business models and mature them to product-market fit — and very little else matters to them.

What are Innovation Colonies and What do They Do?

One strategy that enterprises can use to survive and even thrive in spite of the onslaught of startup competition is to build an “innovation colony” — a new organizational structure aimed specifically at generating disruptive innovations at an even faster rate than most startup incubators do. In an innovation colony, the principles of Lean Startup, Customer Development, Design Thinking, and modern Agile delivery processes are the only mode of operation. Innovation colonies are part design studio, part startup incubator, part corporate product development, and part investment fund. Companies such as Lockheed Martin, Adobe, Disney, and Microsoft all include innovation colonies in their long-term investment strategy.


Innovation colonies are typically settlements of two to ten small teams each consisting of three to five fully dedicated members. Teams in the colony are cross-functional and require a strong balance of business, design, and technology experience and expertise. Team members are highly entrepreneurial: versatile, risk-willing, and resilient. Employees who populate the colony come from the mothership enterprise but are heavily screened for entrepreneurial qualities and attracted by a different set of incentives as part of recruiting.

Innovation colonies are completely separate from other product development functions of the organization both in location and reporting structure. They have their own office spaces, their own infrastructure, and are often located in another building or city. They have their own managing innovation officers, who report directly to the CEO and board.

As with any startup, teams working in the colony have limited financial runway and are accountable to investors from the mothership. Colony teams use “innovation accounting” to demonstrate progress to investors and drive investment decisions regarding steady funding, increased investment, or termination.

Inside the innovation colony, teams test high volumes of new and risky ideas through structured experiments. They discard the failures and further incubate the ideas that show the right kind of traction. Only a small fraction of ideas will achieve product-market fit, but it only takes one successful disruptive innovation to create the enormous returns that justify the investment of the whole colony.

Once one of the colony-born startups achieves product-market fit and is ready to scale the business model to a large customer base, the mothership enterprise can exercise first rights to do one of three things: integrate the new business into enterprise operations, establish the new business as a separate entity, or sell it to an outside buyer.

Got a burning question for Jeff? Follow @JRSBerg on Twitter and tweet it to him!

Resources

The Innovator’s Dilemma, Clayton Christensen. 1997.

The Lean Enterprise, Trevor Owens. 2014.

The Lean Startup, Eric Ries. 2011.

Want to Learn More?

This is part of a larger conversation about enterprise innovation. Jeff Steinberg recently gave an exciting webinar on just this topic: the webinar “Innovation Colonies: Incubating the Future of Your Business” dives into the business benefits of establishing a healthy innovation colony and what makes them so powerful in driving disruptive innovation.

 

Watch Now

The post Cultivating Disruptive Innovation in the Enterprise appeared first on SolutionsIQ.

Categories: Companies

Video: Agile Social Action at a Neighbourhood Level


This post marks the beginning of a new Agile experiment supported by BERTEIG.

Quite simply, the idea is to apply Agile methods and principles outlined in the Agile Manifesto to a social action project at a neighbourhood level.

The objective is to use the empowering principles of Agile to help eliminate the extremes between wealth and poverty.

The approach is to pair up one family who has items to share with another family who is in need in order to provide a weekly care package including food and other basic care supplies.

The sharing takes place in the real world with the delivery of a package weekly, but it also corresponds to an online platform which allows the sharing to happen at a sustainable cadence.

The initiative was formally launched three weeks ago, and this video is the first, addressing some basic structures of the framework. The video is a bit like a one-person retrospective.

One of the principles of BERTEIG is to strive to create unity and to be of service to humanity. This socio-economic Agile experiment is a way in which BERTEIG is reaching out to help others and contributing towards the advancement of a small neighbourhood moving along the continuum from poverty to prosperity, materially and spiritually.

 

 


The post Video: Agile Social Action at a Neighbourhood Level appeared first on Agile Advice.

Categories: Blogs

Agile Product Planning and Analysis

TV Agile - Thu, 11/03/2016 - 17:48
This talk presents a method for Agile product planning and analysis, with application examples. Discover to Deliver is a method recently published by Ellen Gottesdiener and Mary Gorman, recognized experts in Agile requirements management and collaboration. Discover to Deliver aims to help software teams discover valuable features and deliver them faster. Video producer: http://www.agile.lt/
Categories: Blogs

How Realm Digital improved their Scrum practices with Targetprocess

TargetProcess - Edge of Chaos Blog - Thu, 11/03/2016 - 15:51

Earlier this year, Realm Digital brought their company culture of embracing new technology to life by consolidating their suite of software tools and introducing Targetprocess to manage workflows.

Previously, Realm was using a host of software for their Scrum-based Agile development, including Harvest for time tracking, Mantis for bug tracking, and Float for resource scheduling (to name a few).

They needed a solution that could help them to both improve their Scrum process and unite the different functions of their various other software tools. To ensure that their business processes remained agile and lean, CEO Simon Bestier championed the migration to Targetprocess.

Now that they’re a few months in, their marketing manager sat down with Developer Kyle Mulder and newly appointed Project and Operations Manager Hans Croukamp to see what kind of difference Targetprocess has made at Realm Digital so far.

Realm Digital Team

Q. Kyle, what has been the most noticeable change in the way we approach projects since Realm moved over to Targetprocess?

      Kyle: After switching to Targetprocess, the most noticeable change has been that we now set achievable two-week sprints. Thanks to the effort-based point system that Targetprocess uses, we can see what project teams are capable of accomplishing by checking a project’s sprint velocity. This enables us to set realistic targets for our sprints, rather than packing them with too much work and ending up with unfinished sprints.
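The velocity-based planning Kyle describes boils down to a simple rule: don't pull more points into a sprint than recent history says the team can finish. A minimal Python sketch, with invented team history and story names:

```python
# Hypothetical sketch of effort-point sprint planning: use the average
# of recent sprint velocities to cap how many points go into the next
# two-week sprint. All numbers and story names are made up.
completed_points_per_sprint = [21, 18, 24, 19]   # last four sprints

velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

# Candidate backlog items, already ordered by priority, with point estimates.
candidate_stories = {"checkout flow": 8, "search filters": 5,
                     "profile page": 5, "email digest": 3, "dark mode": 5}

planned, total = [], 0
for story, points in candidate_stories.items():
    if total + points <= velocity:     # stop before overcommitting
        planned.append(story)
        total += points

print(f"velocity ~ {velocity:.1f} points; planned {total} points: {planned}")
```

With this history the average velocity is 20.5 points, so the plan stops at 18 points rather than stuffing the sprint with everything on the list.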

Q. Do you think the entire Realm team has embraced Targetprocess and all it has to offer?

      Kyle: While I’d like to believe that we have fully embraced Targetprocess, I still think there are a lot of hidden features and functionality in the tool that could come in handy to ease our workflows and minimize the number of other applications we require on a daily basis.

      Hans: Targetprocess is vastly different from any other project management software I’ve worked with before. From a project management perspective, I find the different customizable views, dashboards, and reports to be extremely helpful. The buy-in from the team is very positive and the tool greatly assists us by presenting a 360-view on every project. It is quite a powerful tool and can be customized in numerous ways. The more you play around with the tool, the more you find the features and views the platform has to offer.

Q. Which Targetprocess feature, in your opinion, adds the most value, but is underused?

      Kyle: From a Developer and Team Lead point of view, the most valuable, yet underused, feature would be the “relations” functionality with User Stories. The ability to relate multiple stories to a big Feature is quite useful.

We also often use Burndown Charts to review a sprint’s progress and get insight into how much effort remains to complete a User Story. There is also sub-task functionality, but sub-tasks provide estimates based on hours rather than story points, which is less useful for us.

A Burndown Chart in Targetprocess
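A burndown chart like this one reduces to a series of remaining points per day, compared against an ideal line that burns the commitment down evenly. A toy Python sketch with invented numbers:

```python
# Hypothetical burndown data: story points remaining at the end of
# each day of a ten-working-day (two-week) sprint.
sprint_commitment = 20
remaining_by_day = [20, 18, 17, 15, 12, 12, 9, 7, 4, 0]

# The "ideal" line burns the commitment down evenly across the sprint.
days = len(remaining_by_day)
ideal = [sprint_commitment * (1 - d / days) for d in range(1, days + 1)]

for day, (actual, target) in enumerate(zip(remaining_by_day, ideal), start=1):
    status = "ahead" if actual <= target else "behind"
    print(f"day {day:2d}: {actual:2d} pts remaining (ideal {target:4.1f}) - {status}")
```

Reading the actual line against the ideal line is what makes it obvious, day by day, whether the sprint is on track.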

      Hans: The customization of views and boards is probably the feature that I use the most. I can’t necessarily say it is under-utilized, but the more I dig into the views, the more I find value in the details. I can also see the “eureka moment” for colleagues and clients when Targetprocess is used to explain the status of a project.

If I had to highlight something that I do think has tremendous value but that we’re currently not using to the full extent, it would be the ability to see points and work allocation per person/resource.

Q. Hans, is there anything you can think of that can make learning how to use Targetprocess easier?

      Hans: Targetprocess is a comprehensive tool with different functionality for different people; Project Managers, Account Managers, Developers, Management, etc. We all use the tool slightly differently.

Although I love the videos on the Targetprocess YouTube channel, it really takes a while to understand the tool as a new user. I do believe that breaking the learning process down and approaching it from how each person in their respective role will use the tool can help, rather than trying to comprehend everything the tool can do at once.

I also think it might be good for them to raise 'Targetprocess champions': people from different companies who can share their experience with the software. The recorded webinars also really helped me when I started learning what the tool can do.

This post was submitted by Realm Digital, a global digital strategy and technology partner located in South Africa. The company specializes in digital solutions including web and mobile development. For more information, visit their website at www.realmdigital.co.za.

Looking to publish your own article on our blog? Send your pitch to news@targetprocess.com. We publish articles on Agile, team communication, software development, your own experiences with business software, and anything else related to collaboration.

From the Realm Digital Instagram
Categories: Companies

SonarQube 6.x series: Focused and Efficient

Sonar - Thu, 11/03/2016 - 15:09

At the beginning of the summer, we announced the long-awaited new “Long Term Support” version, SonarQube 5.6. It comes packed with great features to highlight and help developers manage the leak, and to ensure the security and scalability of large instances.

Now we’re concentrating on the main themes for the 6.x series, and based on the discussions we have had during our City Tour 2016, we’re sure that you’ll be as excited by these new features as you were with the ones in 5.6 LTS.

Better leak management


Support of file move and renaming

SonarQube 5.6 LTS provides all the built-in features you need to monitor and fix the leak on your source code: a project home page that highlights activity on code that was added or modified recently, and a quality gate that turns red whenever bugs or vulnerabilities make their way into new code.
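Conceptually, such a leak-period quality gate is just a set of threshold checks on new-code metrics. This toy Python sketch illustrates the idea (the metric names and thresholds are invented for illustration, not SonarQube's actual API):

```python
# Minimal sketch of a "leak period" quality gate: the gate goes red if
# any new bugs or vulnerabilities were introduced on new/changed code.
new_code_metrics = {"new_bugs": 2, "new_vulnerabilities": 0, "new_coverage": 78.0}

# Each condition is (metric name, maximum allowed value).
conditions = [
    ("new_bugs", 0),              # no new bugs allowed in the leak period
    ("new_vulnerabilities", 0),   # no new vulnerabilities allowed either
]

failed = [metric for metric, limit in conditions if new_code_metrics[metric] > limit]
gate = "RED" if failed else "GREEN"
print(gate, failed)   # prints: RED ['new_bugs']
```

The point of gating only on new code is that the team stops the leak first, instead of drowning in the project's historical issue count.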

Unfortunately, SonarQube 5.6 doesn’t understand moving or renaming files. That means that if an old file is moved, all its existing issues are closed (including the ones marked False Positive or Won’t Fix), and new ones are (re)opened on the file at its new location. An ugly side effect is that old issues end up in the leak period even though the file wasn’t edited. The end result is noise for development teams who refactor frequently.

This limitation is fixed in SonarQube 6.0, and development teams at SonarSource have been enjoying it for a couple of months already.

Better understanding of bugs and vulnerabilities

Over the past 2 years, SonarSource’s analyzers have reached maturity levels that not only allow them to detect “simple” maintenance issues, but also more tricky issues that can be found only by exploring the code in depth using “symbolic execution” to explore multiple execution paths through the code. That’s why in the 5.x series, Bugs and Vulnerabilities debuted as part of the new SonarQube Quality Model. As you can imagine, it can be very complex to detect a bug when lots of different execution paths have to be explored. As a consequence, it’s easy to guess that it would be hard for a developer to understand why SonarQube is reporting this or that bug without more help. A glance at “SonarAnalyzer for Java: Tricky Bugs are Running Scared” shows that we must print arrows and explanations on the screenshots to help users understand how we discovered a bug.

The next LTS of SonarQube will provide this information out of the box in the web application. Not only will developers see where each bug is, but they’ll be able to display the execution paths (with explanations) that lead to it. This will be a nice improvement to help you fix the leak more easily!

Project Activity Stream

You’re already applying the right process to fix the leak, but sometimes it is hard to know exactly what causes the tiny drops that end up being the leak. The next LTS will keep track of the low-level activities in your project to help you find the source of your leak. For instance, are you facing unexpected new issues in the leak period? You will be able to see that they are due to the activation of a new rule in your quality profile. You want to find which exact commit(s) were not sufficiently tested and caused the quality gate to turn red because of insufficient coverage? You will see commit hashes to more easily link the problem with what happened in the source code repository.

Branching as a first class citizen

While SonarQube provides a feature to handle short-lived (feature) branches through its pull request analysis plugin, it currently does very little when it comes to long-lived (maintenance) branches, even though we all know that maintenance is a huge part of software development. Unfortunately, SonarQube’s current branch support is minimal at best. The sonar.branch analysis parameter allows you to analyze a branch alongside the trunk version of the code, but under the hood SonarQube treats the branch as a separate, completely unrelated project: configuration isn’t shared, metrics are artificially doubled (for instance, the number of lines of code), issues are duplicated in each copy of the code with no link between them, it’s impossible to know at what point in time a maintenance branch diverged from the main one, etc. In the end, you end up managing the branch as a totally different project, even though it is really the same application.

The next LTS will address all those issues, making it simple to create maintenance branches on existing projects to track activity on the branches and make sure that even in branches, there’s no leak on the new code.

See what’s important to you!


User-needs oriented spaces

In the early days, SonarQube offered the possibility to inject and display any kind of information, mostly thanks to customizable dashboards and widgets. This led to widespread adoption, but at the cost of SonarQube being seen as a multi-purpose aggregation and reporting tool: one plugin would add information from a bug tracking system, another would add documentation information, and so on. The consequence was that the global and project dashboards became a crazy quilt of both useless and useful information, with everything mixed in together in a big mess.

In the 5.x series, project dashboards were replaced by hardcoded pages dedicated to the use cases SonarQube is meant for: seeing the overall quality of a project on its home page, quickly identifying whether the leak is fixed and the reasons why it might not be, and digging into the details to know more about what’s going wrong. Following the same logic, the next LTS of SonarQube will get rid of global dashboards and widgets to provide pages designed to answer the needs of developers, technical leaders, project managers and executives – all this out of the box, without having to wonder what to configure.

Powerful project exploration with tags

When focusing on a given project, SonarQube offers everything you need to both get the big picture and dig into the details. When it comes to exploring the whole set of projects available on a SonarQube instance, the only entry point is the ageing “Measures” page. This page currently goes into too much detail (allowing you to query for files, for instance), with difficult-to-use filtering criteria.

The next LTS will replace this page with a brand-new “Projects” page to query projects using advanced filtering similar to what’s on the Issues page. Ultimately, it will support tags on projects. It should help answer questions like: what’s the distribution of “strategic” projects regarding security and reliability ratings? how do “offshore” projects perform in terms of maintainability?

Always up-to-date portfolios

The Governance product allows you to manage application portfolios, usually by mapping the organisational structure of a company. The executive-oriented high level indicators produced by Governance are currently updated once in a while, when a refresh is triggered by some external system (usually a CI job), independent of project analyses. The consequence is that, depending on the frequency of this externally-triggered refresh task, those high-level indicators are imprecisely synchronized with the current status of the relevant projects.

The version of Governance compatible with the next LTS will get rid of the need to trigger this refresh, and update portfolio indicators as soon as one of the underlying projects has been updated. This way, there is no need to set up an external process to trigger portfolio calculation, and no wondering if what you are seeing in SonarQube is up to date or not.

Excellent support of huge instances

Scalability

Horizontal scalability

One of the targets of the 5.x series was making sure SonarQube would scale vertically to house more projects on a single instance if given more space, more CPU, and more RAM. This was achieved thanks to the architectural changes which led to removing the DB connection from the Scanner side, and to adding Elasticsearch in front of the database. But vertical scalability necessarily has limits – namely those of the underlying hardware.

The next LTS will allow you to deploy SonarQube as a cluster of SonarQube nodes. You’ll be able to configure each node for one or more components of SonarQube (web server, compute engine and Elasticsearch node), based on your load. The first instance to benefit from this capability will be SonarQube.com, the SonarQube-based service operated by SonarSource.

Organizations

When talking about large instances, one topic that often comes up is how to efficiently and correctly handle the permissions for large numbers of users and projects. Let’s take the example of an IT department serving several independent business units: the business units might not share the same quality profiles (because they’re working with different technologies), and each one probably wants to define its own user groups, or make specific configurations to suit their needs. There’s currently no good way to manage this scenario, but in the next LTS, a new concept called organizations will let you define umbrellas that isolate sets of users and projects to achieve these goals. As with the ability to set up a cluster, SonarQube.com will be the first instance to benefit from this, so that users can group their projects together and customize settings or quality profiles for them.

Webhooks for Devops

Though not limited to big instances, webhooks will be especially useful to DevOps teams who operate complex ALM setups, increasing your ability to integrate SonarQube with existing infrastructure. For instance, freshly built binaries shouldn’t be deployed to production if they don’t pass the quality gate, right? With webhooks, you’ll be able to have SonarQube notify the build system of a project’s quality gate status so it can cancel or continue the delivery pipeline as appropriate.
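To make the idea concrete, here is a minimal sketch of the receiving side of such a webhook. The payload shape and field names (`qualityGate`, `status`) are purely illustrative, since the final webhook format wasn’t published at the time of writing:

```python
import json

def should_deploy(payload_json):
    """Gate a delivery pipeline on the quality gate status reported by a
    (hypothetical) SonarQube webhook payload."""
    payload = json.loads(payload_json)
    # Deploy only when the analyzed project passed its quality gate
    return payload.get("qualityGate", {}).get("status") == "OK"

print(should_deploy('{"projectKey": "my-app", "qualityGate": {"status": "OK"}}'))     # True
print(should_deploy('{"projectKey": "my-app", "qualityGate": {"status": "ERROR"}}'))  # False
```

In a real pipeline this check would sit in a small HTTP endpoint that the build server polls or that SonarQube calls directly once analysis completes.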

Target is mid-2017!

That’s all folks! The estimated time of arrival for the next SonarQube 6.x LTS is mid-2017. Expect other small but useful features to make their way along those big themes!

Categories: Open Source

Sponsor Profile – Pyxis Technologies

Agile Ottawa - Thu, 11/03/2016 - 14:30
Can you tell us a little about your company? Pyxis Technologies specializes in software development. Driven by Agility for over 15 years now, Pyxis services its clients with a complete customized offer: studio and its team of architects, designers, … Continue reading →
Categories: Communities

Is QA Seen as an Impediment in Your Organization? 

NetObjectives - Thu, 11/03/2016 - 11:37
Is QA (aka QC) seen as an impediment in your organization? If you answered yes to this question, then chances are your organization is in one of two camps: 1) QA is really an impediment or 2) QA is actually ensuring that your organization is releasing quality software. QA Is Really an Impediment Some organizations still have formal QA departments that execute testing as a separate phase from...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Bad Habits and Lean-Agile Transformation

NetObjectives - Thu, 11/03/2016 - 09:03
We like patterns at Net Objectives and in software architecture, we view them as solutions to recurring problems. Habits are patterns too—behavior patterns practiced in response to some context, process, or event—and they can become ingrained and involuntary in organizations that follow process (which is most organizations). Example organizational habits might...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Efficiency Rants and Raves: Twitter Chat Thursday

Johanna Rothman - Thu, 11/03/2016 - 00:07

I’m doing a Twitter chat November 3 at 4pm Eastern/8pm UK with David Daly. David posted the video of our conversation as prep for the Twitter chat.

Today he tweeted this: “How do you optimize for features? That’s flow efficiency.” Yes, I said that.

There were several Twitter rants about the use of the word “efficiency.” Okay. I can understand that. I don’t try to be efficient as much as I try to be effective.

However, I’ve discussed the ideas of resource efficiency and flow efficiency in several places:

And more. Take a look at the flow efficiency tag on my site.

Here’s the problem with the word, “efficiency.” It’s already in the management lexicon. We can’t stop people from using it. However, we can help them differentiate between resource efficiency (where you optimize for a person), and flow efficiency (where you optimize for features). One of the folks discussing this in the hashtag said he optimized for learning, not speed of features. That’s fine.

Flow efficiency optimizes for moving work through the team. If the work you want is learning, terrific. If the work you want is a finished feature, no problem. Both these require the flow through the team—flow efficiency—and not optimization for a given person.

I’ve mentioned this book before, but I’ll suggest it again. Please take a look at this book: This is Lean: Resolving the Efficiency Paradox.

If I want to change management, I need to speak their language. Right now, “efficiency” is part of their language. I want to move that discussion to helping them realize there is a difference between resource efficiency and flow efficiency.

I hope you decide to join us on the chat (which is about hiring for DevOps). I will be typing as fast as my fingers will go.

Categories: Blogs

The Seven Kanban Cadences

Kanbanery - Wed, 11/02/2016 - 19:32

The article The Seven Kanban Cadences originally appeared on Kanbanery.

“A regular cadence, or ‘heartbeat,’ establishes the capability of a team to reliably deliver working software at a dependable velocity. An organization that delivers at a regular cadence has established its process capability and can easily measure its capacity.”
– Mary Poppendieck

One of the (many) things that distinguishes Kanban from Scrum is that Kanban decouples cadences. That might need a little explanation. A cadence is a regular rhythm of activity. For example, every two weeks we hold a planning meeting. At the end of every sprint, we hold a sprint review meeting. Add a few more, and you get the drum beat of consistent productivity.

Imagine no cadences, but still a need for these meetings. “Things are falling apart. We haven’t held a retrospective in months. Maybe we should do that?” “I don’t have anything to work on, can we try to get everyone together for a planning meeting, please?” All meetings would become emergency-driven. Yuck.

But why do we plan every two weeks? Why does every sprint need exactly one retrospective, no more and no less, and why does it have to come at the end? By decoupling cadences, Kanban gives teams permission to try different rules, like a retrospective after every other sprint, or planning meetings every other Wednesday, because that’s when the stakeholders have time to get together. Or even, hold onto your hats, planning meetings every two days while keeping a two-week build iteration. Why? Maybe the industry is so dynamic and work items so small that two-week commitments don’t make sense. So that’s a quick overview of what cadences are and an argument for decoupling them. But that’s not what I’m writing about today. It’s just backstory.

David Anderson has written about the seven Kanban cadences, but no one (as far as I know, after a lot of Googling) has described them all neatly in one place. I was talking with another leader of the Kanban community and off the top of our heads, we could (working together) only name six of them. So here, in one place, are all seven cadences, starting with the three familiar from Scrum (planning, retrospectives, and the daily standup).

Standup Meeting What is it?

The standup meeting is usually the most frequent meeting, and it serves to keep the team on the same page about the state of affairs in the project. It addresses questions like who’s working on what, who needs help, and if any tasks are blocked.

Why do we do it?

The standup meeting serves the needs of the team for information that they use to make the best decisions about how to use their time. It’s the internal feedback loop for the team doing the work, but it is also helpful for stakeholders who like to know what’s going on and how they can help.

How do we do it?

It’s usually done standing up to keep the meeting short, but the format can vary dramatically, from the Scrum round of three questions to a right-to-left scan of the Kanban board for blockers and bottlenecks. There is a lot of room for innovations like Liz Keogh’s “What have I learned today?” format.

Replenishment Meeting What is it?

A pull system needs tasks in the input queue to avoid becoming starved. The replenishment planning meeting is the meeting in which we decide what those tasks will be. Scrum calls this the sprint planning meeting, but the format may differ and multiple stakeholders may be involved. In the first documented example of Kanban in action, David Anderson arranged regular replenishment meetings when he discovered that a team was having difficulty prioritizing incoming work from multiple managers, and so he organized bi-weekly teleconferences with those managers to prioritize the work together.

Why do we do it?

This is the point at which maybe becomes should. It’s the last step between the infinite possibilities and the system’s commitment point. This meeting takes the latest information from downstream feedback and market forces and determines what is the most important set of tasks to feed into the system. Done frequently and transparently, these meetings help downstream stakeholders learn to trust that what is promised will be delivered with some degree of regularity.

How do we do it?

All that matters is that the right people are on hand to make the best decisions with the right data. A replenishment meeting can look very different depending on the context. It can happen as often as daily or as rarely as yearly. How often it happens should be a function of the efficiency of feedback loops, the speed of delivery, and the dynamics of the environment into which the system is delivering value. It can even happen on an as-needed basis triggered by a minimal task limit in the input queue rather than on a regular schedule.

Operations Review What is it?

The operations review is a higher-level view of how the various teams/divisions/departments/tribes are collaborating as an organization.

Why do we do it?

We should all know by now the shortcomings of local optimization (improving a part of a system without considering the other parts). A single star team can’t save an organization with a poor delivery pipeline. Most systemic inefficiencies occur in handoffs and queues. In this meeting, managers of different components look for ways to improve the system as a whole.

How do we do it?

Using input from most of the other cadences in this blog post, the managers look at how the company as a whole is performing. How happy are clients? How profitable is the company? Have there been any changes in staff turnover? Where is the underutilized capacity? Based on these data, the team devises experiments to improve the flow efficiency and reduce variation in the entire system.

Delivery Planning Meeting What is it?

This is the meeting that acknowledges that most of us don’t deliver directly to the final customer. It’s the meeting that smooths out hand-offs between teams or departments.

Why do we do it?

The next downstream stage, our customers, may not appreciate finding a pile of work on their doorstep at random intervals. They likely will appreciate being involved in deciding how, when and what is delivered.

How do we do it?

Review the output of the daily standup meetings and the data on the board. Take into account any risks that arose in risk assessment and in the daily standup and look at what is ready to deliver and what is likely to be ready to deliver soon. With the input of the people who will be taking delivery, decide what to deliver, when and whether any training or handoff activities are required to ensure a smooth transfer of WIP (your team’s deliverables are the receiving team’s WIP). As a result of this meeting, some tasks in progress may change priority or class of service. For example, if the participants decide that an item in progress should be included in next Tuesday’s delivery and are confident that it can be, that item’s class of service may change from “standard” to “time-based.” It now has a deadline, and the team may change their pull and swarming behaviors accordingly.

Service Delivery Review What is it?

How well are we serving our clients? A service delivery review looks at a Kanban system from the point of view of the people who matter most, the intended beneficiaries of the service.

Why do we do it?

Team or department efficiency is wasted if the client is unsatisfied. The service delivery review, involving representatives of the end users of our outputs, explores customer satisfaction with all aspects of the process, including efficiency, communication, delivery against SLAs, and how well the resources of the team are being utilized. The goal is to improve customer satisfaction and to build trust through transparency.

How do we do it?

Review the last batch of work delivered, considering scope, quality, SLAs, and any other success criteria with the clients. Are they satisfied with what is being delivered, how frequently, and how it is being delivered? Are they content that you are making the best use of the resources available to you to deliver value to them? When things go wrong, are they happy with your reaction and communication about emerging issues, and are they convinced that your risk assessment process is sufficient to minimize downstream damage?

Risk Review What is it?

A risk review conversation can happen at any level of the organization, and probably should occur at all levels. Its purpose is to assess the likelihood of failing to deliver to expectations, either to downstream system components or to end users.

Why do we do it?

Identifying risks in advance and taking steps to mitigate them improves system predictability, which increases trust and profitability.

How do we do it?

The most basic level of risk review is to examine past failures, in the form of blocked tasks, re-work, and missed SLAs, and to identify the root causes and find ways to keep those things from happening again. More comprehensive risk planning includes speculation about possible future risks based on experience and input from all levels. Even more sophisticated tools can be brought into play, such as Anticipatory Failure Determination, which I wrote about last month.

Strategy Review What is it?

A strategy review examines changes in the market and questions whether our current operational goals are optimized to serve emerging needs.

Why do we do it?

So, we’re really efficient. A streamlined, well-oiled, fine-tuned machine. But are we doing the right thing? Has the right thing changed as a result of market conditions? Are we building the ultimate telegraph in the age of telephones? This meeting is to review the company strategy and ensure that our processes are delivering the value that best serves the greater strategic goal.

How do we do it?

Compare recent delivery timelines to market trends. Are we delivering efficiently enough to adapt? If there is a mismatch between our ability to make significant changes to our offerings and the pace of new market demands, then we must consider changing markets or finding ways to optimize our processes. The best people to consider such questions are company executives with input from marketing and client-facing business units like sales and customer service. The result of this meeting could be new guidelines for evaluating product ideas or KPIs that are better aligned with market expectations.

I have intentionally refrained from suggesting how often each of these things should happen. You should probably have team standup meetings more frequently than company-wide strategy reviews, but exactly when and how often you incorporate these cadences into your operations is a function of the dynamics of your context. I’ll just point out that there are two basic scheduling mechanisms to consider.

Time-based

Time-based cadences happen every day, every week, every second-Tuesday, quarterly or annually. This approach is good for situations in which there is a value in frequent updates (daily standups) or for those important but non-urgent things that might never happen otherwise (strategy reviews).

Event-driven

Nowhere is it written that all of these cadences need to happen at regular intervals. It might make sense to link some to events. For example, risk reviews could be done monthly, but might also be triggered by critical failures. Service delivery reviews might only be bi-annual when everything is going well, but can be triggered automatically by a failure to meet SLAs in a critical area. Replenishment meetings could happen weekly, or they could be triggered by a certain minimum number of items in an input queue.
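That last kind of trigger is easy to picture as a tiny sketch. The threshold of three items below is purely illustrative; in practice it would be tuned to your own flow:

```python
def needs_replenishment(input_queue, min_items=3):
    """Event-driven trigger: signal that a replenishment meeting is due as
    soon as the input queue drops below the minimum (hypothetical threshold)."""
    return len(input_queue) < min_items

print(needs_replenishment(["task-a"]))                      # True: queue is starving
print(needs_replenishment(["task-a", "task-b", "task-c"]))  # False: enough work queued
```

A board tool could evaluate such a check on every pull and notify the stakeholders, replacing the fixed weekly slot with an on-demand meeting.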

That’s a lot of meetings!

David Anderson has noted that in proposing these seven cadences, he in no way intends to add seven meetings to everyone’s busy work lives. Since doing the right thing well and finding ways to get better are pretty fundamental to knowledge work, you probably already have some existing meetings that serve all of these purposes. The value of identifying the seven cadences is to look at how we’re getting this stuff done now, to spot any gaps, and to ask whether we have established a regular cadence that makes sense or whether some of these functions are still being handled in an ad hoc fashion rather than proactively.

Categories: Companies

Targetprocess v.3.10.3: Miscellaneous features and minor fixes

TargetProcess - Edge of Chaos Blog - Wed, 11/02/2016 - 17:00
Minor features
  • Added the ability to batch update 'date' custom fields.
  • Improved live updates performance.
  • Improved add time dialog: Spent Time numeral is selected by default.
Fixed Bugs
  • Test Steps editor layout fixed.
  • Fixed custom reports to properly show comments with HTML tags.
Categories: Companies

The Simple Leader: Plan, Do, Study, Adjust

Evolving Excellence - Wed, 11/02/2016 - 10:14

This is an excerpt from The Simple Leader: Personal and Professional Leadership at the Nexus of Lean and Zen


Excellent firms don’t believe in excellence—
only in constant improvement and constant change.
– Tom Peters

The PDSA (Plan-Do-Study-Act) cycle is the core component of continuous improvement programs. You may have heard it called PDCA (Plan-Do-Check-Act)—and they are very similar—but I have come to prefer PDSA, with the A standing for Adjust, for reasons I’ll explain shortly. Understanding the cycle and its application to continuous improvement is critical for leadership. But first, a history lesson.

In November 2010, Ronald Moen and Clifford Norman wrote a well-researched article in Quality Progress that detailed the history behind PDCA and PDSA. The cycles have their origins in 1939, when Walter Shewhart created the SPI (Specification-Production-Inspection) cycle. The SPI cycle was geared toward mass production operations, but Shewhart soon realized the potential application of the scientific method to problem solving, writing that “it may be helpful to think of the three steps in the mass production process as steps in the scientific method. In this sense, specification, production and inspection correspond respectively to hypothesizing, carrying out an experiment and testing the hypothesis. The three steps constitute a dynamic scientific process of acquiring knowledge.”

At the time, W. Edwards Deming was working with Shewhart to edit a series of Shewhart’s lectures into what would become Shewhart’s Statistical Method from the Viewpoint of Quality Control, published in 1939. Deming eventually modified the cycle and presented his DPSR (Design-Production-Sales-Research) cycle in 1950, which is now referred to as the Deming cycle or Deming wheel. According to Masaaki Imai, Toyota then modified the Deming wheel into the PDCA (Plan-Do-Check-Act) cycle and began applying it to problem solving.

In 1986, Deming again revised the Shewhart cycle, with another modification added in 1993 to make it the PDSA (Plan-Do-Study-Act) cycle, or what Deming called the Shewhart cycle for learning and improvement. (Deming never did like the PDCA cycle. In 1990, he wrote Ronald Moen, saying “be sure to call it PDSA, not the corruption PDCA.” A year later he wrote, “I don’t know the source of the cycle that you propose. How the PDCA ever came into existence I know not.”)

The PDCA cycle has not really evolved in the past 40 years and is still used today at Toyota. The PDSA cycle continues to evolve, primarily in the questions asked at each stage. Although both embody the scientific method, I personally prefer the PDSA cycle, because “study” is more intuitive than “check.” Deming himself had a problem with the term “check,” as he believed it could be misconstrued as “hold back.” I also prefer “Adjust” to “Act,” as it conveys a better sense of ongoing, incremental improvement. Just be aware that some very knowledgeable and experienced people prefer the pure PDCA!

Let’s take a look at each component of PDSA:

  • Plan: Ask objective questions about the process and create a plan to carry out the experiment: who, what, when, where, and a prediction.
  • Do: Execute the plan, make observations, and document problems and unexpected issues.
  • Study: Analyze the data, compare it to expectations, and summarize what was learned.
  • Adjust: Adopt and standardize the new method if successful; otherwise, identify changes to be made in preparation for starting the whole cycle over again.

It’s important to realize that the PDSA cycle is valuable at both process and organizational levels, something we have already discussed (in slightly different terms) in this book. For example, you start the plan stage of the PDSA cycle while evaluating your current state and creating a hoshin plan. As you execute the annual and breakthrough objectives of the hoshin plan, you move into the “do” quadrant. On a regular basis, you evaluate the hoshin plan and the results of the goals (study), then modify it as necessary for the next revision of the hoshin plan (adjust).

Throughout the rest of this section, I will discuss various problem-solving and improvement tools and methods for process-scale improvements. Note that they all follow the same PDSA cycle.

Categories: Blogs

Neo4j: Find the intermediate point between two lat/longs

Mark Needham - Wed, 11/02/2016 - 00:10

Yesterday I wrote a blog post showing how to find the midpoint between two lat/longs using Cypher which worked well as a first attempt at filling in missing locations, but I realised I could do better.

As I mentioned in the last post, when I find a stop that’s missing lat/long coordinates I can usually find two nearby stops that allow me to triangulate this stop’s location.

I also have train routes which indicate the number of seconds it takes to go from one stop to another, which allows me to indicate whether the location-less stop is closer to one stop than the other.

For example, consider stops a, b, and c where b doesn’t have a location. If we have these distances between the stops:

(a)-[:NEXT {time: 60}]->(b)-[:NEXT {time: 240}]->(c)

it tells us that point ‘b’ is actually 0.2 of the distance from ‘a’ to ‘c’ rather than being the midpoint.

There’s a formula we can use to work out that point:

a = sin((1−f)⋅δ) / sin δ
b = sin(f⋅δ) / sin δ
x = a ⋅ cos φ1 ⋅ cos λ1 + b ⋅ cos φ2 ⋅ cos λ2
y = a ⋅ cos φ1 ⋅ sin λ1 + b ⋅ cos φ2 ⋅ sin λ2
z = a ⋅ sin φ1 + b ⋅ sin φ2
φi = atan2(z, √(x² + y²))
λi = atan2(y, x)
 
δ is the angular distance d/R between the two points.
φ = latitude
λ = longitude

Translated to Cypher (with mandatory Greek symbols), it reads like this to find the point 0.2 of the way from one point to another:

with {latitude: 51.4931963543, longitude: -0.0475185810} AS p1, 
     {latitude: 51.47908, longitude: -0.05393950 } AS p2
 
WITH p1, p2, distance(point(p1), point(p2)) / 6371000 AS δ, 0.2 AS f
WITH p1, p2, δ, 
     sin((1-f) * δ) / sin(δ) AS a,
     sin(f * δ) / sin(δ) AS b
WITH radians(p1.latitude) AS φ1, radians(p1.longitude) AS λ1,
     radians(p2.latitude) AS φ2, radians(p2.longitude) AS λ2,
     a, b
WITH a * cos(φ1) * cos(λ1) + b * cos(φ2) * cos(λ2) AS x,
     a * cos(φ1) * sin(λ1) + b * cos(φ2) * sin(λ2) AS y,
     a * sin(φ1) + b * sin(φ2) AS z
RETURN degrees(atan2(z, sqrt(x^2 + y^2))) AS φi,
       degrees(atan2(y,x)) AS λi
╒═════════════════╤════════════════════╕
│φi               │λi                  │
╞═════════════════╪════════════════════╡
│51.49037311149128│-0.04880308288561931│
└─────────────────┴────────────────────┘

A quick sanity check plugging in 0.5 instead of 0.2 finds the midpoint, which I was able to verify against yesterday’s post:

╒═════════════════╤═════════════════════╕
│φi               │λi                   │
╞═════════════════╪═════════════════════╡
│51.48613822097523│-0.050729537454086385│
└─────────────────┴─────────────────────┘
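For readers who want to check the formula outside the database, here is a sketch of the same interpolation in plain Python. A haversine computation stands in for Neo4j’s distance() function (both give the angular distance δ on a sphere), so the results should agree with the Cypher output above to within floating-point noise:

```python
import math

def intermediate_point(lat1, lon1, lat2, lon2, f):
    """Return (lat, lon) in degrees for the point a fraction f of the way
    along the great circle from (lat1, lon1) to (lat2, lon2)."""
    phi1, lam1 = math.radians(lat1), math.radians(lon1)
    phi2, lam2 = math.radians(lat2), math.radians(lon2)
    # Angular distance δ via the haversine formula (equivalent to distance()/R)
    h = (math.sin((phi2 - phi1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin((lam2 - lam1) / 2) ** 2)
    delta = 2 * math.asin(math.sqrt(h))
    # Interpolation weights, then back from Cartesian to lat/long
    a = math.sin((1 - f) * delta) / math.sin(delta)
    b = math.sin(f * delta) / math.sin(delta)
    x = a * math.cos(phi1) * math.cos(lam1) + b * math.cos(phi2) * math.cos(lam2)
    y = a * math.cos(phi1) * math.sin(lam1) + b * math.cos(phi2) * math.sin(lam2)
    z = a * math.sin(phi1) + b * math.sin(phi2)
    return (math.degrees(math.atan2(z, math.sqrt(x * x + y * y))),
            math.degrees(math.atan2(y, x)))

# f = 0.2 comes from the travel times: 60 / (60 + 240)
print(intermediate_point(51.4931963543, -0.0475185810, 51.47908, -0.05393950, 0.2))
```

Plugging in f = 0.2 reproduces the Cypher result (≈ 51.49037, −0.04880), and f = 0.5 reproduces the midpoint from yesterday’s post.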

That’s all for now!

Categories: Blogs

Falling For Agile Amped – Greatest Hits from Last Week’s Events!

BigVisible Solutions :: An Agile Company - Tue, 11/01/2016 - 18:00

SolutionsIQ made its fall event run last week, traveling first to Denver, Colorado, for SAFe Summit and then to our southeastern headquarters in Charlotte, North Carolina, for Southern Fried Agile. In just three days, we captured 30 brand new podcasts with a whole host of guests, some returning and others gracing the stage with us for the first time. Here are just a few of our favorites from each event.

Photo credit: Scott Frost (via Twitter)

SAFe Summit

This conference was all about the Scaled Agile Framework, its founding body Scaled Agile, delivery partners like SolutionsIQ and, of course, the many clients who are benefitting from this Agile implementation approach. Agile Amped sat down with many of the key people in Scaled Agile to talk about new developments in SAFe 4.0 (and even SAFe 4.1???) as well as their aspirations for the future.

Southern Fried Agile

This event is special to us for a couple of reasons: it’s happening right in Charlotte, where we have offices, and our own Agile consultant Neville Poole was the chairperson. We were astounded by the turnout, which was about 700 people or more! Needless to say, the momentum of Agile is growing everywhere, not least in the South! Here are a couple of our favorites.

What kind of podcasts would you like to see? Let us know in the comments below! And always —

 

The post Falling For Agile Amped – Greatest Hits from Last Week’s Events! appeared first on SolutionsIQ.

Categories: Companies
