
Feed aggregator

Correlation does not imply causation

Agile World - Venkatesh Krishnamurthy - Thu, 12/11/2014 - 15:26



After every Agile conference, agilists return to work with tons of new ideas. They get excited about these ideas and look forward to rolling them out sooner rather than later. However, based on my past experience, I have realized that many ideas can do more harm than good. This is not because the ideas we hear at conferences aren’t good; rather, what we assume is the “idea” behind a success may not be the “one” actually causing the success.

Popular ideas borrowed in the Agile community include Spotify’s tribes and guilds, Google’s 20% innovation time, and many more. In this advisor, I am challenging readers to think before they act and to understand the hidden secrets of success.

Here is a link to the complete article.

This advisor was written to encourage Agile conference attendees to look more deeply at the “causation” aspect rather than the “correlation.”

Categories: Blogs

Generation Y Thinks Differently Than You Think

Scrum 4 You - Thu, 12/11/2014 - 08:26

In reports about Generation Y (for example Warum die Generation Y ständig unzufrieden ist), the same pretty image of the unicorn keeps turning up. Put briefly: we Y-ers supposedly believe we are incredibly beautiful and good people who can achieve anything. On the other side stands the poor business world, which doesn’t know what to do, because demographic change is leading to a shortage of personnel. So how should this Generation Y, with its inflated expectations, best be baited in order to secure access to scarce personnel resources in the future?

Whenever I read or hear something like this, I feel sad. I ask myself whether anyone has really taken the trouble to engage with this generation in any depth. Here are a few thoughts from my own observations, with no claim to completeness:

  1. Yes, Generation Y grew up with the idea that it could achieve anything, and it was given more freedom and more resources than Generation Me.
  2. These resources and this freedom led this generation to think hard about what it wants to do – not about what it has to do. This kind of freedom also brings a certain problem: if you can do anything, the question becomes what the very best option is, and so every decision becomes a mass murder of all the remaining possibilities. Generation Y will always ask itself whether it has made the best of itself, or whether it should have decided on something else.
  3. On top of the freedom of choice and the availability of resources comes the development of the Internet, which we got to witness firsthand. The Internet had three main effects:
    • As soon as we took up an activity, we knew that somewhere in the world there was someone who had already done it better, more beautifully, and more perfectly by the age of 10 – no matter what it was.
    • Being connected gave rise to a kind of doomsday mood. Wars, pollution, resource scarcity, and so on have been our companions since childhood and accompany us wherever we go.
    • The democratization of knowledge through the Internet also showed us that we as human beings have power and responsibility. If we don’t act, nobody else will take care of making something better on this planet.
  4. Globalization has sparked an insane level of competition. There are a few fields of study whose graduates can pick and choose their employers, but for everyone else, market demand speaks a very different language. Business fluency in two languages is a basic requirement, a degree with top grades goes without saying, and then, please, five years of professional experience in the relevant field on top. Oh, and of course no older than the golden age of 30. Obviously, right …

To make matters more difficult, Generation Y – with its excellent education, analytical competence, and practical experience gathered since the first years of study – is able to see behind the scenes of propagated claims. To put it plainly: we are critical, we don’t let anyone fool us, and we like to form our own opinions. That the media cannot be trusted is something we have long known.
With that in mind, I have noticed three types of Generation Y in the more highly educated segment:

Type A – “So What”

Lethargy has set in: you accept what is, work as much as necessary, and find your balance in your free time. The constant trickle of media is accepted as a way to drown out any thought of what might be hiding behind this world. At best you can tolerate your company, but since you are aware that you are rarely told the truth, you prefer not to look behind the scenes. Instead, you get yourself a dog and a girlfriend or boyfriend and make the best of your private life. Work is accepted as an annoying necessity, and inner resignation becomes a permanent state.

Type B – “Beat The System”

Some of the best-educated and best-connected people know exactly how the game works and try the sales world’s “AUA method”: in the spirit of “Anhauen – Umhauen – Abhauen” (roughly: pitch, impress, vanish), they try to climb the career ladder in the shortest possible time and to exit the system as soon as possible with plenty of money in the bank. Whether that will succeed, or whether the wheels of the system will lead to a different outcome, remains to be seen.

Type C – “Better Do Something Than Nothing”

This type has ultimately faced up to his responsibility. He knows that the world will only change with his help. He has long asked himself what really matters to him and where his competencies lie. He works on them continuously and is willing to make sacrifices in other areas of his life, because he wants to use his abilities to move something (for the better). These people think carefully about where they get involved, pursue their passions, and are critical and very attentive when it comes to a company’s purpose and truthfulness.

In this sense, it is up to your company to decide which people it wants to attract and who should commit themselves to it. Type A doesn’t need much and won’t give you much back either. Type B will define his point of reference through salary, but be aware that he will sell you off just as quickly as you bought him. Type C is presumably the holy grail you are looking for.

If you want him, your company will have to be authentic. One that stands by what it says. A company staffed by people who are inspiring, who are not afraid to admit their mistakes, and who are willing to contribute their honest, best-judgment share to a better world. Apart from a certain emotional intelligence (aka gut feeling), we Y-ers can find more than enough information on employer review sites to know who is serious and who is not. So, in your own interest, don’t put your company’s energy and resources into covering up weaknesses – work honestly on authenticity instead.

If you want to go even further, take a step toward democratization. Ricardo Semler has demonstrated it with Semco: quite apart from the revenue growth, he can draw from an inexhaustible pool of the best graduates.

It is up to us, Generation Y, to decide how things should continue with this world. But it is up to those responsible in companies to decide whether they want to help us make the world a better place. If so, you can be sure of our loyalty. If not, we will found our own companies to do it. Because somebody has to.

Categories: Blogs

Relationship of Cycle Time and Velocity

George Dinwiddie’s blog - Thu, 12/11/2014 - 04:40

I sometimes see clashes between Kanban proponents and their Scrum counterparts. Despite the disagreements, both of these approaches tend toward the same goals and use many similar techniques. Karl Scotland and I did some root cause analysis of practices a few years back and came to the conclusion that there were a lot of similarities between Kanban and Scrum [as the poster-child of iterative agile] when viewed through that lens. I also noticed that while Scrum explicitly focuses on iterations as a control mechanism, Scrum teams tend to get into trouble when they ignore Work In Progress (WIP). Conversely, while Kanban explicitly focuses on WIP, Kanban teams tend to get into trouble when they ignore Cadence.

A Twitter conversation I was in revolved around Cycle Time and Velocity. Since this is a topic that’s come up before, I thought it would be valuable to describe it more fully. Again, I find there to be more similarities than differences between Kanban (which uses Cycle Time) and Scrum (which uses Velocity) in terms of predicting when a given amount of work will be done, or how much work will be done by a given time.

Cycle Time is the amount of time it takes for a single unit of work to progress from one identified point in the value stream to another. Since the term originated in manufacturing, it assumes that the units of work are essentially identical, and the variability of cycle time is low. The concept also assumes that one can relatively easily produce multiple units in parallel, increasing production capacity and reducing the average cycle time by a factor equal to the number of parallel streams (assuming they are all equally efficient). These assumptions can work with knowledge work like software development, but we do need to be careful that we’re not violating them.

Velocity is the amount of work that can be accomplished in a given unit of time. Velocity is often expressed in “story points” which take the perceived difficulty of a story into account. This generally adds noise to the data, so you’ll likely find it easier to work with a count of stories, instead. Even better, we can split our User Stories until they are of roughly equal size, and call them all “one point” stories. Even if you’re using story estimates for your tracking, then you can easily record the story count, too. If we use multiple teams to produce multiple units in parallel, then the velocities (in story count) may be added to give the overall capacity. (I recommend against adding story points from multiple teams.)

Given that Cycle Time is Time per Work, and Velocity is Work per Time, by adjusting for units we see that these are reciprocals of each other. They are related in the same way that wavelength and frequency are in physics.

If our average cycle time is 2.5 days per story, then the velocity for a two week cadence (10 working days) is 4. Similarly, if a team produces 15 stories in 10 working days, then the cycle time is 2/3 of a day.
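To make the conversion concrete, here is a minimal sketch in Java; the class and method names are mine, and the numbers are simply the examples from the paragraph above:

public class CycleTimeVelocity {
    // Velocity = cadence length / average cycle time (stories per cadence).
    static double velocity(double cycleTimeDaysPerStory, double cadenceDays) {
        return cadenceDays / cycleTimeDaysPerStory;
    }

    // Cycle time = cadence length / velocity (days per story).
    static double cycleTime(double storiesPerCadence, double cadenceDays) {
        return cadenceDays / storiesPerCadence;
    }

    public static void main(String[] args) {
        System.out.println(velocity(2.5, 10));  // 4.0 stories per two-week cadence
        System.out.println(cycleTime(15, 10));  // 0.666... of a day per story
    }
}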

For most decision making, people use average cycle time rather than the cycle time of individual units of work. Velocity will always average over the length of a sprint, at minimum. You can, of course, measure cycle times of individual stories. This will enable calculating or eyeballing the variability.

I was asked why I would want to make such a conversion. I can think of four reasons off the top of my head.

1.    To look at a situation from a different perspective to gain insights.
2.    To understand someone speaking from a different context.
3.    To work with organizations where they are, rather than expecting them to conform to my preferences.
4.    To reason using the data that’s available, rather than waiting to collect different data.

I don’t mind which framework people use, but I care that they get the benefits of using it. Sometimes this requires borrowing concepts from another framework to understand what you’re seeing or make a needed adjustment.

Categories: Blogs

Rally and Dean on Scaled Agile: Get a Simplified System View

Rally Agile Blog - Wed, 12/10/2014 - 23:29

Everyone knows you need more than a whiteboard and stickies to practice Agile at scale. But the tool you use isn’t the only thing that matters: as Dean Leffingwell has said, “A fool with a tool is still a fool.”

The true value of Agile comes at scale, where it delivers benefits more broadly across your organization. Imagine how improved time to market, reduced development costs, and higher quality software will impact the goals your entire organization is trying to achieve.

Research studies have shown that, on average, Agile methods yield 470% better ROI than traditional waterfall methods. This kind of return makes a strong case for investing in scaled Agile, yet this is exactly where many organizations fall down. Projects and programs aligned to an organization’s strategy are successfully completed at nearly twice the rate of those that aren’t aligned, yet this kind of alignment doesn’t typically happen with stickies and a whiteboard.

So: how do you execute a program that’s aligned with business needs and customer value? How do you connect thousands of user stories, hundreds of features, and dozens of epics? How do you give everyone visibility so they can operate with velocity?

You need a simplified system view.

Check out these short videos with Scaled Agile Institute’s Dean Leffingwell and Rally VP of Product Management, Ryan Polk, to find out how simplified system views help people do the right work: fast, and at scale.

Steve Wolfe
Categories: Companies

Good Enough

Tyner Blain - Scott Sehlhorst - Wed, 12/10/2014 - 21:01

View of diminishing returns

We hear a lot about building products which are “good enough” or “just barely good enough.” How do we know what “good enough” means for our customers?  No one really tells us.

Different Perspectives of Good Enough

There are several important ways to think about a product being good enough – for this article, we will limit the context for discussion to “good enough to ship to customers” or “good enough to stop making it better (for now).”  Determining good enough informs the decision to ship or not.  Otherwise, this is all academic.

There are several perspectives on good enough which are important – but they don’t help product managers enough.  The body of work seems to be focused on aspects of goodness which don’t help product managers make the prioritization decisions that inform their roadmaps.  They are necessary but not sufficient for product managers.  Here are some pointers to some great stuff, before I dive into what I feel is the missing piece.

  • Good enough doesn’t mean you can just do 80% of the coding work you know you need to do, and ship the product – allowing technical debt to pile up.  Robert Lippert has an excellent article about this. Technical debt piles up in your code like donuts pile up on your waistline. This is important, although it only eventually affects product management as the code base becomes unwieldy and limits what the team can deliver – and increases the cost and time of delivery.
  • Be pragmatic about perfectionism when delivering your product.  Steve Ropa has an excellent article about this.  As a fellow woodworker, his metaphors resonate with me.  The key idea is, as a craftsman, to recognize when you’re adding cost and effort to improve the quality of your deliverable in ways your customer will never notice.  This is important, and can affect product managers because increasing the cost of deliverables affects the bang-for-the-buck calculations, and therefore prioritization decisions.

With the current mind share enjoyed by Lean Startup and minimally viable products (MVP), there is far too much shallow analysis from people jumping on the bandwagon of good ideas without fully understanding them.  Products fail because of misunderstanding of the phrase minimum viable product.

  • Many people mis-define product in MVP to mean experiment.  Max Dunn has an excellent article articulating how people conflate “running an experiment” with “shipping product” and has a good commentary on how there isn’t enough guidance on the distinction.  This is important for product managers to understand.  Learning from your customers is important – but it doesn’t mean you should ship half-baked products to your market in order to validate a hypothesis.
  • MVP is an experimentation process, not a product development process. Ramli John makes this bold assertion in an excellent article.  Here’s a slap in the face which may just solve the problem, if we can get everyone to read it.  MVP / Lean Startup is a learning process fueled with hypothesis testing, following the scientific method.  Instead of trying to shoehorn it into a product-creation process, simply don’t.  Use the concept to drive learning, not roadmaps.
  • “How much can we have right now?” is important to customers.  Christina Wodtke has a particularly useful and excellent article on including customers in the development of your roadmap.  “Now, next, or later” is an outstanding framework for simultaneously getting prioritization feedback and managing the expectations of customers (and other stakeholders) about delivery.  My concern is that in terms of guidance to product managers, this is as good as it gets.  Most people manage “what and when” but not “how effectively.”
Three Perspectives

Three perspectives on product creation

There are three perspectives on how we approach defining good enough when making decisions about investment in our products.  The first two articles by Robert and Steve (linked above) address the concerns about when the team should stop coding in order to deliver the requested feature.  There is also the valid question of whether a particular design – to which the developers are writing code – is good enough.  I’ll defer the conversation about knowing when the design of how a particular capability will be delivered (as a set of features, interactions, etc.) is good enough for another time.  [I’m 700 words into burying the lead so far].

For product managers, the most important perspective is intent.  What is it we are trying to enable our customers to do?  Christina’s article (linked above) expertly addresses half of the problem of managing intent. Note that this isn’t a critique of her focus on “what and when.”

We need to address the question “how capable for now?”

How Capable Must it Be for Now?

Finally.  I wrote this article because everyone just waves their hand at this mystical concept of getting something out there for now and then making it better later.  But no one provides us with any tools for articulating how to define “good enough.”  Several years ago I wrote about delivering the not-yet-perfect product and satisficing your customers incrementally – but I didn’t provide any tools to help define good enough from an intent perspective.

Once we identify a particular capability to be included in a release (or iteration), we have to define how capable the capability needs to be.  Here’s an example of what I’m trying to describe:

  • We’ve decided that enabling our target customer to “reduce inventory levels” is the right investment to make during this release.
  • How much of a reduction in inventory levels is the right amount to target?

That’s the question.  What is good enough?

Our customer owns the definition of good enough.  And Kano analysis gives us a framework for talking about it.  When looking at a more is better capability, from the perspective of our customers, increases in the capability of the capability (for non-native English speakers, “increasing the effectiveness of the feature” has substantially the same meaning) increase the value to them.

diminishing returns in a kano

We can deliver a product with a level of capability anywhere along this curve.  The question is – at what level is it “good enough?”

good enough vs. more-is-better

Once we reach the point of delivering something which is “good enough,” additional investments to improve that particular capability are questionable – at least from the perspective of our customers.

Amplifying the Problem

Switch gears for a second and recall the most recent estimation and negotiation exercise you went through with your development team.  For many capabilities, making it “better” or “more” or “faster” also makes it more expensive.  “Getting search results in 2 seconds costs X, getting results in 1 second costs 10X.”

As we increase the capability of our product, we simultaneously provide smaller benefit to our customers at increasingly higher cost.  This sounds like a problem on a microeconomics final exam.  A profit-maximizing point must exist somewhere.
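As a rough illustration of that profit-maximizing point, here is a small sketch in Java. The value and cost curves are hypothetical shapes chosen only to show the mechanics – a concave (diminishing-returns) value curve against a convex cost curve – not curves taken from any real product:

public class ProfitMax {
    public static void main(String[] args) {
        double bestLevel = 0;
        double bestProfit = Double.NEGATIVE_INFINITY;
        // Scan capability levels and keep the one where value minus cost peaks.
        for (double q = 0.1; q <= 10.0; q += 0.1) {
            double value = 100 * Math.log(1 + q); // diminishing returns to the customer
            double cost = 10 * q * q;             // each increment costs more to deliver
            double profit = value - cost;
            if (profit > bestProfit) {
                bestProfit = profit;
                bestLevel = q;
            }
        }
        System.out.printf("Profit-maximizing capability level: %.1f%n", bestLevel);
    }
}

Past that peak, each additional increment of capability costs more than the value it returns.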

An Example

Savings from driving a more fuel efficient car is a good example for describing diminishing returns. Apologies to people using other measures and currencies.  The chart below shows the daily operating cost of a vehicle based on some representative values for drivers in the USA.

diminishing returns of benefits of fuel efficiency

Each doubling of fuel efficiency sounds like a fantastic improvement in a car.  80 MPG is impressively “better” than 40 MPG from an inside-out perspective.  Imagine the engineering which went into improving (or re-inventing) the technology to double the fuel efficiency.  All of that investment to save the average driver $1 per day – less than $2,000 over the average length of car ownership in the USA.

How much will a consumer pay to save that $2,000?  How much should the car maker invest to double fuel efficiency, based on how much they can potentially increase sales and/or prices? An enterprise software rule of thumb would suggest the manufacturer could raise prices between $200 and $300.  If the vendor’s development budget were 20% of revenue, they would be able to spend $40 – $60 (per anticipated car sold) to fund the dramatic improvement in capability.
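Here is the back-of-the-envelope arithmetic in code. The miles per day, fuel price, and ownership period are assumptions I have picked to reproduce the rough figures above; they are not numbers from the article:

public class FuelSavings {
    public static void main(String[] args) {
        double milesPerDay = 36;          // assumed average daily driving
        double pricePerGallon = 2.20;     // assumed fuel price
        double ownershipDays = 5.5 * 365; // assumed average length of ownership

        double dailyCostAt40 = milesPerDay / 40 * pricePerGallon; // ~$1.98 per day
        double dailyCostAt80 = milesPerDay / 80 * pricePerGallon; // ~$0.99 per day
        double dailySavings = dailyCostAt40 - dailyCostAt80;      // ~$1 per day

        System.out.printf("Daily savings: $%.2f%n", dailySavings);
        System.out.printf("Savings over ownership: $%.0f%n", dailySavings * ownershipDays);
    }
}

Under these same assumptions, doubling efficiency again (80 to 160 MPG) would save only another ~$0.50 per day – the diminishing returns the chart describes.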

One Step Closer

What good enough means – precisely, for your customer, for a particular capability of a particular product, given your product strategy – is unique.  There is no one-size-fits-all answer.

There is also no unifying equation which applies for everyone.  Even after you build a model which represents the diminishing returns to your customers of incremental improvement, you have to put it in context.  What does a given level of improvement cost, for your team, working with your tech stack?  How does improvement impact your competitive position – both with respect to this capability and overall?

You have to do the customer-development work and build your understanding of how your markets behave and what they need.

At least with the application of Kano analysis, you have a framework for making informed decisions about how much you ultimately need to do, and how much you need to do right now.  As a bonus, you have a clear vehicle for communicating decisions (and gaining consensus) within your organization.

Categories: Blogs

When Should You Move from Iterations to Flow?

Johanna Rothman - Wed, 12/10/2014 - 15:45

I’m writing part of the program management book, talking about how you need to keep everything small to maintain momentum. Sometimes, to keep your work small, teams move from iterations to flow.

Here are times when you might consider moving from iteration to flow:

  • The Product Owner wants to change the order of features in the iteration for business reasons, and you are already working in one- or two-week iterations. Yes, you have that much change.
  • You feel as if you have a death march agile project. You lurch from one iteration to the next, always cramming too much into an iteration. You could use more teams working on your backlog.
  • You are working on too many projects in one iteration. No one is managing the project portfolio.

This came home to me when I was coaching a program manager working on a geographically distributed program in 2009. One of the feature teams was responsible for the database that “fed” all the other feature teams. They had their own features, but the access and what the database could do were centralized in one database team. That team tried to work in iterations. They had small, one- or two-day stories. They did a great job meeting their iteration commitments. And, they always felt as if they were behind.

Why? Because they had requests backed up. The rank of the requests into that team changed faster than the iteration duration.

When they changed to flow, they were able to respond to requests for the different reports, access, or whatever the database needed to do, much faster. They were no longer a bottleneck on the program. Of course, they used continuous integration for each feature. Every day, or every other day, they updated the access into the database, or what the database was capable of doing.

The entire program regained momentum.

This is a simplified board. I’m sure your board will look different.

When you work in flow, you have a board with a fixed set of Ready items (the team’s backlog), and the team always works on the top-ranked item first. Depending on the work in progress limits, the team might take more than one item off the Ready column at a time.

The Product Owner has the capability to change any of the items in the Ready column at any time. If the item is large, the team will spend more time working on that item. It is in the Product Owner’s and the team’s interest to learn how to make small stories. That way, work moves across the board fast.

If you use a board something like this, combined with an agile roadmap, the team still has the big picture of what the product looks like. Many of us like to know what the big picture is. And we see from the board what we are working on in the small. However, we don’t need to do iteration planning. We take the next item off the top of the Ready list.
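As a minimal sketch of this pull policy (the class and method names are mine, purely illustrative): the team pulls the top-ranked Ready item only while under the WIP limit, and the Product Owner is free to reorder the Ready column at any time.

import java.util.ArrayDeque;
import java.util.Deque;

public class FlowBoard {
    private final Deque<String> ready = new ArrayDeque<>();      // ranked backlog, top item first
    private final Deque<String> inProgress = new ArrayDeque<>();
    private final int wipLimit;

    public FlowBoard(int wipLimit) { this.wipLimit = wipLimit; }

    public void addToReady(String story) { ready.addLast(story); }

    // The Product Owner can reorder Ready at any time, e.g. move an item to the top.
    public void expedite(String story) { ready.addFirst(story); }

    // Pull the top-ranked item, but only while under the WIP limit.
    public String pull() {
        if (inProgress.size() >= wipLimit || ready.isEmpty()) return null;
        String story = ready.pollFirst();
        inProgress.addLast(story);
        return story;
    }

    public static void main(String[] args) {
        FlowBoard board = new FlowBoard(2);
        board.addToReady("New report");
        board.addToReady("Access change");
        board.expedite("Urgent schema fix");
        System.out.println(board.pull()); // Urgent schema fix
    }
}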

There is no One Right Answer as to whether you should move from iteration to flow. It depends on your circumstances. Your Product Owner needs to write stories that are small enough that the team can complete them and move on to another story. Agile is about the ability to change, right? If the team is stuck on a too-large story, it’s just as bad as being stuck in an iteration, waiting for the iteration to end.

However, if you discover, especially if you are a feature team working in a program, that you need to change what you do, or the order of what you do more often than your iterations allow, consider moving to flow. You may decide that iterations are too confining for what you need.

Categories: Blogs

the reward for good work …

Derick Bailey - new ThoughtStream - Wed, 12/10/2014 - 13:00


“The reward for good work, is more work.” – Justin Gregory

It’s a phrase that I hear rather often, working with Justin, as we plow through yet another set of requirements, another project, another bug, another … the list is never-ending, and neither is the work. And that’s a good thing.

Dreading The Job

Far too often in my career, I have found that I was dreading the work that I was tasked with. It was tedious, error prone, difficult, not within my area of specialty / expertise, or whatever excuse I had. And far too often in my career, I found that as soon as I had rushed through the work, verified that it was correct and pushed the results out to where they needed to be, I was given more of that work. And more. And more.

And there was good reason for that.

Professionalism

It’s not that I was better at doing the work than others. It’s that I typically did good work, even when the work was not something I wanted to do. Sure, there were (are) exceptions to this. But in general, my sense of professionalism drives me to put at least a moderate amount of effort and quality into what I produce.

I’m certainly not alone in my efforts and my ability to get things done. There are countless people in any given job that will do good work – and those are typically the people that rise through the ranks, becoming team leads, project leads, etc.

A Growing Reputation

Becoming a leader in a team or on a project is more than just doing good work, even if that is a large part of it. There’s a very human side of software development that is important to understand, as well. But the good work that you do is often one of the first things that others look for, when deciding who they want to follow or pay attention to.

For me, my work and my reputation are tied together. I’m not known for social graces, being easy to work with, or having the best bedside manner when helping others. But I am known for quality work, for pushing others to do better, and for clearing a path on which others can travel. Sometimes my technical ability makes up for my lack of empathy. Other times, though, I have to work hard to make sure I’m not talking down to people or saying things that won’t be understood correctly, outside of my head.

It’s not easy, but every time I do something right – with technology, or with other people – my reputation grows a little. I gain new followers, I get another person interested in what I’m doing, I make a new friend maybe – at least, when I do something nice for others.

To Build-A-Better Me

In spite of myself, I work hard to build a better, more human side of me every day. It takes work, and I constantly have to fight my “well, actually…” tendencies. But somehow, I manage to not make an ass out of myself all the time. Sometimes, I even do something right when it comes to working with others and managing teams.

And when that happens… when I do show that if I work at it, I can understand other people’s perspectives and provide some amount of insight and even leadership… when I do good work… well, there’s one thing that good work always leads to.

– Derick

Categories: Blogs

Epic tips for Epics

Pivotal Tracker Blog - Tue, 12/09/2014 - 21:47

We’ve had Epics out for quite some time now and I’d like to share some tips I’ve learned from teams at Pivotal Labs and Cloud Foundry.

Prioritize your Epics
Just like stories, you should prioritize your epics. Is epic X more important than epic Y? Reorder your epics to communicate their relative priority to your team and stakeholders. It’s good to review this order every once in a while. Maybe a new API endpoint has become unblocked and it’s more important to work on that epic than the next prioritized epic.

Set an Epic Milestone
An epic milestone is just another epic, but using markdown syntax gives it a way to stand out in the epics list (e.g. ***3.0 release***). This is Tracker hacking, but I think it’s a good way to give visual distinction the same way release markers work for stories.

Besides the visual distinction, you can now prioritize which epics go above or below this confidence line. Tie an epic milestone to a version of the app, a release date, or a financial quarter depending upon how you plan.  You can link releases that fall into this epic milestone by adding the epic label to the release.

Complete your Epics
Not only is it satisfying to ship stories, it’s awesome to put an epic to bed. Yet, if you have forever epics, you and your team will be missing out on that feeling of completion. It may also be a sign that you’ve taken on too much.

Breaking up an epic by version is a natural way to still keep the theme but allow the Product Manager to manage scope. We’ve also used the visual hierarchy to break up an epic. This can have the added benefit of parallel development tracks.

When an epic turns green it’s a good reminder that it’s time to celebrate! If icebox stories remain in the done epic, you’ve got a few choices:

  • Make a new epic (e.g. Profile V2)
  • Keep them in the icebox and remove the epic label
  • Delete them

Prior to a retrospective, take the time to review stories in the epic to see what went well and what could be improved.

We hope these tips help you get more from your epics. To find out more, please see our FAQ.


Categories: Companies

New Blog Site!!!

sprint.ly - scrum software - Tue, 12/09/2014 - 20:45

In case our redirect isn’t cooperating, please visit us at our new and much prettier blog at https://sprint.ly/blog/. All Sprintly product updates and happenings will be posted to that new site. Cheers!

Categories: Companies

Agile Layoffs – When Business is Bad, What Do You Do?

Learn more about our Scrum and Agile training sessions on WorldMindware.com

Agile methods and the culture behind them focus on teamwork, safe environments, motivation, technical excellence and lots of other things that are easy when business is good.  But when business is bad, and you simply can’t afford to keep everyone around, what do you do?

… UPDATE …

Interesting: this tiny post has generated a lot of traffic… but no responses.  Please feel free to offer suggestions or ideas or questions in the comments.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information.
Categories: Blogs

R: Cleaning up and plotting Google Trends data

Mark Needham - Tue, 12/09/2014 - 20:14

I recently came across an excellent article written by Stian Haklev in which he describes things he wishes he’d been told before starting out with R. One of them is to do all data clean-up in code, which I thought I’d give a try.

My goal is to leave the raw data completely unchanged, and do all the transformation in code, which can be rerun at any time.

While I’m writing the scripts, I’m often jumping around, selectively executing individual lines or code blocks, running commands to inspect the data in the REPL (read-evaluate-print-loop, where each command is executed as soon as you type enter, in the picture above it’s the pane to the right), etc.

But I try to make sure that when I finish up, the script is runnable by itself.

I thought the Google Trends data set would be an interesting one to play around with as it gives you a CSV containing several different bits of data of which I’m only interested in ‘interest over time’.

It’s not very easy to automate the download of the CSV file so I did that bit manually and automated everything from there onwards.

The first step was to read the CSV file and explore some of the rows to see what it contained:

> library(dplyr)
 
> googleTrends = read.csv("/Users/markneedham/Downloads/report.csv", row.names=NULL)
 
> googleTrends %>% head()
##                   row.names Web.Search.interest..neo4j
## 1 Worldwide; 2004 - present                           
## 2        Interest over time                           
## 3                      Week                      neo4j
## 4   2004-01-04 - 2004-01-10                          0
## 5   2004-01-11 - 2004-01-17                          0
## 6   2004-01-18 - 2004-01-24                          0
 
> googleTrends %>% sample_n(10)
##                   row.names Web.Search.interest..neo4j
## 109 2006-01-08 - 2006-01-14                          0
## 113 2006-02-05 - 2006-02-11                          0
## 267 2009-01-18 - 2009-01-24                          0
## 199 2007-09-30 - 2007-10-06                          0
## 522 2013-12-08 - 2013-12-14                         88
## 265 2009-01-04 - 2009-01-10                          0
## 285 2009-05-24 - 2009-05-30                          0
## 318 2010-01-10 - 2010-01-16                          0
## 495 2013-06-02 - 2013-06-08                         79
## 28  2004-06-20 - 2004-06-26                          0
 
> googleTrends %>% tail()
##                row.names Web.Search.interest..neo4j
## 658        neo4j example                   Breakout
## 659 neo4j graph database                   Breakout
## 660           neo4j java                   Breakout
## 661           neo4j node                   Breakout
## 662           neo4j rest                   Breakout
## 663       neo4j tutorial                   Breakout

We only want to keep the rows which contain (week, interest) pairs so the first thing we’ll do is rename the columns:

names(googleTrends) = c("week", "score")

Now we want to strip out the rows which don’t contain (week, interest) pairs. The easiest way to do this is to look for rows which don’t contain date values in the ‘week’ column.

First we need to split the start and end dates in that column by using the strsplit function.

I found it much easier to apply the function to each row individually rather than passing in a list of values so I created a dummy column with a row number in to allow me to do that (a trick Antonios showed me):

> googleTrends %>% 
    mutate(ind = row_number()) %>% 
    group_by(ind) %>%
    mutate(dates = strsplit(week, " - "),
           start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(),
           end =   dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>%
    head()
## Source: local data frame [6 x 6]
## Groups: ind
## 
##                        week score ind    dates      start        end
## 1 Worldwide; 2004 - present     1   1 <chr[2]>         NA         NA
## 2        Interest over time     1   2 <chr[1]>         NA         NA
## 3                      Week    90   3 <chr[1]>         NA         NA
## 4   2004-01-04 - 2004-01-10     3   4 <chr[2]> 2004-01-04 2004-01-10
## 5   2004-01-11 - 2004-01-17     3   5 <chr[2]> 2004-01-11 2004-01-17
## 6   2004-01-18 - 2004-01-24     3   6 <chr[2]> 2004-01-18 2004-01-24

Now we need to get rid of the rows which have an NA value for ‘start’ or ‘end’:

> googleTrends %>% 
    mutate(ind = row_number()) %>% 
    group_by(ind) %>%
    mutate(dates = strsplit(week, " - "),
           start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(),
           end =   dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>%
    filter(!is.na(start) | !is.na(end)) %>% 
    head()
## Source: local data frame [6 x 6]
## Groups: ind
## 
##                      week score ind    dates      start        end
## 1 2004-01-04 - 2004-01-10     3   4 <chr[2]> 2004-01-04 2004-01-10
## 2 2004-01-11 - 2004-01-17     3   5 <chr[2]> 2004-01-11 2004-01-17
## 3 2004-01-18 - 2004-01-24     3   6 <chr[2]> 2004-01-18 2004-01-24
## 4 2004-01-25 - 2004-01-31     3   7 <chr[2]> 2004-01-25 2004-01-31
## 5 2004-02-01 - 2004-02-07     3   8 <chr[2]> 2004-02-01 2004-02-07
## 6 2004-02-08 - 2004-02-14     3   9 <chr[2]> 2004-02-08 2004-02-14

Next we’ll get rid of ‘week’, ‘ind’ and ‘dates’ as we aren’t going to need those anymore:

> cleanGoogleTrends = googleTrends %>% 
    mutate(ind = row_number()) %>% 
    group_by(ind) %>%
    mutate(dates = strsplit(week, " - "),
           start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(),
           end =   dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>%
    filter(!is.na(start) | !is.na(end)) %>%
    ungroup() %>%
    select(-c(ind, dates, week))
 
> cleanGoogleTrends %>% head()
## Source: local data frame [6 x 3]
## 
##   score      start        end
## 1     3 2004-01-04 2004-01-10
## 2     3 2004-01-11 2004-01-17
## 3     3 2004-01-18 2004-01-24
## 4     3 2004-01-25 2004-01-31
## 5     3 2004-02-01 2004-02-07
## 6     3 2004-02-08 2004-02-14
 
> cleanGoogleTrends %>% sample_n(10)
## Source: local data frame [10 x 3]
## 
##    score      start        end
## 1      8 2010-09-26 2010-10-02
## 2     73 2013-11-17 2013-11-23
## 3     52 2012-07-01 2012-07-07
## 4      3 2005-06-19 2005-06-25
## 5      3 2004-12-12 2004-12-18
## 6      3 2009-09-06 2009-09-12
## 7     71 2014-09-14 2014-09-20
## 8      3 2004-12-26 2005-01-01
## 9     62 2013-03-03 2013-03-09
## 10     3 2006-03-19 2006-03-25
 
> cleanGoogleTrends %>% tail()
## Source: local data frame [6 x 3]
## 
##   score      start        end
## 1    80 2014-10-19 2014-10-25
## 2    80 2014-10-26 2014-11-01
## 3    84 2014-11-02 2014-11-08
## 4    81 2014-11-09 2014-11-15
## 5    83 2014-11-16 2014-11-22
## 6     2 2014-11-23 2014-11-29

Ok now we’re ready to plot. This was my first attempt:

> library(ggplot2)
> ggplot(aes(x = start, y = score), data = cleanGoogleTrends) + 
    geom_line(size = 0.5)
## geom_path: Each group consist of only one observation. Do you need to adjust the group aesthetic?

As you can see, not too successful! The first mistake I’ve made is not telling ggplot that the ‘start’ column is a date and so it can use that ordering when plotting:

> cleanGoogleTrends = cleanGoogleTrends %>% mutate(start =  as.Date(start))
> ggplot(aes(x = start, y = score), data = cleanGoogleTrends) + 
    geom_line(size = 0.5)


My next mistake is that ‘score’ is not being treated as a continuous variable and so we’re ending up with this very strange looking chart. We can see that if we call the class function:

> class(cleanGoogleTrends$score)
## [1] "factor"

Let’s fix that and plot again:

> cleanGoogleTrends = cleanGoogleTrends %>% mutate(score = as.numeric(as.character(score)))
> ggplot(aes(x = start, y = score), data = cleanGoogleTrends) + 
    geom_line(size = 0.5)


That’s much better but there is quite a bit of noise in the week to week scores which we can flatten a bit by plotting a rolling mean of the last 4 weeks instead:

> library(zoo)
> cleanGoogleTrends = cleanGoogleTrends %>% 
    mutate(rolling = rollmean(score, 4, fill = NA, align=c("right")),
           start =  as.Date(start))
 
> ggplot(aes(x = start, y = rolling), data = cleanGoogleTrends) + 
    geom_line(size = 0.5)


Here’s the full code if you want to reproduce:

library(dplyr)
library(zoo)
library(ggplot2)
 
googleTrends = read.csv("/Users/markneedham/Downloads/report.csv", row.names=NULL)
names(googleTrends) = c("week", "score")
 
cleanGoogleTrends = googleTrends %>% 
  mutate(ind = row_number()) %>% 
  group_by(ind) %>%
  mutate(dates = strsplit(week, " - "),
         start = dates[[1]][1] %>% strptime("%Y-%m-%d") %>% as.character(),
         end =   dates[[1]][2] %>% strptime("%Y-%m-%d") %>% as.character()) %>%
  filter(!is.na(start) | !is.na(end)) %>%
  ungroup() %>%
  select(-c(ind, dates, week)) %>%
  mutate(start =  as.Date(start),
         score = as.numeric(as.character(score)),
         rolling = rollmean(score, 4, fill = NA, align=c("right")))
 
ggplot(aes(x = start, y = rolling), data = cleanGoogleTrends) + 
  geom_line(size = 0.5)

My next step is to plot the Google Trends scores against my meetup data set to see if there’s any interesting correlations going on.

As an aside I made use of knitr while putting together this post – it works really well for checking that you’ve included all the steps and that it actually works!

Categories: Blogs

Agile Teams Should Sprint, But in the Same Direction as Enterprise PPM Strategy

Agile Management Blog - VersionOne - Tue, 12/09/2014 - 18:35


Guest post by James Chan, director, technical presales at CA Technologies


Catch James Chan’s talk on AgileLIVE Wednesday, December 10, 2014: “Portfolio Strategy + Agile Execution: Coordinating, Not Competing.” Details at http://pm.versionone.com/AgileLIVE-2014-CA-PPM.

I think we can all agree that agile techniques have made their way into large enterprises. As agile adoption crosses the threshold into large enterprises, agile teams sometimes begin to struggle with determining the items of highest business value when delivering syndicated enterprise solutions.

Depending on which client the team interacts with, the definition of priority and business value can easily fluctuate. In large enterprises, product owners and agile teams need some way to connect to the bigger picture of enterprise strategy and ensure that enterprise goals are met, while being fiscally responsible to deliver on items that are being funded.

To add an additional level of complexity, enterprises may have pockets of agile work, as well as pockets of traditional work. Enterprises are rarely homogeneous. In many cases, enterprises run under a hybrid model, where the development and QA teams leverage agile techniques, but other teams such as the PMO, product management, sysops, enterprise architecture, user experience, and devops may use other techniques.

Because enterprises live in this space, there needs to be a way for those in charge of strategy to perform their market-sensing activities to develop, define, select, and fund an enterprise’s strategy. This strategy then needs to be communicated to those responsible for product delivery to align them with the strategy and identify problems they can solve, as well as opportunities they want to exploit. Once these items of highest business value are defined and prioritized, that vision can then be communicated to the execution teams to realize the vision in a manner that is most effective for them and the enterprise.

To ensure that everyone in the enterprise is rowing in the same direction, and to fully realize the benefits of agility, enterprises need to take a look at the bigger picture. Many are looking at agile execution techniques as part of a larger enterprise ecosystem through project and portfolio management solutions. Enterprise PPM (project and portfolio management) helps with alignment, enabling enterprises to couple speed with thoughtful planning. Project and portfolio management solutions help portfolio managers define the enterprise’s trajectory needs and balance initiatives that support the strategy with financial capital and human skills. They can also collaborate closely with product management to define the problems they should solve, as well as the opportunities they want to exploit in support of the enterprise’s vision.

Finally, that collaborative roadmap needs to be the guiding force for agile execution teams to ensure that as they are sprinting, they are sprinting in the same direction. If teams just focus on the smallest unit of work, which is a user story, they can quickly lose sight of value and get lost in the trees. So in order to ensure that teams drive toward overall delivered business value, enterprise strategy and trajectory need to be taken into account.

So how do you do this? A good place to start is by checking out the CA PPM (Project and Portfolio Management) integrated solution with VersionOne. This unified enterprise PPM solution gives you a clear picture of all projects from the top down, from high-level feature planning to work item assignment. By combining strategic, financial portfolio management capabilities with ALM software, you can get your agile teams sprinting in the same direction as your enterprise strategy – so business value gets delivered faster and the transition to enterprise agile is simpler.

James will present on this topic during VersionOne’s AgileLIVE™ webinar on Wednesday, December 10, 2014 from Noon to 1 p.m. EST. Details and registration:

AgileLIVE: CA PPM and VersionOne webinar:

“Portfolio Strategy + Agile Execution:  Coordinating, Not Competing”
December 10, 12-1 p.m. EST

Categories: Companies

Agile and Scrum Trello Extensions

Scrum Expert - Tue, 12/09/2014 - 18:27
Trello is a free on-line project management tool that provides a flexible and visual way to organize anything. This approach is naturally close to the visual boards used in the Scrum or Kanban approaches. As the tool has an open architecture, some extensions have been developed for a better implementation of Agile project management in Trello. The visual representation and the card system used by Trello already make it possible to use it for Scrum projects that need a virtual board to display their user story backlog and their sprint tasks. ...
Categories: Communities

Help! My Company is Stuck… (Part 1)

Illustrated Agile - Len Lagestee - Mon, 12/08/2014 - 23:00

My coaching engagements often bring me to organizations that are well underway with an Agile transformation, or that have attempted one in the past with little impact. After a short time of observation, it becomes apparent that some of these organizations are stuck between the old and the new… and the strong pull of the old provides an easy opportunity for stubborn habits and behaviors to fortify their position.

Perhaps you have been experiencing this. Everyone says “We’re Agile!” but something doesn’t feel very agile. Ceremonies are followed and time boxes are in place, but it still takes months before anything is released to customers. A complex and fragile architecture keeps maintenance high and quality low. While continuing to say they want to be “agile,” leaders continue to exhibit controlling leadership styles, and their behavior limits people from full engagement and restricts organizational health and innovation.


Your company is stuck… and this feeling of being stuck between two worlds can feel worse than what is being left behind. This is often when you hear the voice of the “I liked it better before” crowd gaining volume.

If you find your company with stalled transformational progress, here are a few situations to address, along with a few simple suggestions:

Hanging on to old artifacts. Status reports and resource capacity planning are a few that come to mind, and I’m sure you can come up with a few more. Long, unneeded requirements documents or test strategy documents can also be prime suspects.

Ask why. Dig into why the old artifacts are still being requested and collectively discover ways to remove and purge them. Guide people on a journey to discover new, lightweight ways to accommodate the needs the original artifacts served. As you know, it’s much easier to add processes than to remove them, so fight to keep things simple and valuable.

Lingering defensive processes. Similarly, there is often a build-up of processes put in place over time to provide blame-free evidence when something goes wrong. This is one of the reasons projects become lengthy, and it significantly reduces the efficiency gains from a transition to Agile. Numerous sign-offs and exhaustive governance approvals are often the culprits here.

Change the language. During a recent coaching assignment, a new agile team was finishing up their first sprint planning when a tester spoke up and said “My manager requires me to sign-off on our testing plan.” One of the other team members responded, “How about we all sign it? Quality is all of our responsibility and if something goes wrong we’ll all take responsibility.” This subtle shift from “me”, “my”, and “I” to “we”, “us”, and “our” begins to whittle away at the need for unneeded defensive steps. It was a powerful moment for this team.

Multi-month (or year) “projects.” While there may be a reason for the occasional lengthy project or release, many organizations (especially large ones) have made this the de facto approach to building and releasing products. If the number of sprints since the last release is in the double-digits, you probably have a problem.

Stop talking and start building. A recent tweet stated, “one prototype is worth a thousand decks.” So true. This excellent presentation from Leisa Reichelt is worth viewing for an example of how the UK government has used Agile techniques to deliver new features quickly (and also learn about their customers). You can also leverage techniques such as story mapping to find smaller functional pieces to deliver.

Poor craftsmanship. In my opinion, nothing keeps a team or organization from experiencing optimal agility more than poor craftsmanship. When valuable time is spent fixing bugs or patching a fragile architecture, precious time and effort is focused inward, on our own needs, instead of outward, on the needs of our customers.

Use elements of XP (eXtreme Programming). Far too often, this is where I see organizations fail to invest the proper amount of time and money. Agile practices are implemented but the technical aptitude continues to be lacking, keeping the team and organization out of flow. Begin experimenting with and investing in techniques to drive technical excellence. Enterprise architecture can also help by intertwining your architectural vision and roadmap with the product vision and roadmap. This helps by implementing technical capabilities in advance of the needs of the product (instead of reacting with short-term solutions). 

Being “stuck” in a transformation can be frustrating but it is possible to build momentum again. In part 2, we’ll cover a few of the more challenging scenarios causing stagnation and diminished agility and what you can do about it.

References: eXtreme Programming (http://www.extremeprogramming.org/rules.html)



Categories: Blogs

After the Webinar “Test Management Using TeamForge” – More Insights from the Experts

Danube - Mon, 12/08/2014 - 22:33

We recently conducted a webinar called Test Management Using TeamForge. The audience engagement was fantastic and we received many insightful questions from our listeners that we were not able to get to during the live webinar. Here are additional questions from the audience and answers from our test management expert, CollabNet Director of Engineering Venkat Janardhanam.

Doug: I would like to know how Selenium communicates with TestLink to update the status of the test case. Is there an available API to contact TestLink?

Venkat: The automation works through the integration of three different tools: 1. Jenkins, 2. Selenium, and 3. the TestLink plug-in for Jenkins. The Jenkins TestLink plug-in communicates with TestLink through an open source Java API. After the build is complete, the plug-in will execute all test cases that are marked Automated by invoking the test file that is specified in the test case. A user-defined field in the test case holds the test file that Jenkins will pull and execute. There should be a one-to-one relationship between a test case in TestLink and an automated test file. The test file is an automation script that represents a test case; the script can be a plain Java class, a JUnit class, or a Selenium Java class. Once the test files are executed, the pass or fail result updates the test case with the right status through the plug-in. The test file in our reference automation is a Java Selenium file generated through Selenium RC or through manual Java coding of the test case using the Selenium API. You can use other CI tools and test files from different automation tools to make the whole automation work.

 

Doug: Do you have an automation reference with JMeter?

Venkat: Not at the moment. We are in the process of investigating JMeter for our reference automation.

 

Doug: Please show defects generated automatically with Selenium.

Venkat: This feature will be available in our interim release; currently it is disabled due to feedback received from pilot customers. The reason is that there are many scenarios in which the automation will generate duplicate defects. For example, if a CI tool is used for the build and the tests run daily and automatically, then defects will be created. If there are failures and the team is not able to fix the defects before the tests run again, then duplicate defects will be created. So we are looking at all the scenarios before turning on this feature in the next release, and also trying to make the defect creation more intelligent so that it updates an existing defect instead of creating new ones.

 

Doug: How are Selenium defects added?

Venkat: The Selenium defects are added into TestLink through the Jenkins TestLink plug-in. In your Selenium Java test case, there is a one-line code snippet that needs to be added for a success scenario and a failure scenario in the Selenium Java script. The assert snippet will be recognized by the plug-in, which updates the corresponding test case in TestLink with the right status.
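For illustration, such a test file might look like the sketch below – a generic JUnit-plus-Selenium example under my own assumptions (the URL, the expected title, and the mapping of this class to a TestLink test case are all hypothetical); the exact snippet and mapping depend on how the Jenkins TestLink plug-in is configured:

import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertEquals;

// Hypothetical test file, mapped one-to-one to a TestLink test case.
public class LoginPageTest {
    @Test
    public void titleShouldMatch() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/login"); // hypothetical URL
            // The assertion decides pass or fail; the Jenkins TestLink plug-in
            // reports that result back to the matching TestLink test case.
            assertEquals("Login", driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}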

 

Doug: My test results are stored in XML. Is there a way to import these test results into TestLink? (I am using C# to create the XML, one of the participants added.)

Venkat: There is an import feature in TestLink where you can import XML results generated from other automation tools into TestLink.

 

Doug: We currently use both TeamForge and TestLink. Is there a way we can glue users?

Venkat: The integration has to be installed to glue existing TeamForge and TestLink systems together so that they work as one. After that, there are migration scripts to migrate data between TeamForge and TestLink. The users who are in TestLink will be migrated to TeamForge, and users in TeamForge will be migrated to TestLink. The script has to be run only once, after installing the integration; after that, any projects, users, permissions, and roles are automatically synchronized between the two systems. If the same user name exists in both TestLink and TeamForge, then there is a user conflict that the migration script will report, and this conflicting user will not be migrated. The scenario below will help us understand how to resolve the conflict.

 

Venkat: For example, John is a user who exists in both TeamForge and TestLink. TeamForge already had Project A and TestLink had Project B, in two separate systems. The one-time migration script moves Project B from TestLink to TeamForge. The user John has access to Project A in TeamForge only. To fix this issue, the TeamForge admin has to check whether the conflicting user John and the TeamForge user John are the same person. If they are, then John already has access to Project A in TeamForge, and he only needs to be given permission to Project B so that the conflict gets resolved. Now John will have access to both Project A and Project B.

 

Doug: When a test result is set in TestLink as Failed, is it possible to automatically fill some of the fields of the created defect? Especially the Assign To field.

Venkat: Currently, we are pushing Time of Execution, Test Plan Name, Build and User Comments to TeamForge. The feedback received so far is that automating AssignedTo is difficult, because there are cases where the defect needs specialized skills that only one person on the team has, and it is hard for the system to identify the right AssignedTo. Even if the system could identify the AssignedTo, the person to whom the defect is assigned may be too busy, so we would then have to work on workload distribution to see who is free before assigning.

I am open to receiving more feedback from users on what other fields would make sense to add to the auto-generated defect, so that we can plan to incorporate the request in the future roadmap.

 

Doug: Is it possible to configure the TestLink project to generate a specific defect tracker type? For instance, we have functional defects and performance defects.

Venkat: During the initial configuration of the TestLink integration with TeamForge, the project admin is expected to provide the tracker ids of all requirement trackers. For example, if you have Epic, Story and Task trackers, then you need to provide the tracker ids for all three, comma-separated (for example tracker1001,tracker1002,tracker1003; these ids are illustrative), during configuration. For the defect tracker, however, the integration accepts only one defect tracker id; you cannot have multiple defect trackers.

 

Doug: Which TestLink version supports synchronization with TeamForge? (1.7.x?)

Venkat: The integration works on TeamForge 7.0, 7.1 and 7.2 with TestLink version 1.9.11.

 

Doug: Can you automatically fetch test results from TestLink to a dashboard webpage? (And from multiple projects in TestLink?)

Venkat: At the moment the integration maps one TeamForge project to one TestLink project. Currently, you cannot report out of multiple TestLink projects into one TeamForge project.

 

Doug: Any input on how to ensure quality test cases (and not only x number of test cases per requirement)? For example, a checkmark to indicate review of test cases? My question was to understand whether a checkmark can be set to indicate that test cases have been reviewed, or whether there is any similar way to ensure good quality test cases.

Venkat: TestLink provides custom fields; multiple custom fields can be created and used as a checklist. The reviewer has to review the test case and work through the checklist item by item. The checklist can be a single text area or multiple check boxes. Once all the items in the checklist are complete, the user can mark the test case as approved by ticking another custom field check box.

 

Doug: Can we shut off auto defect generation?

Venkat: Sure, you can turn auto defect generation on or off at the project level.

 

Doug: I don’t want a test suite created for each requirement.

Venkat: You can set the TestSuite field to ‘NONE’ instead of ‘Create’ while creating the requirement, and this will not create a Test Suite in TestLink. This is controlled at the level of each individual requirement artifact.

 

Read more on Venkat’s blog about test management in TeamForge.

Listen to the complete webinar to understand how test management can be tied into continuous integration using TeamForge, allowing your agile teams to collaborate and get early feedback. Also, see how traceability can be easily maintained from requirement to test cases to defects and builds.

 

Follow CollabNet on Twitter and LinkedIn for more insights from our industry experts #AskCollabNet.

The post After the Webinar “Test Management Using TeamForge” – More Insights from the Experts appeared first on blogs.collab.net.

Categories: Companies

Getting More from your Distributed Agile Teams

TV Agile - Mon, 12/08/2014 - 22:18
Agile is now commonly used in most organisations, but most people struggle when the conditions are not in the ideal agile sweet spot of co-location and small teams. In fact, most organisations rarely experience situations that are ideal for agile, due to working internationally, with multiple teams and perhaps 3rd party suppliers. This presentation looks at what […]
Categories: Blogs

R: dplyr – mutate with strptime (incompatible size/wrong result size)

Mark Needham - Mon, 12/08/2014 - 21:02

Having worked out how to translate a string into a date, or NA if it wasn’t in the appropriate format, the next thing I wanted to do was store the result of the transformation in my data frame.

I started off with this:

library(dplyr)  # provides mutate and the %>% pipe
data = data.frame(x = c("2014-01-01", "2014-02-01", "foo"))
> data
           x
1 2014-01-01
2 2014-02-01
3        foo

And when I tried to do the date translation I ran into the following error:

> data %>% mutate(y = strptime(x, "%Y-%m-%d"))
Error: wrong result size (11), expected 3 or 1

As I understand it, this error is telling us that we are trying to put a value into the data frame which represents 11 rows rather than 3 rows or 1 row. The 11 comes from the fact that a POSIXlt is not an atomic value but a list of components (seconds, minutes, hours, day of the month, month, year and so on), and dplyr sees those components rather than a single column.

It turns out that storing POSIXlts in a data frame isn’t such a good idea! In this case we can use the as.character function to create a character vector which can be stored in the data frame:

> data %>% mutate(y = strptime(x, "%Y-%m-%d") %>% as.character())
           x          y
1 2014-01-01 2014-01-01
2 2014-02-01 2014-02-01
3        foo       <NA>

We can then get rid of the NA row by using the is.na function:

> data %>% mutate(y = strptime(x, "%Y-%m-%d") %>% as.character()) %>% filter(!is.na(y))
           x          y
1 2014-01-01 2014-01-01
2 2014-02-01 2014-02-01

And a final tweak so that we have 100% pipelining goodness:

> data %>% 
    mutate(y = x %>% strptime("%Y-%m-%d") %>% as.character()) %>%
    filter(!is.na(y))
           x          y
1 2014-01-01 2014-01-01
2 2014-02-01 2014-02-01
Categories: Blogs

Agile outside of Software

Agile Complexification Inverter - Mon, 12/08/2014 - 19:44

Agile in your schools, an announcement from Agile Learning Centers.




Our Agile Learning Centers are growing and we're preparing to support the launch of several more ALCs this summer! Rumor has it that we may be going international.

Read on to find enrollment information for our two full-time schools in NYC and Charlotte, film screenings, and some really juicy featured blog posts from our growing community of Agile Learning Facilitators.



ALC NYC

Learn about our enrollment process here

Attend a Parent Interest Night to begin exploring enrollment options for the current year and/or the 2015-16 school year.
January 15th, 2015
March 5th, 2015

RSVP to a Parent Interest Night here!


ALC Mosaic

Learn about our enrollment process here

Attend a Parent Interest Night to begin exploring enrollment options for the current year and/or the 2015-16 school year.
February 18th, 2015
March 24th, 2015

RSVP to a Parent Interest Night here!


Film Screenings

The Agile Learning Center in NYC will be hosting a screening of Race to Nowhere on February 12th at 7pm!

Tickets are quite limited and only $10 -- reserve yours here now before we start advertising on our website and social media.

A bunch of the Agile Learning Facilitators were able to catch screenings of Class Dismissed in NYC and Charlotte this past week and loved it! Highly recommended!



Check out some featured blog posts from facilitators across the ALC Network!




The Weekly Sprint (in review):

Ryan highlights some of the popular learning activities at the ALC - WikiTrails, GeoGuessr, Philosophy, Chronology, #NoCheats, and more.





Daily Rituals: The Heartbeat of Intentional Culture Creation: Tomis talks about the afternoon candle ritual at the ALC in NYC and the importance of intentional culture creation.




The Opportunity in Conflict: Nancy shares examples of how our tool, the Community Master Board, is used for community-wide problem solving.




GeoCaching Treasure Hunt: Dan shares a summary of the GeoCaching adventure he set up and facilitated at ALC Mosaic.





Catch Me In Transition: How to Lorax so Kids will Listen: Bear reflects on the importance of tuning in and right-timing for effective ALFing.





Week in Review: Drew shares a detailed writeup of a recent week at ALC Everett, including the beginnings of the ALC egg drop challenge.




Why I'm Cool With Day-Long Dr Who Marathons: Abby shares her reflections on the value of storytelling and intentional engagement with "screens".





Painting, Pasta, Parent Interest Night, and Past=Present: Nina shares some amazing ALC offerings and thoughtful reflections on her journey to open ALC Oahu.




Mosaic Monday: Charlotte gives an update on some happenings at ALC Mosaic, as well as a beautiful write up of her ongoing offering, Ecology Club.




Clinkity, Clink, Clink: Extended Inquiry into Marble Mazes: Lacy dives deep into the marble maze projects from the kids at Roots of Mosaic.





Answers Are Truly No Better Than Questions: Art talks about the importance of asking valuable questions.





ALC Everett - Last Day: Abe shares the highlights of his month-long stay at ALC Everett.

We hope you enjoyed this update from Agile land!

With love and agility,

Agile Learning Centers



Categories: Blogs

Agile Bangladesh Conference, Dhaka, Bangladesh, December 27 2014

Scrum Expert - Mon, 12/08/2014 - 18:30
Agile Bangladesh Conference is a one-day conference that focuses on the use of Agile methodologies in software development projects. It aims to provide a forum for fruitful interactions and discussions among practitioners and researchers. On the agenda of the Agile Bangladesh Conference you can find topics like Agile implementation, Agile requirements (user stories), Agile design, Agile testing, Agile and security, tools and applications. The keynotes will discuss “Agile Story Points” and “How to Integrate Security in Agile Software Development Methods?”. Web site: http://agileatbusiness.com/agilebd2014/ Location for the 2014 conference: Dhaka, Bangladesh
Categories: Communities

Knowledge Sharing


SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP and Kanban.