
Feed aggregator

Making Unconscious Habits in Culture Conscious in Agile Teams

Learn more about transforming people, process and culture with the Real Agility Program

In the past, in our North American culture, power and authority in an organization were held by those who earned the most money and held the titles to go along with that authority. They had the right to decide where they went, when they went, and who they associated with, and they had the power to decide what others did and didn’t do in their work environment.

That was in the past. We are headed toward a more unified and equal culture, based on principles of collaboration and understanding, in which power is more equally distributed. Those who didn’t have access to education now do. Those who were previously barred from environments of wealth and prosperity are now welcomed in. Corporate cultures and organizational models across the board are changing, and it’s good for everyone.

The biggest challenge in any change arises when someone’s fear of being excluded is realized. The issue is no longer about money or time or integrity. The issue is that as work environments change, old (mostly unconscious) patterns of exclusion are changing too. It means janitors associating with doctors and delivery teams eating lunch with those in leadership (imagine that!).

When an organization goes through a transformation, noticing behaviours which were limiting and exclusive and changing them, it is actively contributing to an ever-advancing civilization. It is creating a new and inclusive culture. At times, mistakes will be made. Old ways will sneak back in, and one or more team members may get snubbed or excluded for one reason or another. This happens. It’s normal and is part of the learning process.
But in time, the aim for any agile team is to continually make these old exclusive unconscious habits conscious, so that work environments can continue to embrace a greater diversity of people, not just of cultural backgrounds but from different social and economic backgrounds too. The difference in life experience between someone who has lived in poverty and someone who has lived in wealth is as if they grew up in different worlds, even though we inhabit the same earth. Everything is different: language, behaviours, hopes and dreams. However, just as different races are now joining together in work and in marriage more often, so are people from different socio-economic backgrounds coming together, in work, in community building, and in families. The pain of the growth is a worthwhile investment in a brighter and more unified future, not just for us but for the generation to follow us.

Learn more about our Scrum and Agile training sessions on WorldMindware.com

The post Making Unconscious Habits in Culture Conscious in Agile Teams appeared first on Agile Advice.

Categories: Blogs

AutoMapper 5.0 speed increases

Jimmy Bogard - Fri, 06/24/2016 - 23:43

Just an update on the work we’ve been doing to speed up AutoMapper. I’ve captured times to map some common scenarios (1M mappings). Time is in seconds:

Version     Flattening  Ctor    Complex   Deep
Native      0.0148      0.0060  0.9615    0.2070
5.0         0.2203      0.1791  2.5272    1.4054
4.2.1       4.3989      1.5608  134.39    29.023
3.3.1       4.7785      1.3384  72.812    34.485
2.2.1       5.1175      1.7855  122.0081  35.863
1.1.0.118   6.7143      n/a     29.222    38.852

The complex mappings had the biggest variation, but across the board AutoMapper is *much* faster than previous versions. Sometimes 20x faster, 50x in others. It’s been a ton of work to get here, mainly from the change in having a single configuration step that let us build execution plans that exactly target your configuration. We now build up an expression tree for the mapping plan based on the configuration, instead of evaluating the same rules over and over again.

We *could* get marginally faster than this, but that would require us sacrificing diagnostic information or not handling nulls etc. Still, not too shabby, and in the same ballpark as the other mappers (faster than some, marginally slower than others) out there. With this release, I think we can officially stop labeling AutoMapper as “slow” ;)

Look for the 5.0 release to drop with the release of .NET Core next week!


Categories: Blogs

Are Project Boards Really Kanban? How Old-School Project Management Prevents Continuous Improvement

Project boards limit your visibility into your team's capacity and performance. Learn how team Kanban boards enable continuous improvement.

The post Are Project Boards Really Kanban? How Old-School Project Management Prevents Continuous Improvement appeared first on Blog | LeanKit.

Categories: Companies

Workshops in Kladno on Getting More out of Agile and Lean

Ben Linders - Fri, 06/24/2016 - 20:53

I will give two workshops in Kladno (near Prague) on Getting More out of Agile and Lean. In these workshops you’ll learn practices to develop the right products for your business and customers, reduce your delivery time, increase the quality of your software, and create happy high-performing teams.

Register now to attend a workshop on November 2 or December 2 in Kladno, Czech Republic.



These workshops are done in collaboration with Aguarra, the competence center for agile techniques and technology innovations in the Czech Republic, Slovakia, and Hungary. Aguarra serves as a platform for experts who work on research and implementation of agile techniques.

The workshop on Getting More out of Agile and Lean can be combined with the workshop on Valuable Agile Retrospectives that I’m giving on November 1 or December 1. These two days of workshops on Retrospectives and Agile and Lean practices help you boost the performance of your teams, enabling them to deliver more value to their customers and stakeholders.

Regular price is 480 EUR / 576 EUR. Price when ordering until September 1, 2016: 440 EUR / 528 EUR.

Categories: Blogs

Unix: Find files greater than date

Mark Needham - Fri, 06/24/2016 - 18:56

For the latter part of the week I’ve been running some tests against Neo4j which generate a bunch of log files and I wanted to filter those files based on the time they were created to do some further analysis.

This is an example of what the directory listing looks like:

$ ls -alh foo/database-agent-*
-rw-r--r--  1 markneedham  wheel   2.5K 23 Jun 14:00 foo/database-agent-mac17f73-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel   8.6K 23 Jun 11:49 foo/database-agent-mac19b6b-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   8.6K 23 Jun 11:49 foo/database-agent-mac1f427-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   2.5K 23 Jun 14:00 foo/database-agent-mac29389-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel    11K 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
-rw-r--r--  1 markneedham  wheel   4.8K 23 Jun 14:00 foo/database-agent-mac35563-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel   3.8K 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
-rw-r--r--  1 markneedham  wheel   4.8K 23 Jun 14:00 foo/database-agent-mac40798-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel    12K 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
-rw-r--r--  1 markneedham  wheel   2.5K 23 Jun 14:00 foo/database-agent-mac5f094-1-logs-archive-201606231300189.tar.gz
-rw-r--r--  1 markneedham  wheel   5.8K 23 Jun 14:00 foo/database-agent-mac636b8-1-logs-archive-201606231300176.tar.gz
-rw-r--r--  1 markneedham  wheel   9.5K 23 Jun 11:49 foo/database-agent-mac7e165-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   2.7K 23 Jun 11:49 foo/database-agent-macab7f1-1-logs-archive-201606231049507.tar.gz
-rw-r--r--  1 markneedham  wheel   2.8K 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
-rw-r--r--  1 markneedham  wheel   3.1K 23 Jun 11:49 foo/database-agent-macbcbe8-1-logs-archive-201606231049520.tar.gz
-rw-r--r--  1 markneedham  wheel    13K 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
-rw-r--r--  1 markneedham  wheel   3.8K 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
-rw-r--r--  1 markneedham  wheel   3.9K 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz
-rw-r--r--  1 markneedham  wheel   3.1K 23 Jun 11:49 foo/database-agent-mace075e-1-logs-archive-201606231049520.tar.gz
-rw-r--r--  1 markneedham  wheel   3.1K 23 Jun 11:49 foo/database-agent-mace8859-1-logs-archive-201606231049507.tar.gz

I wanted to split the files into two groups: those last modified before 12pm on 23rd June, and those modified after.

I discovered that this type of filtering is actually quite easy to do with the ‘find’ command. So if I want to get the files after 12pm I could write the following:

$ find foo -name "database-agent*" -newermt "Jun 23, 2016 12:00" -ls
121939705        8 -rw-r--r--    1 markneedham      wheel                2524 23 Jun 14:00 foo/database-agent-mac17f73-1-logs-archive-201606231300176.tar.gz
121939704        8 -rw-r--r--    1 markneedham      wheel                2511 23 Jun 14:00 foo/database-agent-mac29389-1-logs-archive-201606231300176.tar.gz
121934591       24 -rw-r--r--    1 markneedham      wheel               11294 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
121939707       16 -rw-r--r--    1 markneedham      wheel                4878 23 Jun 14:00 foo/database-agent-mac35563-1-logs-archive-201606231300176.tar.gz
121934612        8 -rw-r--r--    1 markneedham      wheel                3896 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
121939708       16 -rw-r--r--    1 markneedham      wheel                4887 23 Jun 14:00 foo/database-agent-mac40798-1-logs-archive-201606231300176.tar.gz
121934589       24 -rw-r--r--    1 markneedham      wheel               12204 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
121939720        8 -rw-r--r--    1 markneedham      wheel                2510 23 Jun 14:00 foo/database-agent-mac5f094-1-logs-archive-201606231300189.tar.gz
121939706       16 -rw-r--r--    1 markneedham      wheel                5912 23 Jun 14:00 foo/database-agent-mac636b8-1-logs-archive-201606231300176.tar.gz
121934588        8 -rw-r--r--    1 markneedham      wheel                2895 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
121934590       32 -rw-r--r--    1 markneedham      wheel               13427 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
121934587        8 -rw-r--r--    1 markneedham      wheel                3882 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
121934611        8 -rw-r--r--    1 markneedham      wheel                3970 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz

And to get the ones before 12pm:

$ find foo -name "database-agent*" -not -newermt "Jun 23, 2016 12:00" -ls
121879391       24 -rw-r--r--    1 markneedham      wheel                8856 23 Jun 11:49 foo/database-agent-mac19b6b-1-logs-archive-201606231049507.tar.gz
121879394       24 -rw-r--r--    1 markneedham      wheel                8772 23 Jun 11:49 foo/database-agent-mac1f427-1-logs-archive-201606231049507.tar.gz
121879390       24 -rw-r--r--    1 markneedham      wheel                9702 23 Jun 11:49 foo/database-agent-mac7e165-1-logs-archive-201606231049507.tar.gz
121879393        8 -rw-r--r--    1 markneedham      wheel                2812 23 Jun 11:49 foo/database-agent-macab7f1-1-logs-archive-201606231049507.tar.gz
121879413        8 -rw-r--r--    1 markneedham      wheel                3144 23 Jun 11:49 foo/database-agent-macbcbe8-1-logs-archive-201606231049520.tar.gz
121879414        8 -rw-r--r--    1 markneedham      wheel                3131 23 Jun 11:49 foo/database-agent-mace075e-1-logs-archive-201606231049520.tar.gz
121879392        8 -rw-r--r--    1 markneedham      wheel                3130 23 Jun 11:49 foo/database-agent-mace8859-1-logs-archive-201606231049507.tar.gz

Or we could even find the ones last modified between 12pm and 2pm:

$ find foo -name "database-agent*" -not -newermt "Jun 23, 2016 14:00" -newermt "Jun 23, 2016 12:00" -ls
121934591       24 -rw-r--r--    1 markneedham      wheel               11294 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
121934612        8 -rw-r--r--    1 markneedham      wheel                3896 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
121934589       24 -rw-r--r--    1 markneedham      wheel               12204 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
121934588        8 -rw-r--r--    1 markneedham      wheel                2895 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
121934590       32 -rw-r--r--    1 markneedham      wheel               13427 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
121934587        8 -rw-r--r--    1 markneedham      wheel                3882 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
121934611        8 -rw-r--r--    1 markneedham      wheel                3970 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz

Or we can filter by relative time e.g. to find the files last modified in the last 1 day, 5 hours:

$ find foo -name "database-agent*" -mtime -1d5h -ls
121939705        8 -rw-r--r--    1 markneedham      wheel                2524 23 Jun 14:00 foo/database-agent-mac17f73-1-logs-archive-201606231300176.tar.gz
121939704        8 -rw-r--r--    1 markneedham      wheel                2511 23 Jun 14:00 foo/database-agent-mac29389-1-logs-archive-201606231300176.tar.gz
121934591       24 -rw-r--r--    1 markneedham      wheel               11294 23 Jun 13:44 foo/database-agent-mac3533f-1-logs-archive-201606231244152.tar.gz
121939707       16 -rw-r--r--    1 markneedham      wheel                4878 23 Jun 14:00 foo/database-agent-mac35563-1-logs-archive-201606231300176.tar.gz
121934612        8 -rw-r--r--    1 markneedham      wheel                3896 23 Jun 13:44 foo/database-agent-mac35f7e-1-logs-archive-201606231244165.tar.gz
121939708       16 -rw-r--r--    1 markneedham      wheel                4887 23 Jun 14:00 foo/database-agent-mac40798-1-logs-archive-201606231300176.tar.gz
121934589       24 -rw-r--r--    1 markneedham      wheel               12204 23 Jun 13:44 foo/database-agent-mac490bf-1-logs-archive-201606231244151.tar.gz
121939720        8 -rw-r--r--    1 markneedham      wheel                2510 23 Jun 14:00 foo/database-agent-mac5f094-1-logs-archive-201606231300189.tar.gz
121939706       16 -rw-r--r--    1 markneedham      wheel                5912 23 Jun 14:00 foo/database-agent-mac636b8-1-logs-archive-201606231300176.tar.gz
121934588        8 -rw-r--r--    1 markneedham      wheel                2895 23 Jun 13:44 foo/database-agent-macbb8e1-1-logs-archive-201606231244151.tar.gz
121934590       32 -rw-r--r--    1 markneedham      wheel               13427 23 Jun 13:44 foo/database-agent-macc8177-1-logs-archive-201606231244152.tar.gz
121934587        8 -rw-r--r--    1 markneedham      wheel                3882 23 Jun 13:44 foo/database-agent-maccd92c-1-logs-archive-201606231244151.tar.gz
121934611        8 -rw-r--r--    1 markneedham      wheel                3970 23 Jun 13:44 foo/database-agent-macdf24f-1-logs-archive-201606231244165.tar.gz

Or the ones modified more than 1 day, 5 hours ago:

$ find foo -name "database-agent*" -mtime +1d5h -ls
121879391       24 -rw-r--r--    1 markneedham      wheel                8856 23 Jun 11:49 foo/database-agent-mac19b6b-1-logs-archive-201606231049507.tar.gz
121879394       24 -rw-r--r--    1 markneedham      wheel                8772 23 Jun 11:49 foo/database-agent-mac1f427-1-logs-archive-201606231049507.tar.gz
121879390       24 -rw-r--r--    1 markneedham      wheel                9702 23 Jun 11:49 foo/database-agent-mac7e165-1-logs-archive-201606231049507.tar.gz
121879393        8 -rw-r--r--    1 markneedham      wheel                2812 23 Jun 11:49 foo/database-agent-macab7f1-1-logs-archive-201606231049507.tar.gz
121879413        8 -rw-r--r--    1 markneedham      wheel                3144 23 Jun 11:49 foo/database-agent-macbcbe8-1-logs-archive-201606231049520.tar.gz
121879414        8 -rw-r--r--    1 markneedham      wheel                3131 23 Jun 11:49 foo/database-agent-mace075e-1-logs-archive-201606231049520.tar.gz
121879392        8 -rw-r--r--    1 markneedham      wheel                3130 23 Jun 11:49 foo/database-agent-mace8859-1-logs-archive-201606231049507.tar.gz

There are lots of other flags you can pass to find but these ones did exactly what I wanted!

Categories: Blogs

Currying: A Functional Alternative To fn.bind

Derick Bailey - new ThoughtStream - Fri, 06/24/2016 - 13:30

In my quest to learn functional programming with JavaScript, I seem to have been focusing on the idea of currying – taking a function that expects more than one argument, and turning it into a series of functions that only take one argument, executing the original function once all required arguments have been supplied.

In the last few weeks – with the help of everyone in the WatchMeCode community slack – I’ve found a few places where currying seems to be beneficial. One of those places is a replacement for a function’s .bind method.

Curry vs bind

What Does .bind Do?

The .bind method – available on every function in JavaScript – allows you to do two things: specify the context (“this”) for the function, and specify one or more arguments that the function will receive when it is finally executed.
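A minimal sketch of such an example, assuming a plain two-argument add function:

```javascript
// A basic two-argument add function
function add(a, b) {
  return a + b;
}

// Partially apply it with .bind: the first argument (undefined) sets the
// context ("this"), the second presets the first parameter of add
const add1 = add.bind(undefined, 1);

// The result is a new function that only needs one more argument
console.log(add1(2)); // 3
```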

In this example, I have a basic add function on which I call the .bind method. The first parameter – undefined, in this case – sets the value of “this”. The second parameter – 1 – sets the first argument that will be passed to the function when it is finally executed.

The result of the .bind call is a new function. When I call this function, it only needs 1 parameter to execute.

The general term for what just happened is “partial function application”. That is, the function was partially applied with the .bind call to set the context and the first parameter.

The final execution of the function didn’t happen until later, when I invoked the function, passing in one more argument in this case.

This is a common pattern – I’ve used partial function application in a lot of code, over the years. But now, with currying in my tool belt, I see less need for this.

Currying The Add Function

With currying, we can get the same effect as the partial function application from above, but without using the .bind method. The intermediate steps, though, provide much more flexibility than .bind does.

Let’s take the same add function, and manually curry it, as I showed in my video on the basics of currying.
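A sketch of the manually curried version:

```javascript
// Manually curried add: the outer function takes one parameter and
// returns a second function, which takes the other parameter and
// performs the addition
function add(a) {
  return function (b) {
    return a + b;
  };
}

const add1 = add(1);
console.log(add1(2)); // 3
```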

In this example, there are 2 functions. The first function, add, takes a single parameter and returns the second function. The second function also takes a single parameter and then executes the addition, returning the result.

Both the .bind code above and this code show an “add1” method that is the result of the first operation. They both show the resulting function taking a single, second parameter to perform the calculation, as well.

I have effectively produced the result of partial function application, using currying instead of .bind.

So, what’s the real difference? Is currying better than .bind? Why?

A Functional Alternative

For the simple comparison above, there is very little benefit to using currying over .bind.

But there are 2 major improvements that currying offers over .bind.

  1. I don’t have to specify the context (“undefined”, in that example) when currying
  2. Currying can reduce the code by avoiding chained .bind calls

While you can .bind any function – including an already partially applied function – you end up with some rather ugly code with the .bind littered everywhere.
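A sketch of what that chained .bind code can look like, using a hypothetical three-argument add3 function:

```javascript
// Hypothetical three-argument add, partially applied twice with .bind
function add3(a, b, c) {
  return a + b + c;
}

// The context argument (undefined) has to be passed on every .bind call,
// even though it is never used
const add1 = add3.bind(undefined, 1);
const add1And2 = add1.bind(undefined, 2);

console.log(add1And2(3)); // 6
```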

This example shows how you’re required to continuously pass the context parameter to the .bind call, even though it’s never being used.

The currying alternative gives you slightly less code, as well:
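A sketch of the curried alternative; since ramda itself isn’t bundled here, a minimal hand-rolled curry stands in for R.curry:

```javascript
// Minimal stand-in for ramda's R.curry, for illustration only:
// collects arguments until the function's full arity is satisfied
function curry(fn) {
  return function curried(...args) {
    return args.length >= fn.length
      ? fn(...args)
      : (...rest) => curried(...args, ...rest);
  };
}

// Curry the same three-argument add used with .bind above
const add3 = curry((a, b, c) => a + b + c);

// No context parameter needed at any step
const add1 = add3(1);
const add1And2 = add1(2);

console.log(add1And2(3)); // 6
```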

Here, the code is a little more succinct. The use of ramda’s curry method allows you to curry the same function that was previously used.

If you’re wondering about supplying multiple parameters, though, both the .bind and curried version can do that:
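A sketch of both approaches taking multiple parameters, again with a hypothetical add3:

```javascript
// Hypothetical three-argument add
function add3(a, b, c) {
  return a + b + c;
}

// .bind can take several arguments up front (after the context)...
const bound = add3.bind(undefined, 1, 2);
console.log(bound(3)); // 6

// ...while the curried version takes them with extra parentheses
const curried = a => b => c => a + b + c;
console.log(curried(1)(2)(3)); // 6
```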

There’s very little difference in this code, when it comes down to it. Do you want to supply the “undefined” parameter, or add extra parentheses?

All of this leads to the question…

Is Currying Better?

I don’t know if currying is “better” or “worse” or even “more flexible” than partial function application – at least not in these examples.

I think they can largely be interchanged, based on what you’re more comfortable using.

However, currying gives you options for additional functional programming tools and techniques.

From what little I know of functional code, it is common for composition, mapping and other tools to require functions that take only a single argument. And in these examples, currying would likely be the choice to make that happen – though I bet you could make it work with .bind, as well.

For now, at least, I can say that currying does provide a functional alternative to the .bind method. And, frankly, I find it easier to read the curried version of my code, when compared to .bind calls everywhere.

Categories: Blogs

Links for 2016-06-23 [del.icio.us]

Zachariah Young - Fri, 06/24/2016 - 09:00
Categories: Blogs

Article Review: Thinking About the Agile Manifesto


Often, as I’ve been researching agile methods and how to apply them to create real and sustainable change in an organization, I come across references to the Agile Manifesto. I list it here today for those who are new to the field, or who are getting back to the roots after trying a few things with different-than-expected results. It is an instrumental document. The values and principles listed here truly do shape the way agilists think and operate, and to some degree or another the results appear to be better than before this founding document was introduced. So here is my “hats off” to this remarkable document, which plays a pivotal role in cultural transformation.

The four key values are:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

Personally, I find the first one the most meaningful of all. When we value individuals and interactions over processes and tools, we take leaps and bounds toward creating collaborative environments that are continuously improving.


The post Article Review: Thinking About the Agile Manifesto appeared first on Agile Advice.

Categories: Blogs

Just released! SAFe 4.0 Leading SAFe LiveLessons video training

Agile Product Owner - Thu, 06/23/2016 - 17:42

With so many enterprises adopting SAFe over the years, we’ve learned what works, what doesn’t, and what the success stories have in common. One thing we know for certain is that implementations that deliver results may vary somewhat in context and execution, but all share a common attribute: a workforce well trained and educated in SAFe practices, and a desire for continuous learning and improvement.

Experience tells us that face-to-face training is ideal, but realities in the field can sometimes make it difficult for folks to attend a public class. We get that, and that’s why we’ve provided this video tutorial: Leading SAFe® 4.0 Live Lessons: Leading the Lean-Agile Enterprise with the Scaled Agile Framework®.

It bridges the gap for people who may not be able to initially attend the two-day Leading SAFe certification course, but still need to gain the knowledge necessary to start or continue their Lean-Agile transformation by leveraging SAFe. The self-paced LiveLessons video format is ideal for busy professionals, as it allows you to explore one topic at a time and then come back later and learn a different subject.

What you’ll learn

The course is delivered in nine lessons in which I present high-level overviews, specifics where needed, exercises to test viewers on what they’ve learned, and, at the end of the course, clear-cut steps to start the journey of transformation. After watching this video, viewers can expect to have an understanding of the Scaled Agile Framework; Lean thinking and embracing agility; how to apply SAFe principles; how to plan, execute, and implement an Agile Release Train; how to build an Agile Portfolio; how to build really large systems with the Value Stream layer; and how to scale leadership to the next level of enterprise performance.

Fully updated to SAFe 4.0

If you’re familiar with the SAFe 3.0 version of this video, I can tell you (from sitting in front of a video camera for three days) that this is an entirely new video produced specifically for SAFe 4.0, and covers the latest benefits that can be achieved through the new Framework for software and systems-dependent enterprises.

More information and discount promotions can be found at scaledagile.com/leading4. There is also an option for enterprise licensing if you’re dealing with a larger scale training initiative.

We’re committed to providing these resources to the SAFe community, and welcome your feedback on the video, and your experience with this type of training.

Stay SAFe!
–Dean

Categories: Blogs

An Essential Update on Essential SAFe

Agile Product Owner - Thu, 06/23/2016 - 17:34

Earlier this year, we published our first draft of the Essential SAFe® Big Picture via blog post. Since then, we have received lots of comments from the blog, our classroom settings, direct customer and analyst feedback, and more. It’s clear that this simpler, essential view is a real aid to understanding the minimum roles and practices necessary to be successful with a SAFe implementation.

Simple is good. Feedback is good, too. To that end, we have now incorporated the input and present an updated version of the Essential SAFe® Big Picture:

Figure 1. Essential SAFe: the core of the framework, critical to every implementation.

Here are the nine key elements of Essential SAFe, without which an implementation of the framework really isn’t “safe”:

  • SAFe Lean-Agile Principles. Lean-Agile principles provide the basis for every successful transformation and guide decision making as the process evolves and adapts.
  • Lean-Agile Leaders. Successful transformations are based on educating management to become “lean-thinking manager-teachers”. Thereafter, they lead, rather than follow, the transformation.
  • Agile Teams, Agile Release Trains, Value Streams. The Agile Release Train is a key building block of a SAFe enterprise. Trains are organized around Value Streams, and consist of Agile Teams. Teams use Scrum, Kanban and Built-in Quality practices to frequently produce integrated increments of value. DevOps practices close the loop on customer value delivery.
  • Cadence. A standardized PI and iteration cadence is the heartbeat of every ART and Value Stream. Periodic synchronization of all aspects limits variance to a single time interval.
  • Key Program Events. PI Planning, System Demo, and Inspect and Adapt assure that teams plan together, implement and demo together, and routinely improve their processes.
  • IP Iteration. The Innovation and Planning iteration is like extra oxygen in the tank: without it, the train may start gasping under the tyranny of the urgent and a plan that forgives no mistakes and provides no dedicated time for innovation.
  • Critical Roles. Product Management, RTE, and System Arch/Eng provide content and technical authority, and an effective development process. Product Owners and Scrum Masters help the teams meet their objectives. The Customer is part of the Value Stream, and is integrally engaged throughout development.
  • Vision and Backlog. Vision, backlogs and economic prioritization deliver business results by assuring that the teams are building the right thing.
  • Architectural Runway. Architectural runway provides “just enough” technical enablement to keep program velocities high, and avoid excessive redesign.

Of course, we are still open for feedback, so feel free to comment away. In addition, I think this is where we are headed next:

  1. Create a guidance article for Essential SAFe, so it can become a permanent part of the knowledge base
  2. Over time we will make the picture in the article clickable, allowing the viewer to navigate to a specific article from there
  3. Provide an Essential SAFe® poster PDF for download
  4. Incorporate this simpler thinking into some future version of SAFe (yes, @Chris, we really did say that …)

Also, Inbar is presenting Essential SAFe® at Agile Israel this week. We will share his presentation materials soon. I’ll also be scheduling a webinar on the topic, probably in August, where I will discuss not only what is essential in SAFe®, but also how other SAFe® constructs can be adapted to best fit your enterprise context. The link will be available soon, so stay tuned for that.

Please share your thoughts in the comments below. Without your input, there’s no “C” (and therefore, no “A”) in our PDCA cycle. Thank you and be safe, essentially speaking…

-Alex, and the Framework team: Dean, Inbar, Richard

Categories: Blogs

Product Owners and Learning, Part 3

Johanna Rothman - Thu, 06/23/2016 - 16:32

Part 1 was about how the PO needs to see the big picture and develop the ranked backlog. Part 2 was about the learning that arises from small stories. This part is about ranking.

If you specify deliverables in your big picture and small picture roadmaps, you have already done a gross form of ranking. You have already made the big decisions: which feature/parts of features do you want when? You made those decisions based on value to someone.

I see many POs try to use estimation as their only input into ranking stories. How long will something take to complete? If you have a team who can estimate well, that might be helpful. It’s also helpful to see some quick wins if you can. See my most recent series of posts on Estimation for more discussion on ranking by estimation.

Estimation talks about cost. What about value? In agile, we want to work (and deliver) the most valuable work first.

Once you start to think about value, you might even think about value to all your different somebodies. (Jerry Weinberg said, “Quality is value to someone.”)  Now, you can start considering defects, technical debt, and features.

The PO must rank all three possibilities for a team: features, defects, and technical debt. If you are a PO who has feature-itis, you don’t serve the team, the customer, or the product. Difficult as it is, you have to think about all three to be an effective PO.

The features move the product forward on its roadmap. The defects prevent customers from being happy and prevent movement forward on the roadmap. Technical debt prevents easy releasing and makes it harder for the team to deliver. Your customers might not see technical debt, but they will feel its effects in the form of longer release times.

Long ago, I suggested that a specific client consider three backlogs to store the work and then use pair-wise comparison with each item at the top of each queue. (They stored their product backlog, defects, and technical debt in an electronic tool. It was difficult to see all of the possible work.) That way, they could see the work they needed to do (and not forget), and they could look at the value of doing each chunk of work. I’m not suggesting keeping three backlogs is a good idea in all cases. They needed to see—to make visible—all the possible work. Then, they could assess the value of each chunk of work.
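The three-queue idea can be sketched in a few lines of code. This is a hypothetical illustration (the queues, item names, and value scores are invented, and how you score value is up to you), not a description of that client's actual tooling:

```python
# Hypothetical sketch: three visible work queues (features, defects,
# technical debt), with the next item chosen by comparing the head of
# each queue on its value to someone.
from dataclasses import dataclass

@dataclass
class WorkItem:
    title: str
    value: int  # relative value score; invented for this example

features = [WorkItem("Secure login", 8), WorkItem("Export to CSV", 5)]
defects = [WorkItem("Crash on save", 9)]
tech_debt = [WorkItem("Automate release tests", 7)]

def next_item(*queues):
    """Pair-wise comparison of the top of each queue: take the most valuable."""
    heads = [queue[0] for queue in queues if queue]  # skip empty queues
    return max(heads, key=lambda item: item.value)

print(next_item(features, defects, tech_debt).title)  # here, the defect wins
```

The point is not the code but the visibility: only the heads of the queues compete, so the comparison stays small even when the backlogs are long.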

You have many ways to see value. You might look at what causes delays in your organization:

  • Technical debt in the form of test automation debt. (Insufficient test automation makes frictionless releasing impossible. Insufficient unit test automation makes experiments and spikes impossible or quite long.)
  • Experts who are here, there, and everywhere, providing expertise to all teams. You often have to wait for those experts to arrive to your team.
  • Who is waiting for this? Do you have a Very Important Customer waiting for a fix or a feature?

You might see value in features for immediate revenue. I have worked in organizations where, if we released some specific feature, we could gain revenue right away. You might look at waste (one way to consider defects and technical debt).

Especially in programs, I see the need for the PO to say, “I need these three stories from this feature set and two stories from that other feature set.” The more the PO can decompose feature sets into small stories, the more flexibility they have for ranking each story on its own.

Here are questions to ask:

  • What is most valuable for our customers, for us to do now?
  • What is most valuable for our team, for us to do now?
  • What is most valuable for the organization, for us to do now?
  • What is most valuable for my learning, as a PO, to decide what to do next?

You might need to rearrange those questions for your context. The more your PO works by value, the more progress the team will make.

The next post will be about when the PO realizes he/she needs to change stories.

If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.

Categories: Blogs

Agile Hiring & Goal Management in Methods & Tools Summer 2016 issue

DevAgile.com - Thu, 06/23/2016 - 16:11
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Summer 2016 issue that discusses hiring for agility, load testing scripts errors, managing with goals on every level and BDD with the Turnip tool.
Categories: Communities

Help! My first grooming session.

Growing Agile - Thu, 06/23/2016 - 15:55
Today we got an email from a new Scrum Master who has been using our online training courses and books to learn. She has just started at a new company (her first job as a Scrum Master) and identified that the team needs to groom their work before taking it into a sprint, something they […]
Categories: Companies

Are You Agile Enough for DevOps?

Agile Management Blog - VersionOne - Thu, 06/23/2016 - 14:30

Are you agile enough for DevOps?

One of the biggest buzzwords in the industry lately is DevOps.  We all know by now what DevOps is intended to offer, and most organizations are looking for at least some subset of the promise of a continuous delivery flow and the power of “pulling ops into the room”.  But can we really do that if our own job of becoming “more agile” is still incomplete?

Let’s explore for a moment what we even mean by agile.  I recall back in the early days that agile discussions were about how to turn around features quickly by breaking them down into smaller “bite sized” chunks, delivering those, and then determining where to go next based on that feedback.  We invented cool things like user stories, and utilized mechanisms like short iterations and daily standups to move closer to this fast-paced, turn-on-a-dime philosophy toward software development.  We discovered, without a doubt, that this was a better way.  One major portion of these methods was a set of technical practices that would enable the teams to write software in a way that would support such a nimble environment.

So, where are we now?  We have discovered that doing things in these small chunks is hard. It is counterintuitive, too. We want to look at things in big-picture terms. The question I used to hear the most was “how can I manage a portfolio this way?” Now that question has turned into “how can we scale this?” My answer to each question is the same: Don’t. The reason we moved to smaller chunks and stories is that the “big picture” approach doesn’t work. So finding ways to shoehorn agile methods into “scaled” or “Big Up Front Agile” is a waste of time and energy. Rather, let’s learn how to do the real agile methods better, and reap the well-known benefits.

What does this have to do with DevOps?  Hang on, we’re getting there.  One of the things that got set aside along the way was the focus on practices that enable agility.  Test-Driven Development (TDD) was at best assumed to happen magically, and more often set aside as something “we’ll get to once we get all of our release trains and architectural runways laid out”.  In other words, never.  A possible metaphor is saying “I will start exercising once I’m in better shape.”  You have to do the technical practices first, or the rest is just a waste of time.  And this is where DevOps comes into play.

DevOps is most closely associated with the idea of Continuous Delivery.  The idea that we can at any time build and deploy the results of our development efforts gives us a huge amount of flexibility in deciding what software gets delivered and when.  The tools that help us, whether for visualizing and orchestrating the moving parts of build, test, and delivery, or for automating those parts, have reached a level of maturity that allows us to move forward.  The question remains: does your team have that same level of maturity?

If the extent of your team’s agile mechanisms is identifying “portfolio items” that will be broken into stories that will then be scheduled into sprints, do NOT try to go straight to DevOps.  Learn how to truly embrace TDD, both at the Unit Test level and the Acceptance Test level.  Once you feel comfortable with that, you can move to Continuous Integration and then Continuous Delivery and DevOps.
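To make “truly embrace TDD” concrete at the unit-test level, here is a deliberately tiny, hedged Python sketch. The `apply_discount` function is invented for illustration; in real TDD the failing tests would exist before the implementation did:

```python
# A minimal TDD sketch (hypothetical example). In TDD, the tests below are
# written first, fail, and then just enough code is written to make them pass.
import unittest

def apply_discount(price, percent):
    """Implementation written only after the tests existed."""
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

# Run the tests in-process; exit=False keeps this usable from a script or REPL.
unittest.main(argv=["tdd-sketch"], exit=False)
```

A suite of fast tests like this, run on every check-in, is what makes the later steps (Continuous Integration, then Continuous Delivery) trustworthy.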

If you are doing “some TDD” and daily builds, you are getting there, but ramp up the tests first.  You might be inclined to at least get some of the cool DevOps tools into place, but I highly recommend getting your TDD house in order first.  Time and energy are finite, so let’s spend them appropriately.

If you still have a “change control board” of some type that controls when a merge happens, you aren’t ready for DevOps.  Ensuring that your tests are in place and automated will help build the trust necessary to avoid constructs that are explicitly designed to slow the development process down.  Building habits of checking in code and building several times a day will let us catch quickly whatever errors make it through, with a much smaller delta between check-ins to identify where they came from.

So, am I being somewhat absolutist here?  Absolutely.  Rather than taking our agile practices halfway there and then saying “hey I know, let’s do DevOps now”, work on making agile everything it possibly could be.  Once you feel comfortable with your automated tool stack and delivering every iteration, then move to Continuous Delivery and DevOps.

The post Are You Agile Enough for DevOps? appeared first on The Agile Management Blog.

Categories: Companies

Language Plugins Rock SonarQube Life!

Sonar - Thu, 06/23/2016 - 13:43

SonarAnalyzers are fundamental pillars of our ecosystem. The language analyzers play a central role, but the value they bring isn’t always obvious. The aim of this post is to highlight the ins and outs of SonarAnalyzers.

The basics

The goal of the SonarAnalyzers (packaged either as SonarQube plugins or in SonarLint) is to raise issues on problems detected in source code written in a given programming language. The detection of issues relies on the static analysis of source code and the analyzer’s rule implementations. Each programming language requires a specific SonarAnalyzer implementation.

The analyzer


The SonarAnalyzer’s static analysis engine is at the core of source code interpretation. The scope of the analysis engine is quite large, ranging from basic syntax parsing to the advanced determination of the potential states of a piece of code. At minimum, it provides the bare features required for the analysis: basic recognition of the language’s syntax. The better the analyzer is, the more advanced its analysis can be, and the trickier the bugs it can find.

Driven by the goal of performing ever more advanced analyses, the analyzers are continuously improved. New validation ambitions require constant development effort on the SonarAnalyzers. In addition, regular updates are required to keep up with each programming language’s evolution.

The rules



The genesis of a rule starts with the writing of its specification, an important step in itself. The description should be clear and unequivocal in order to be explicit about what issue is being detected. Not only must the description of the rule be clear and accurate, but code snippets must also be supplied to demonstrate both the bad practice and its fix. The specification is available from each issue raised by the rule to help users understand why the issue was raised.

Rules also have tags. The issues raised by a rule inherit the rule’s tags, so that both rules and issues are more searchable in SonarQube.

Once the specification of a rule is complete, next comes the implementation. Based on the capabilities offered by the analyzer, rule implementations detect increasingly tricky patterns of maintainability issues, bugs, and security vulnerabilities.


Continuous Improvement


By default, SonarQube ships with three SonarAnalyzers: Java, PHP, and JavaScript.
The analysis of other languages can be enabled by the installation of additional SonarAnalyzer plugins.

The SonarQube community officially supports 24 language analyzers. Currently, about 3,500 rules are implemented across all SonarAnalyzers.

More than half of SonarSource developers work on SonarAnalyzers. Thanks to the efforts of our SonarAnalyzer developers, there are new SonarAnalyzer versions nearly every week.

A particular focus is currently placed on the Java, JavaScript, C#, and C/C++ plugins. The target is to deliver a new version of each one every month, with each delivery embedding new rules.

In 2015, we delivered a total of 61 new SonarAnalyzer releases, and so far this year, another 30 versions have been released.


What it means for you


You can easily benefit from the regular delivery of SonarAnalyzers. Each release provides analyzer enhancements and new rules. But you don’t need to upgrade SonarQube to upgrade your analysis; as a rule, new releases of each analyzer are compatible with the latest LTS.

When you update a SonarAnalyzer, the static analysis engine is replaced and new rules are made available. But at this step, you’re not yet benefiting from those new rules. During the update of your SonarAnalyzer, the quality profile remains unchanged. The rules executed during the analysis are the same ones you previously configured in your quality profile.
This means that if you want to benefit from the new rules, you must update your quality profile to add them.

Categories: Open Source

Who Owns This House?

Leading Agile - Mike Cottmeyer - Thu, 06/23/2016 - 13:30

That was the question that was posed to the freshly minted staff at the Open House for Friends and Family for Publix Grocery Stores store #1520 yesterday. It was amazing to be invited to witness the internal opening of one of Publix’s newest stores in Cary, NC.

The air was thick with excitement. Executives traveled in from the regional offices in Charlotte and from the corporate headquarters in Tampa, FL. We met the store leadership. We met everyone.

Employees pose for Publix Store #1520’s Grand Opening

When it came time for the ribbon cutting, the newly minted store manager took the stage and posed this question, “Who owns this house?” It was met with a resounding, “We own this house!”

Three times the call came.

Three times it was met with a loud cheer, “We own this house!”

Kevin Murphy, SVP of Retail Operations, summed up Publix’s success as being rooted in two key principles: ownership and pride in your work at every level of the organization. Kevin should know. He started as a front-service clerk at a Publix in 1984. He worked in various positions before being promoted to store manager in 1995. He was promoted to Jacksonville Division district manager in 2003, Atlanta Division regional director in 2009, Miami Division VP in 2014, and his current position was created in 2016.

Ownership and pride in work at all levels. Sounds like the same formula for success in Agile Product Development.

This is also the core of LeadingAgile’s approach to transformation from Basecamp One through Basecamp Five. Without local ownership of decision making at the point of the work being done, we send the message, consciously or subconsciously, that we don’t trust that the work being performed is high-quality and valuable.

If it isn’t valuable then why are you doing it? Non-valuable work is called waste.

If the work isn’t high-quality, then why? Do you have the correct expectations of how long the work should take? Are you measuring quality correctly? (hint: it’s not just about defect injection rate.) Do you reward the wrong things like heroic efforts?

This is the heart of Agile practices. It expects ownership and pride in work. It expects trusting the people doing the work to know what they are doing. If they don’t, it expects you to let them self-organize so that people who know how to do the work well can volunteer to do it, with the expectation that they also mentor those who don’t.

What about your company? Does it espouse a culture of ownership and pride in work? How would you know? Our assessments cut right to the heart of the matter and help organizations determine if leadership is creating and empowering a culture of ownership and pride in work.

Wouldn’t you like to know?

Congratulations to the people of Publix Store #1520. I can’t wait to experience more ownership and pride in work. The world needs more of it.

The post Who Owns This House? appeared first on LeadingAgile.

Categories: Blogs

Our First DFW Scrum Lean Coffee!

DFW Scrum User Group - Thu, 06/23/2016 - 00:49
Last night was a “Bring Your Own Topic” night for our group, and we used the Lean Coffee format to organize our conversations. As a group organizer, it can feel risky to not have a predetermined topic for a meetup. … Continue reading →
Categories: Communities

5 Things the Product Owner Can Learn From Project Management

BigVisible Solutions :: An Agile Company - Wed, 06/22/2016 - 23:00

The role of product owner was introduced by Ken Schwaber and Jeff Sutherland in their creation of Scrum as a lightweight project management method in the mid-1990s. Since then, after literally thousands of Scrum projects, the product owner role has come to be recognized as both the most critical role for the success of the product and the hardest role to do successfully.

In Scrum, project management is divided between the product owner, the ScrumMaster, and the team. These are the three recognized roles in Scrum; there is no one project manager role. The various needs of every project must be understood by one of these three roles, and someone, whether team member, ScrumMaster, or product owner, must take responsibility for management of these needs.

The project manager role has been well defined and is supported by published standards, and it can be used to inform and enhance the role of the product owner. Here are five ways that the product owner can benefit from studying the project manager role as understood by the Project Management Institute.

  1. Project Manager: Responsible for delivering the project on time, on schedule, and on budget. The project manager works with the team to ensure that value is delivered according to the plan in Traditional Project Management (TPM).
     Product Owner: Responsible for the delivery of the product. The focus is on value, quality, time to market, and return on investment. The key here is responsibility.
  2. Project Manager: Manages project scope, including the ongoing change control process to ensure that the scope is contained and that impacts to schedule and budget are identified and made visible to stakeholders.
     Product Owner: With the help of the team, stakeholders, architects, SMEs, and analysts, creates the prioritized product backlog. This is the scope of the project, with change management done every sprint through re-prioritization of the features/user stories in the backlog. The key here is scope management.
  3. Project Manager: Works directly with the team to ensure that it is working on the right items in the right order to accomplish the project goals.
     Product Owner: Works directly with the team to ensure that it understands and is working on the right features in the right prioritized order to deliver value at the end of each sprint. The key here is close team collaboration.
  4. Project Manager: Works closely with the stakeholders to ensure that their interests and concerns are balanced against each other, that they feel heard, and that their requirements are part of the project.
     Product Owner: Engages the stakeholders to ensure that their requirements are a part of the backlog and that they are included in the sprint review when their features are demonstrated. The key here is good communication and stakeholder management.
  5. Project Manager: Manages the budget and monitors the progress of the project against the expense of both personnel and material resources. Earned Value Management may be used to understand if the project is on track or is slipping.
     Product Owner: Responsible for managing the project budget and for tracking expense against return.
The product owner is always looking to deliver early and often, in line with Scrum’s iterative approach to value delivery.

These five areas, which are a part of the project manager’s approach to projects, can help the product owner to perform his or her role more effectively. Some have argued that the product owner role is merely an extension and recasting of the traditional project manager role. That debate will continue, but for now the lessons we can share and learn from each role can enhance our delivery of value to our customers.

 

Like this? You’ll love Agile Eats

Agile Eats is our semi-monthly e-blast chock full of tips and tricks too good not to share. Subscribe now!

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 United States License.

The post 5 Things the Product Owner Can Learn From Project Management appeared first on SolutionsIQ.

Categories: Companies

Product Owners and Learning, Part 1

Johanna Rothman - Wed, 06/22/2016 - 18:05

When I work with clients, they often have a “problem” with product ownership. The product owners want tons of features, don’t want to address technical debt, and can’t quite believe how long features will take.  Oh, and the POs want to change things as soon as they see them.

I don’t see these as problems. To me, this is all about learning. The team learns about a feature as they develop it. The PO learns about the feature once the PO sees it. The team and the PO can learn about the implications of this feature as they proceed. To me, this is a significant value of what agile brings to the organization. (I’ll talk about technical debt a little later.)

One of the problems I see is that the PO sees the big picture. Often, the Very Big Picture. The roadmap here is a 6-quarter roadmap. I see roadmaps this big more often in programs, but if you have frequent customer releases, you might have it for a project, also.

I like knowing where the product is headed. I like knowing when we think we might want releases. (Unless you can do continuous delivery. Most of my clients are not there. They might not ever get there, either. Different post.)

Here’s the problem with the big picture. No team can deliver according to the big picture. It’s too big. Teams need the roadmap (which I liken to a wish list) and they need a ranked backlog of small stories they can work on now.

In Agile and Lean Program Management, I have this picture of what an example roadmap might look like.

This particular roadmap works in iteration-based agile. It works in flow-based agile, too. I don’t care what a team uses to deliver value. I care that a team delivers value often. This image uses the idea that a team will release internally at least once a month. I like more often if you can manage it.

Releasing often (internally or externally) is a function of small stories and the ability to move finished work through your release system. For now, let’s imagine you have a frictionless release system. (Let me know if you want a blog post about how to create a frictionless release system. I keep thinking people know what they need to do, but maybe it’s as clear as mud to you.)

The smaller the story, the easier it is for the team to deliver. Smaller stories also make it easier for the PO to adapt. Small stories allow discovery along with delivery (yes, that’s a link to Ellen Gottesdiener’s book). And, many POs have trouble writing small stories.

That’s because the PO is thinking in terms of feature sets, not features. I gave an example for secure login in How to Use Continuous Planning. It’s not wrong to think in feature sets. Feature sets help us create the big picture roadmap. And, the feature set is insufficient for the frequent planning and delivery we want in agile.

I see these problems in creating feature sets:

  • Recognizing the different stories in the feature set (making the stories small enough)
  • Ranking the stories to know which one to do first, second, third, etc.
  • What to do when the PO realizes the story or ranking needs to change.

I’ll address these issues in the next posts.

If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.

Categories: Blogs

Product Owners and Learning, Part 2

Johanna Rothman - Wed, 06/22/2016 - 18:03

In Part 1, I talked about the way POs think about the big picture and the ranked backlog. The way to get from the big picture to the ranked backlog is via deliverables in the form of small (user) stories. See the wikipedia page about user stories. Notice that they are a promise for a conversation.

I talked about feature sets in the first post, so let me explain that here. A feature set is several related stories. (You might think of a feature set as a theme or an epic.) I like stories to be small: something the team can complete in one day or less. I have found that the smaller the story, the earlier the team gets feedback from the product owner. The more often the PO sees the feature set evolving, the better the PO can refine the future stories. The more often the feedback, the easier it is for everyone to change:

  • The team can change how they implement, or what the feature looks like.
  • The PO can change the rest of the backlog or the rank order of the features.

I realize that if you commit to an entire feature set or a good chunk for an iteration, you might not want to change what you do in this iteration. If you have an evolving feature set, where the PO needs to see some part before the rest, I recommend you use flow-based agile (kanban). A kanban with WIP limits will allow you to change more often. (Let me know if that part was unclear.)
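To make the WIP-limit point concrete, here is a small hypothetical sketch of a kanban column that refuses new work until something finishes. The class and story names are invented for illustration:

```python
# Hypothetical sketch of a kanban column with a WIP limit: pulling new
# work is refused until something in progress finishes, which is what
# keeps a flow-based team free to change direction often.
class KanbanColumn:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, story):
        """Pull a story only if the column is under its WIP limit."""
        if len(self.items) >= self.wip_limit:
            return False  # swarm on what is already in progress instead
        self.items.append(story)
        return True

    def finish(self, story):
        self.items.remove(story)

in_progress = KanbanColumn("In Progress", wip_limit=2)
assert in_progress.pull("story A")
assert in_progress.pull("story B")
assert not in_progress.pull("story C")   # limit reached: finish something first
in_progress.finish("story A")
assert in_progress.pull("story C")       # capacity freed; re-ranking is cheap
```

Because nothing new starts until something finishes, the PO can re-rank the remaining backlog at any time without disrupting committed work.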

Now, not everyone shares my love of one-day stories. I have a client whose team regularly takes stories of size 20 or something like that. The key is that the entire team swarms on the story and finishes it in two days, maybe three. When I asked him for more information, he explained it this way.

“Yes, we have feature sets. And, our PO just can’t see partial finishing. Well, he can see it, but he can’t use it. Since he can’t use it, he doesn’t want to see anything until it’s all done.”

I asked him if he ever had problems where they had to redo the entire feature. He smiled and said,

“Yes. Just last week we had this problem. Since I’m the coach, I explained to the PO that the team had effectively lost those three days when they did the ‘entire’ feature instead of just a couple of stories. The PO looked at me and said, ‘Well, I didn’t lose that time. I got to learn along with the team. My learning was about flow and what I really wanted. It wasn’t a waste of time for me.’

“I learned then about the different rates of learning. The team and the PO might learn differently. Wow, that was a big thing for me. I decided to ask the PO if he wanted me to help him learn faster. He said yes, and we’ve been doing that. I’m not sure I’ll ever get him to define more feature sets or smaller stories, but that’s not my goal. My goal is to help him learn faster.”

Remember that the PO is learning along with the developers and testers. This is why having conversations about stories works. As the PO explains the story, the team learns. In my experience, the PO also learns. It’s also why paper prototypes work well. Instead of someone (PO or BA or anyone) developing the flow alone, when the team develops the flow on paper with the PO/BA, everyone learns together.

Small stories and conversations help the entire team learn together.

Small features are about learning faster. If you, too, have the problem where the team is learning at a different rate than the PO, ask yourself these questions:

  • What kind of acceptance criteria do we have for our stories?
  • Do those acceptance criteria make sense for the big feature (feature set) in addition to the story?
  • If we have a large story, what can we do to show progress and get feedback earlier?
  • How are we specifying stories? Are we using specific users and having conversations about the story?

I’ve written before about how to make small stories.

The smaller the story, the more likely everyone will learn from the team finishing it.

I’ll address ranking in the next post.

If you want to learn how to deliver what your customers want using agile and lean, join me in the next Product Owner workshop.

Categories: Blogs
