
Feed aggregator

Conventional HTML in ASP.NET MVC: Client-side templates

Jimmy Bogard - Thu, 08/14/2014 - 22:21

Other posts in this series:

In our last post, we brought everything together to build composable blocks of tags driven off of metadata. We did this to make sure that when a concept exists in our application, it’s only defined once and the rest of our system builds off of this concept. This reduces logic duplication across several layers, ensuring that we don’t have to “remember” to do repetitive tasks, like a required field needing an asterisk and data attributes.

All of this works great because we’ve got all the information at our disposal on the server-side, and we can push the completed product down to the client (browser). But what if we’re building a SPA, using Angular or Knockout or Ember or Backbone? Do we have to revert to our old ways of duplication? Or can we have the best of both worlds?

There tend to be three general approaches:

  • Just hard code it and accept the duplication
  • Include metadata in your JSON API calls, through hypermedia or other means
  • Build intelligence into templates

I’ve done all three, and each has its benefits and drawbacks. Most teams I talk to go with #1, and some go with #2. Very few teams I meet even think about #3.

What I’d like to do is have the power of my original server-side Razor templates, with the strongly-typed views and intelligent expression-based helpers, but instead of complete HTML templates, have these be Angular views or Ember templates.


When we deliver our templates to the client as part of our SPA, we’ll serve up a special version of them, one that’s been parsed by our Razor engine. Normally, the Razor engine performs two tasks:

  • HTML generation
  • Binding model data

Instead, we’ll only generate our template, and the client will then bind the model to our template.

Serving templates, Ember style

Normally, the MVC view engine runs the Razor parser. But we’re not going down that path; we’re going to parse the templates ourselves. The result of parsing will be placed inside our script tags. This part is a little long, so I’ll just link to the entire set of code.

HtmlHelperExtensions

A couple key points here. First, the part that runs the template through the view engine to render an HTML string:

builder.AppendLine("<script type=\"text/x-handlebars\" data-template-name=\"" + fullTemplateName + "\">");
var controllerContext = new ControllerContext(helper.ViewContext.HttpContext, new RouteData(), helper.ViewContext.Controller);
controllerContext.RouteData.Values["controller"] = string.IsNullOrEmpty(relativeDirName) ? "Home" : Path.GetDirectoryName(relativeDirName);
var result = ViewEngine.FindView(controllerContext, subtemplateName, null, false);
var stringWriter = new StringWriter(builder);
var viewContext = new ViewContext(controllerContext, result.View, new ViewDataDictionary(), new TempDataDictionary(), stringWriter);
result.View.Render(viewContext, stringWriter);

builder.AppendLine("</script>");

We render the view through our normal Razor view engine, but surround the result in a script tag signifying this is a Handlebars template. We’ll place the results in cache of course, as there’s no need to perform this step more than once. In our context objects we build up, we simply leave our ViewData blank, so that there isn’t any data bound to input elements.
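
The caching itself isn’t shown in the snippet above. Here is a minimal sketch of what it could look like, assuming System.Runtime.Caching.MemoryCache and a cache key based on the template name; the class and method names are hypothetical, not from the linked source:

using System;
using System.Runtime.Caching;

// Hypothetical sketch: cache rendered Handlebars templates so the Razor engine
// only runs once per template. Names here are illustrative, not from the post.
public static class RenderedTemplateCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string GetOrRender(string fullTemplateName, Func<string> render)
    {
        // Return the previously rendered template if we have one.
        var cached = Cache.Get(fullTemplateName) as string;
        if (cached != null)
            return cached;

        // Otherwise render once and keep it for the lifetime of the app.
        var rendered = render();
        Cache.Add(fullTemplateName, rendered, DateTimeOffset.MaxValue);
        return rendered;
    }
}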

We also make sure our templates are named correctly, using the folder structure to match Ember’s conventions. In our one actual MVC action, we’ll include the templates in the first request:

@Scripts.Render("~/bundles/ember")
@Scripts.Render("~/bundles/app")

@Html.Action("Enumerations", "Home")
@Html.RenderEmber()
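
As a rough illustration of the folder-based naming mentioned above (a sketch only; the real mapping lives in the linked HtmlHelperExtensions code, and the class and method names here are assumptions), a Razor view path can be turned into an Ember-style template name like this:

// Hypothetical sketch: derive an Ember/Handlebars template name from a Razor
// view path, e.g. "~/Views/Account/Create.cshtml" becomes "account/create".
public static class TemplateNaming
{
    public static string ToTemplateName(string viewPath)
    {
        var relative = viewPath
            .Replace("~/Views/", string.Empty)
            .Replace(".cshtml", string.Empty);

        return relative.Replace('\\', '/').ToLowerInvariant();
    }
}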

Now that our templates are parsed and named appropriately, we can focus on building our view templates.

Conventional Handlebars

At this point, we want to use our HTML conventions to build out the elements needed for our Ember templates. Unfortunately, we won’t be able to use our previous tools to do so, as Ember uses Handlebars as its templating language. If we were using Angular, it might be a bit easier to build out our directives, but not by much. Client-side binding using templates or directives requires special syntax for binding to scope/model/controller data.

We don’t have our convention model, or even our HtmlTag library to use. Instead, we’ll have to use the old-fashioned way – building up a string by hand, evaluating rules as we go. I could have built a library for creating Ember view helpers, but it didn’t seem to be worth it in my case.

Eventually, I want to get to this:

@Html.FormBlock(m => m.FirstName)

But have it render this:

<div class="form-group">
    <label class="required control-label col-md-2"
       {{bind-attr for="view.firstName.elementId"}}>
       First Name
    </label>
    <div class="col-md-10">
        {{view TextField class="required form-control" 
               data-key="firstName" 
               valueBinding="model.firstName" 
               viewName="firstName" 
               placeholder="First"
        }}
    </div>
</div>

First, let’s start with our basic input and just cover the very simple case of a text field.

public static MvcHtmlString Input<TModel, TValue>(this HtmlHelper<TModel> helper,
    Expression<Func<TModel, TValue>> expression,
    IDictionary<string, object> htmlAttributes)
{
    var text = ExpressionHelper.GetExpressionText(expression).ToCamelCase();
    var modelMetadata = ModelMetadata.FromLambdaExpression(expression, helper.ViewData);
    var unobtrusiveAttributes = GetUnobtrusiveValidationAttributes(helper, expression);

    var builder = new StringBuilder("{{view");

    builder.Append(" TextField");

    if (unobtrusiveAttributes.ContainsKey("data-val-required"))
    {
        builder.Append(" class=\"required\"");
    }

    builder.AppendFormat(" data-key=\"{0}\"", text);

    builder.AppendFormat(" valueBinding=\"model.{0}\"", text);
    builder.AppendFormat(" viewName=\"{0}\"", text);

    if (!string.IsNullOrEmpty(modelMetadata.NullDisplayText))
        builder.AppendFormat(" placeholder=\"{0}\"", modelMetadata.NullDisplayText);

    if (htmlAttributes != null)
    {
        foreach (var item in htmlAttributes)
        {
            builder.AppendFormat(" {0}=\"{1}\"", item.Key, item.Value);
        }
    }

    builder.Append("}}");

    return new MvcHtmlString(builder.ToString());
}

We grab the expression text and model metadata, and begin building up our Handlebars snippet. We apply our conventions manually for each required attribute, including any additional attributes we need based on the MVC-style mode of passing in extra key/value pairs as a dictionary.
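
The snippet leans on two helpers that aren’t reproduced here: ToCamelCase and GetUnobtrusiveValidationAttributes. A minimal sketch of what they might look like, assuming the latter simply defers to MVC’s built-in HtmlHelper.GetUnobtrusiveValidationAttributes (the full source is in the linked code, so treat these as illustrative, not the post’s actual implementation):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Web.Mvc;

public static class ConventionHelpers
{
    // Sketch of the ToCamelCase extension: lower-case the first character,
    // so "FirstName" becomes "firstName".
    public static string ToCamelCase(this string value)
    {
        if (string.IsNullOrEmpty(value))
            return value;

        return char.ToLowerInvariant(value[0]) + value.Substring(1);
    }

    // Sketch of GetUnobtrusiveValidationAttributes: defer to the built-in MVC
    // helper, which emits the data-val-* pairs checked above.
    public static IDictionary<string, object> GetUnobtrusiveValidationAttributes<TModel, TValue>(
        HtmlHelper<TModel> helper,
        Expression<Func<TModel, TValue>> expression)
    {
        var name = ExpressionHelper.GetExpressionText(expression);
        var metadata = ModelMetadata.FromLambdaExpression(expression, helper.ViewData);
        return helper.GetUnobtrusiveValidationAttributes(name, metadata);
    }
}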

Once we have this in place, we can layer on our label helper:

public static MvcHtmlString Label<TModel, TValue>(
    this HtmlHelper<TModel> helper, 
    Expression<Func<TModel, TValue>> expression)
{
    var text = ExpressionHelper.GetExpressionText(expression);
    var metadata = ModelMetadata.FromLambdaExpression(expression, helper.ViewData);
    var unobtrusiveAttributes = GetUnobtrusiveValidationAttributes(helper, expression);

    var builder = new StringBuilder("<label ");
    if (unobtrusiveAttributes.ContainsKey("data-val-required"))
    {
        builder.Append(" class=\"required\"");
    }
    builder.AppendFormat(" {{{{bind-attr for=\"view.{0}.elementId\"}}}}", text.ToCamelCase());
    builder.Append(">");

    string labelText = metadata.DisplayName ?? (metadata.PropertyName == null
        ? text.Split(new[] {'.'}).Last()
        : Regex.Replace(metadata.PropertyName, "(\\B[A-Z])", " $1"));

    builder.Append(labelText);
    builder.Append("</label>");

    return new MvcHtmlString(builder.ToString());
}
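
As a quick illustration of the label-text fallback in that helper (a worked example, not from the post), the regex inserts a space before each interior capital letter:

// Illustration of the fallback above: split a Pascal-cased property name.
var labelText = System.Text.RegularExpressions.Regex.Replace(
    "ConfirmPassword", "(\\B[A-Z])", " $1");
// labelText == "Confirm Password"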

It’s very similar to the code in the MVC label helper, with the slight tweak of defaulting label names to the property names with spaces between words. Finally, our input block combines these two together:

public static MvcHtmlString FormBlock<TModel, TValue>(
    this HtmlHelper<TModel> helper,
    Expression<Func<TModel, TValue>> expression)
{
    var builder = new StringBuilder("<div class='form-group'>");
    builder.Append(helper.Label(expression));
    builder.Append(helper.Input(expression, null)); // no extra HTML attributes
    builder.Append("</div>");
    return new MvcHtmlString(builder.ToString());
}

Now, our views start to become a bit more sane, and it takes a keen eye to see that it’s actually a Handlebars template. We still get strongly-typed helpers, metadata-driven elements, and synergy between our client-side code and our server-side models:

@model MvcApplication.Models.AccountCreateModel
{{title 'Create account'}}

<form {{action 'create' on='submit'}}>
    <fieldset>
        <legend>Account Information</legend>
        @Html.FormBlock(m => m.Username)
        @Html.FormBlock(m => m.Password)
        @Html.FormBlock(m => m.ConfirmPassword)
    </fieldset>

We’ve now come full circle, leveraging the techniques that let us be ultra-productive building out pages on the server side without losing that productivity on the client side. A concept such as “required field” lives in exactly one spot, and the rest of our system reads and reacts to that information.

And that, I think, is pretty cool.


Categories: Blogs

What You Need to Know About Taskboards vs. Drill-Through Boards

Have you ever wondered when to use a taskboard or a drill-through board in LeanKit? Taskboards and drill-through boards are both designed to assist with visualizing the breakdown of work, yet they have distinctly different uses. In an interview with a panel of our product experts, we learned that each option offers its own unique advantages. What’s the main distinction […]

The post What You Need to Know About Taskboards vs. Drill-Through Boards appeared first on Blog | LeanKit.

Categories: Companies

A is for Agile... and changing the world

Scrum Breakfast - Thu, 08/14/2014 - 17:53
We are uncovering better ways of developing software, by doing it and helping others to do it...

-- The Manifesto for Agile Software Development 
In 2001, seventeen people signed a 73 word statement that changed software development forever. Before: a bunch of guys doing "lightweight project management." After: a set of values which coalesced into a name, "Agile" and later into a movement. People identified with the values and transformed their working worlds with things like Scrum, Extreme Programming, Kanban, Lean Startup, and still other ideas and frameworks we haven't invented yet.

As we got better at developing software, we discovered the need for better ways of doing a lot of things. The Agile movement inspired DevOps, which is looking for better ways to operate computer systems, and Stoos, which is looking for better ways of management. Soon, maybe scientists will look for better ways of conducting research, and maybe you will look for better ways of doing whatever you do.

How can you use the Agile Manifesto to help you? Start with the Agile Manifesto, find some colleagues who do what you do, and play with the Manifesto a bit. Not developing software? What is your essential goal or reason for being? Adjust the manifesto as necessary to fit your situation. It probably won’t need many changes.

So if you are in the HR department, what do you do? Maybe it is something like 'developing our human potential.' So what would a manifesto for agile human resources look like? Maybe something like this:
Sample Manifesto for Agile Human Development

We are uncovering better ways of developing human potential, by doing it and helping others to do it. Through this work we have come to value:
  • Individuals and interactions over processes and tools
  • Autonomy, mastery and purpose over documentation, directives and control
  • Collaboration over contract negotiation
  • Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.

This is probably not quite right, but you get the idea. And it should be your manifesto, not mine, so feel free to keep working on it!
Want to change your world? Start looking for a better way! Maybe write your own manifesto... Want to find like-minded individuals? Check out the Open Space event 24 Think Park in Zurich...



Categories: Blogs

Where does R Studio install packages/libraries?

Mark Needham - Thu, 08/14/2014 - 12:24

As a newbie to R, I wanted to look at the source code of some of the libraries/packages that I’d installed via R Studio, which I initially struggled to do as I wasn’t sure where the packages had been installed.

I eventually came across a StackOverflow post which described the .libPaths function which tells us where that is:

> .libPaths()
[1] "/Library/Frameworks/R.framework/Versions/3.1/Resources/library"

If we want to see which libraries are installed we can use the list.files function:

> list.files("/Library/Frameworks/R.framework/Versions/3.1/Resources/library")
 [1] "alr3"         "assertthat"   "base"         "bitops"       "boot"         "brew"        
 [7] "car"          "class"        "cluster"      "codetools"    "colorspace"   "compiler"    
[13] "data.table"   "datasets"     "devtools"     "dichromat"    "digest"       "dplyr"       
[19] "evaluate"     "foreign"      "formatR"      "Formula"      "gclus"        "ggplot2"     
[25] "graphics"     "grDevices"    "grid"         "gridExtra"    "gtable"       "hflights"    
[31] "highr"        "Hmisc"        "httr"         "KernSmooth"   "knitr"        "labeling"    
[37] "Lahman"       "lattice"      "latticeExtra" "magrittr"     "manipulate"   "markdown"    
[43] "MASS"         "Matrix"       "memoise"      "methods"      "mgcv"         "mime"        
[49] "munsell"      "nlme"         "nnet"         "openintro"    "parallel"     "plotrix"     
[55] "plyr"         "proto"        "RColorBrewer" "Rcpp"         "RCurl"        "reshape2"    
[61] "RJSONIO"      "RNeo4j"       "Rook"         "rpart"        "rstudio"      "scales"      
[67] "seriation"    "spatial"      "splines"      "stats"        "stats4"       "stringr"     
[73] "survival"     "swirl"        "tcltk"        "testthat"     "tools"        "translations"
[79] "TSP"          "utils"        "whisker"      "xts"          "yaml"         "zoo"

We can then drill into those directories to find the appropriate file – in this case I wanted to look at one of the Rook examples:

$ cat /Library/Frameworks/R.framework/Versions/3.1/Resources/library/Rook/exampleApps/helloworld.R
app <- function(env){
    req <- Rook::Request$new(env)
    res <- Rook::Response$new()
    friend <- 'World'
    if (!is.null(req$GET()[['friend']]))
	friend <- req$GET()[['friend']]
    res$write(paste('<h1>Hello',friend,'</h1>\n'))
    res$write('What is your name?\n')
    res$write('<form method="GET">\n')
    res$write('<input type="text" name="friend">\n')
    res$write('<input type="submit" name="Submit">\n</form>\n<br>')
    res$finish()
}
Categories: Blogs

SAP classic – agile

Scrum 4 You - Thu, 08/14/2014 - 07:35

“Software development” is often associated with people writing code for a solution line by line on their computers. In fact, many business processes in companies today are implemented with standard software. A new requirement does not necessarily mean that genuinely new code has to be written. In many cases, “adapting” the standard software, the so-called “customizing”, is enough. Around the best-known products, this customizing of standard software has spawned entire lines of business. To put it plainly, I am talking, for example, about SAP consultants who use their expertise to adapt the individual SAP modules, also referred to as “SAP classic”, so that they map and thereby support the company’s processes.

In the real business world, however, there are often projects that use existing SAP modules but go beyond pure customizing of the software and extend the module with tailor-made additions. And this is exactly where it gets interesting: SAP consultants meet software developers. Why is not easy to explain. Maybe it is the different kinds of people found in each profession, maybe it is the different roles, or maybe it is something else entirely. The reason may not matter much. The fact is that such a cross-functional team can work together very well, efficiently and successfully with agile methods like Scrum.

In my projects I have found that it is often the consultants who have to get used to these new team constellations. Their approach is frequently shaped by traditional thinking: a concept is worked out before implementation begins. The advantage is that this concept is usually developed in intensive interaction with the customer in joint workshops. What is often problematic is the limited imagination of the customers, the business departments of the company. For them, the world of data structures, objects and transactions is extremely abstract and usually too complex to understand completely. As a result, concepts that have been discussed for months and agreed with the customer often still end up as process implementations in the system that do not support the user optimally.

The good news is: it can work. Experience shows that consultants and developers can work side by side in cross-functional teams in short iterations and thus continuously deliver business value. Customizing settings and coding can then go hand in hand in the form of user stories. If different options for mapping a process do need to be worked out, discussed and agreed more intensively with the users, these steps should also be captured as user stories. Elaborated and well-prepared decision papers are deliveries of customer value too. Thanks to the continuous deliveries, users can now review not only data models but running software. In the best case, they can already use their business process, or parts of it, after just a few sprints and generate feedback for the agile team. Everyone involved can benefit from applying agile methods in this way, even or especially because this case is not just about pure software development.

Related posts:

  1. In der Welt des SAP Customizings
  2. Scrum und Festpreis
  3. Teams – co-located, distributed, dispersed

Categories: Blogs

Firing The Customer You Love

Derick Bailey - new ThoughtStream - Wed, 08/13/2014 - 21:38

Everyone talks about the toxic customers… the ones that complain all the time, take up all your resources, demand free things in exchange for not complaining, and generally make your life as a business person hell. It’s relatively easy to fire the toxic customer, I think. I’ve done it a few times as a consultant, and it gets easier over time at least. 

But there’s one thing that no one ever talks about… something that I’ve come to think of as the 2nd worst email that your business will ever send:

Firing the customer that you love.

Firing the customer you love

Building A Relationship

It’s late 2013. I’m searching twitter for a few select keywords regarding podcasting, looking for people with whom I can engage conversation. My goal is to help others get up and running with a podcast, answer questions, and generally be a valuable resource as they are starting out. It’s how I attract customers at this early stage in SignalLeaf’s life. 

And there they are… the perfect person for me to help. A person in a music scene that I have been involved in before; someone that has knowledge of sound, recording and production; a podcast that is going to be started soon, with a mission that I fully support and want to be part of, but is not yet set up anywhere. So I reach out. I ask if they have any questions about podcasting specifically… about hosting, about RSS feeds, about the fine details of getting a podcast online. A few tweets back and forth leads to email exchanges. More questions emerge from the emails, and it quickly turns in to phone calls. I spent probably 5 or 6 hours total, doing research, answering questions and providing the information they needed to get their podcast up and running correctly. 

I was ecstatic when they said they wanted to use SignalLeaf for hosting even after we had talked about their concerns with using a new, unknown player in the arena. It was going to be great… and it was great. SignalLeaf grew in new and exciting ways, with the traffic that this podcast generated. Business was good and growing through sponsorship messages with them. 

And then I asked them to leave SignalLeaf, offering to help them migrate to another service.

The Harsh Realities Of Customer Success

In most businesses, the success of your customers typically brings in more income and more customers for you. When someone uses your service a lot, they buy in to more features, higher priced plans and word-of-mouth referrals that bring you even more happy customers.

Podcast hosting is a commodities business, to a large extent. It’s file hosting and bandwidth providing. Having a hugely successful customer when you’re a podcast host can present a few interesting challenges, though. When a podcast is explosively successful (gaining 15,000+ listens on their first episode, and moving in to the 30K to 40K listener range, per episode, in 6 months) it costs a lot in terms of bandwidth.

Even now, in 2014, bandwidth is not free. Yes, it’s cheap. At $0.12/GB, I can serve a few thousand listens per episode on the average podcast, and not worry about exceeding the monthly subscription rate I am charging for that podcast. But when you look at my last post on figuring out where the bandwidth charges are coming from in my Amazon S3 account, it’s clear that the bandwidth costs can quickly destroy your bank account and far exceed all income from all customers, for a podcast that is genuinely gigantic in its listener base.

Asking Them To Pay Or Leave

It took a few months of ignoring the cause, eating the $600 to $800 per month bandwidth bill and driving my credit cards to their limit for me to realize what I had to do. Yes, this podcast was bringing in new customers through SignalLeaf sponsorships. But sadly it wasn’t enough new income for me to absorb the cost, let alone remain profitable. So I did the only thing I could do…

I contacted the people that run the podcast, explained the situation to them (in detail – leaving nothing ambiguous) and asked them to either find another host or pay for the bandwidth.

This was especially difficult for me, as I believed in the mission of the podcast, found it to be thoroughly entertaining and was proud to be at least a man-behind-the-curtains part of what they were doing. But I had to do it, because I was going to end up bankrupt if I didn’t.

Ultimately, they chose to move to another host and I respect that decision. I worked to make sure the migration from SignalLeaf to the new host was as smooth as possible. I made it as easy for them to leave as it was for them to get started with SignalLeaf. I certainly hope that the effort was appreciated, even as I was escorting them out the door in to the arms of another host. 

Some Interesting Lessons In Business

Lesson #1: Allowing your customers to exceed your income with their use of your system is untenable. “Really? NO!” – yup. I know. It’s basic math that even I can do in my head. And it’s a problem that has befallen more than one company in the last few years. I’ve read stories about “freemium” photo sharing services that tank and shut down in the face of 30,000+ users of their service… when customers don’t pay for bandwidth in a bandwidth/commodities service… 

Lesson #2: Sponsorships do work… but you have to target the right audience with the right message. The audience for this particular podcast was very large, and I did manage to pull in some new customers through the sponsorship. But I can only imagine how much more effective the sponsorship would be if SignalLeaf were in the same market space as this podcast… if SignalLeaf were sponsoring an entrepreneurial podcast, or a podcast about how to podcast for business, etc.

Lesson #3: Your favorite customer might be your “worst” customer. While it’s true that I loved what this podcast was about and want to support them, I was allowing myself to be taken for a ride and lose my shirt in the process. They may have been one of my favorite customers from a personal perspective… but they were a “bad” customer from a business perspective. I just didn’t want to admit it.

Lesson #4: If I had been in contact with them from the start, about the cost of hosting their show, there’s a chance that we could have worked out a better arrangement. But when it came down to it, I presented the problem too late and gave them an almost ultimatum email. Shame on me for that, and I’m not surprised they chose to go somewhere else. I only hope my grace in helping them move was noted on their way out.

An Interesting Lesson In Podcast Hosting

In an effort to help the podcast that I was booting out the door, I contacted a few of the other podcast hosting services; the big dogs; the 8,000lb gorillas in the room. I’ve had contact with them before, so it was easy for me to get directly in touch with the right person. I asked questions about what the costs of hosting this particular podcast would be, informed the services of the downloads per episode, average file size, etc. I brought all of this information back to the people running the podcast and they made their decision based on the information I presented to them. 

And you wanna know something REALLY interesting about the podcast hosting industry? Every podcast hosting company out there will ask you to pay for bandwidth. 

I honestly didn’t know this, before. I was under the delusion that these services were providing unlimited bandwidth to even the smallest paid plan. It turns out this isn’t quite true, even if other services directly advertise unlimited bandwidth. The thing they don’t advertise in very large writing (but it’s there if you look hard enough) is that the “unlimited” bandwidth comes at 1 of 2 prices: sponsorship or cash.

Most of these services don’t care about the meager traffic that your podcast generates. It’s like a gym membership… you sell more subscriptions than are being used at any given moment. But when you get in to the kind of traffic that a podcast host cares about, then you are moving in to their “professional” services. At this level, they will either ask you to use their sponsorship program or ask you to pay for the bandwidth. In any case, you end up paying for the traffic… whether it’s paid for with cash or with sponsorships. 

This is probably the most valuable lesson for me, in all of this. This gives me new insight in to how to properly structure pricing for SignalLeaf… how to present it not as unlimited, but as one option for paying for the bandwidth that your podcast will use. I’m still struggling to define this completely, but it’s giving me a lot to think about and work with.

SignalLeaf Is A Better Service Because Of This

In spite of the cost associated with this podcast, they had a direct influence on the quality and stability of SignalLeaf. I probably wouldn’t have as robust a service as I do, without having served 20K to 40K episode listens each week, for the last 6 months. I’m grateful for the opportunity that I had in hosting this podcast. I wish them nothing but the best of luck in their continued and growing success. I wish I could have found a way to turn this situation in to a profitable scenario for SignalLeaf… but it is what it is. I’m looking for the lessons that I can get from this experience. I’m doing what I can to improve the service. And I’m 100% certain that I lived up to my ideals of always helping the podcaster, to create a fan and someone that advertises for me, even when they choose to use another service.

P.S. I called this the 2nd worst email that your business will ever send. So what’s the first? I’d have to say the worst email you’ll ever send (and hope you never have to) is the “we’re shutting down” email.
Categories: Blogs

Don’t try and do it alone.

Derick Bailey - new ThoughtStream - Wed, 08/13/2014 - 20:51


I don’t know where I would be without my friends and my entrepreneurship group.

Staring at a blank canvas or an empty file that I am supposed to fill with code is terrifying. It’s like looking over the edge of the world, in to the abyss. It can drive me mad at times. The emptiness… the limitless possibilities… where to start? What to do? What’s the most important thing? What are the pieces that absolutely must be there, first? With code, at least, I can typically just start hacking junk together and then later realize where I should have actually started. But when it comes to business and planning … I get lost, quickly.

Yet I’m a quick learner, an intrinsically motivated person and someone that is always moving projects forward toward a goal. It’s what makes me good at consulting, I think. I can walk in to a situation, evaluate the circumstances and create movement in the direction that is needed. But it gets overwhelming, quickly. Doing things on my own, with no accountability other than me and my customers, is really hard some times.

Fortunately, I don’t have to do this alone. I’m never without help, and I can’t imagine ever being without help.

While I may not have a boss telling me what to do, and I don’t have coworkers that I can bounce ideas off all the time, I do have a support network that can help me in the same ways. In my case, I have 3 different “groups” that I can talk with. I have my entrepreneurship group that meets every friday. I have a good friend who also happens to be my current client for contract work, that is doing his own thing in some very different ways. And I have my dad – a man that has been an entrepreneur for longer than I’ve been alive.

I’m never truly alone in any of this. And you shouldn’t be, either.

It’s dangerous to go alone – especially when you don’t know what you’re doing. In anything new, important, difficult, or even just different in your life, find a support group of some kind. Whether it’s coworkers that get together and talk at lunch, a group of people outside work that talk once a week and/or via email, friends that you know and trust, or whoever it might be. Find someone, some group, somewhere that you can be a part of, that will accept you for all your amazing talents and horrifying faults, and that will be there for you when you need help.

I rely on my support network every day. Knowing what my limits are, where I lack discipline and experience – this is part of growing and understanding, becoming a better entrepreneur, developer, or just plain better me.

I don’t know where I would be without Josh, John, Justin and my dad.  I do know that I would not be where I am today without them, or without everyone else that has helped me in my career. All of my coworkers. All of my friends. All of the people that are currently in my life, helping me through this - I can’t imagine how I would be where I am now, without them.

I only hope that I can offer the same kind of support, inspiration and motivation to someone else, one day.

    – Derick

Categories: Blogs

Business Capabilities and Microservices

Leading Agile - Mike Cottmeyer - Wed, 08/13/2014 - 17:25

I don’t often use this forum to link out to other websites and authors, but I read a post last night by Martin Fowler and James Lewis that really gets to the heart of this issue around encapsulation, decoupling, and value streams I’ve been talking about lately.

http://www.leadingagile.com/2014/08/encapsulating-value-streams-object-oriented-enterprise/

http://www.leadingagile.com/2014/08/agressive-decoupling-scrum-teams/

The article does a great job of describing the problem and the end-state solution… it doesn’t say much about how to get there. Even so, I was impressed by the article and I wanted to share it with you guys in case you haven’t seen it.

I think this kind of architecture might be a prerequisite for true agile at scale, take a look:

http://martinfowler.com/articles/microservices.html

UPDATE: Here is another interesting post I just discovered on Twitter by Richard Clayton highlighting some of the mistakes they made implementing this approach. Doesn’t invalidate the concept, just some good things to be aware of.

https://rclayton.silvrback.com/failing-at-microservices

The post Business Capabilities and Microservices appeared first on LeadingAgile.

Categories: Blogs

People Are Not Resources

Johanna Rothman - Wed, 08/13/2014 - 15:06

My manager reviewed the org chart along with the budget. “I need to cut the budget. Which resources can we cut?”

“Well, I don’t think we can cut software licenses,” I was reviewing my copy of the budget. “I don’t understand this overhead item here,” I pointed to a particular line item.

“No,” he said. “I’m talking about people. Which people can we lay off? We need to cut expenses.”

“People aren’t resources! People finish work. If you don’t want us to finish projects, let’s decide which projects not to do. Then we can re-allocate people, if we want. But we don’t start with people. That’s crazy.” I was vehement.

My manager looked at me as if I’d grown three heads. “I’ll start wherever I want,” he said. He looked unhappy.

“What is the target you need to accomplish? Maybe we can ship something earlier, and bring in revenue, instead of laying people off? You know, bring up the top line, not decrease the bottom line?”

Now he looked at me as if I had four heads.

“Just tell me who to cut. We have too many resources.”

When managers think of people as resources, they stop thinking. I’m convinced of this. My manager was under pressure from his management to reduce his budget. In the same way that technical people under pressure to meet a date stop thinking, managers under pressure stop thinking. Anyone under pressure stops thinking. We react. We can’t consider options. That’s because we are so very human.

People are resourceful. But we, the people, are not resources. We are not the same as desks, licenses, infrastructure, and other goods that people need to finish their work.

We need to change the language in our organizations. We need to talk about people as people, not resources. And that is the topic of this month’s management myth: Management Myth 32: I Can Treat People as Interchangeable Resources.

Let’s change the language in our organizations. Let’s stop talking about people as “resources” and start talking about people as people. We might still need layoffs. But, maybe we can handle them with humanity. Maybe we can think of the work strategically.

And, maybe, just maybe, we can think of the real resources in the organization. You know, the ones we buy with the capital equipment budget or expense budget, not operating budget. The desks, the cables, the computers. Those resources. The ones we have to depreciate. Those are resources. Not people.

People become more valuable over time. Show me a desk that does that. Ha!

Go read Management Myth 32: I Can Treat People as Interchangeable Resources.

Categories: Blogs

Success Articles for Work and Life

J.D. Meier's Blog - Wed, 08/13/2014 - 08:01

"Success consists of going from failure to failure without loss of enthusiasm." -- Winston Churchill

I now have more than 300 articles on the topic of Success to help you get your game on in work and life:

Success Articles

That’s a whole lot of success strategies and insights right at your fingertips. (And it includes the genius from a wide variety of sources including  Scott Adams, Tony Robbins, Bruce Lee, Zig Ziglar, and more.)

Success is a hot topic. 

Success has always been a hot topic, but it seems to be growing in popularity.  I suspect it’s because so many people are being tested in so many new ways and competition is fierce.

But What is Success? (I tried to answer that using Zig Ziglar’s frame for success.)

For another perspective, see Success Defined (It includes definitions of success from Stephen Covey and John Maxwell.)

At the end of the day, the most important definition of success, is the one that you apply to you and your life.

People can make or break themselves based on how they define success for their life.

Some people define success as another day above ground, but others have a very high and very strict bar that only a few mere mortals can ever achieve.

That said, everybody is looking for an edge.   And, I think our best edge is always our inner edge.

As my one mentor put it, “the fastest thing you can change in any situation is yourself.”  And as we all know, nature favors the flexible.  Our ability to adapt and respond to our changing environment is the backbone of success.   Otherwise, success is fleeting, and it has a funny way of eluding or evading us.

I picked a few of my favorite articles on success.  These ones are a little different by design.  Here they are:

Scott Adams’ (Dilbert) Success Formula

It’s the Pebble in Your Shoe

The Wolves Within

Personal Leadership Helps Renew You

The Power of Personal Leadership

Tony Robbins on the 7 Traits of Success

The Way of Success

The future is definitely uncertain.  I’m certain of that.   But I’m also certain that life’s better with skill and that the right success strategies under your belt can make or break you in work and life.

And the good news for us is that success leaves clues.

So make like a student and study.

Categories: Blogs

Who should be in (agile) HR?

Scrum 4 You - Wed, 08/13/2014 - 07:41

In his short article “It’s time to split HR”, Ram Charan proposes splitting HR into an administrative department and a department for leadership and organization. His main point is that HR members need experience in other management functions such as finance. He criticizes that most current HR people cannot relate to business issues from the “real world”. I understand what his point is all about. People who study HR usually want to work with people and help them release their potential. But this seems rather difficult, as HR mostly sits in parts of the building with restricted access for “real world” people due to confidentiality reasons. The majority of them become experts in one specific field of HR (e.g. training or recruiting). Again, relating to business issues from the “real world” is rather difficult.

In an agile organization, I would propose that ScrumMasters / agile coaches take on some of the HR duties: mainly those that concern leadership and organization, but also, from time to time, administrative duties. Such a setup creates links to product development teams and their daily business issues. The ScrumMasters, being lateral leaders, know what it means to solve these “real world” problems – also known as impediments.

ScrumMasters are responsible for increasing the productivity of development teams. In order to reach their goal, they are supposed to change the organization as needed. Being involved in HR activities would be the perfect opportunity to create a link between HR expertise and the “real world”. The adjustment of “People Systems”, as described by Jay Lorsch in the Strategy Pyramid, would be much easier. And the other way round: the integration of the HR perspective into change initiatives would be ensured at all times.

What I also like about Charan’s idea is that HR is not a job position for life. Rather, it should be a “pass-through” where one can gain experience in another field of management. In an agile organization this could mean that ScrumMasters and HR experts organize themselves in communities of practice. This way, they can work together and contribute to the success of the enterprise in different ways, for example like this:

  • ScrumMasters could fill a full-time HR position for a certain period of time.
  • ScrumMasters and their teams could participate as pilots for new concepts developed by HR.
  • ScrumMasters could be friendly users for new concepts, e.g. new approaches to leadership training.
  • Engagement in different phases of the development process of new “people systems” is also possible (proposing ideas, defining the concept, collecting feedback etc.)

For a limited amount of time, ScrumMasters can be solely engaged in HR activities. Still, it is mandatory that they return to leading a team after a certain HR deliverable has been released. An HR deliverable could be a new training, a clearly visible cultural change, new processes, etc. But ScrumMasters are not the only ones who can engage in HR topics; other team members can also take part in the communities of practice. How? The ScrumMasters and HR experts will find a way!

Related posts:

  1. Organisations need to understand …
  2. 5 minutes on management
  3. Massive Multiplayer Online Games the Digital Business School of the Next Generation

Categories: Blogs

GOAT 2014 Call for Speakers – Appel aux conférenciers

Agile Ottawa - Wed, 08/13/2014 - 04:58
Gatineau Ottawa Agile Tour 2014 Call for Speakers The Gatineau  Ottawa Agile Tour (#GOAT14) is a one day conference around the theme of Agility applied to software development, management, marketing, product management and other areas of today’s businesses. This year’s event … Continue reading →
Categories: Communities

An Existence Proof and The Value of Coaching

Practical Agility - Dave Rooney - Tue, 08/12/2014 - 19:00
I found a tweet I saw this morning rather disconcerting: An embedded #agile coach billing $2500 a day for 221 days can in theory generate this much per yr: $552,500. Q: What does the client get? — Daniel Mezick (@DanielMezick) August 12, 2014 The clear implication is that coaches, like all consultants, follow the mantra, "If you can't be part of the solution, there's plenty of money to be made
Categories: Blogs

Agile Chronicles (Composite Stories) – Agile Artifacts – Ephemeral v. Enduring Value

Leading Agile - Mike Cottmeyer - Tue, 08/12/2014 - 17:22

Agile Chronicles – Composite Stories 

Agile Artifacts – Ephemeral v. Enduring Value

During retrospectives, when evaluating the quality and value of our artifacts for Epic, Feature, and Story decomposition, a common theme for our scrum teams is that these artifacts are by design barely sufficient and, as such, are ephemeral and provide no enduring value.

The design is in the code, the documentation is in the code, so we leave these artifacts attached to the engineering cards in our Agile Lifecycle Management (ALM) tool, close the cards when complete and never reference them again. Well, maybe we retain some Quality Assurance scripts that are still performed manually, but soon we will complete our QA Automation program and then the documentation will be in the code (automated scripts) and we won’t need to maintain a document artifact for QA scripts either. We accept this as a natural consequence of “barely sufficient” and we move on to the next sprint.

What if there were undetected value in some of this information, value that, if sustained over time with minimal effort, could endure and help us achieve our team and business objectives?

Consider the case for managing software assets by creating and sustaining a definitive list of features for the software asset. This list becomes the feature dictionary, a common language for all teams and manifests itself throughout the Epic, Feature, Story life cycle.

Here is the brief story of an Agile Transformation and the value we discovered by performing software asset feature management and using that common feature definition to enable traceability for scrum team accountability, Quality Assurance test planning, code file ownership, portfolio analysis, competitive analysis and financial analysis.

We have just over 5M lines of code and the list of features began as a two-tiered description of 20 Capabilities and 70 related Features. The features were later delineated to 675 sub-features (about 10 sub-features per feature) to add more granularity to our traceability.

The driving business reasons for agile transformation were Quality first and foremost, but Predictability was also a problem that needed to be solved.

“We’ve done the PxQ analysis and if we dedicate two resources from each scrum team we can fix 700 defects in 9 months. We can do it in 6 months if we hire some contract resources”

Scrum teams were delineated by the list of features and corresponding software that they “own” and are accountable for. This enabled the scrum teams to focus on improving their knowledge of their software asset and focus on improving the quality of the software asset by allocating sprint time for refactoring and for reducing the technical debt that they inherited. Each defect was re-triaged in order to assign it to a specific scrum team for resolution, and as a result each scrum team had clear visibility to their defect backlog.

“You are fixing a few problems and always breaking something else.”

Our client’s experience with our product was expressed as a negative impact to our business in the form of a declining Net Promoter Score and other reference-ability measurements. Participation in our client beta test program had dwindled to just a few long-term clients. The client pain manifested itself in the form of client incidents, some (or many depending on who you talked to) of which were caused by software defects. To reduce mean time to repair (MTTR), the scrum teams began providing recurring support in the form of team member rotations to the client incident triage process. They focused on resolving the incidents that were easily correctable without software changes quickly and were also responsible for assigning any defects that evolved to the scrum team that was accountable for the root cause feature set.

The predictability of delivering value to our customers depends on a well-groomed backlog and on how well we define the Epic that enables that value. The Epic is defined by the common list of features that are changed or added as a result of the Epic objective. This list of features per Epic is used to assign the features to the accountable scrum teams, to elaborate the Feature modifications required for the Epic, to define dependencies, to perform Feature-to-Story decomposition, and to estimate story points.

“Why are we focusing our QA Automation efforts on an industry standard code coverage objective instead of focusing on defect hot spots and areas of code complexity? We need depth of coverage in targeted areas more than we need breadth of coverage for feature sets and features with minimal technical debt.”

Now let’s extend feature traceability to Quality Assurance (QA) scripts and to code files in the Software Version Management tool by denoting the QA scripts and code files associated with each feature. This enables the QA team members to plan based on the complexity of the feature changes to specific code files and to schedule the automated and manual testing that is necessary during each sprint. They can further verify this plan by relating the code file change reports produced in each of the build processes during the sprint to the corresponding features and Quality Assurance scripts. This enables QA feature testing to focus primarily (though not exclusively) on the specific and adjacent feature set deltas in each code build.

“Why are we using our least experienced scrum team members and contract resources to fix defects in our highest complexity code?”

Next let’s study our software asset by analyzing the cyclomatic complexity of the code files. This standard McCabe evaluation provides some insight into which code files required subject matter expertise and extra scrutiny when the corresponding features were scheduled for delta in sprint planning. These dependencies were discussed during sprint planning, annotated in the ALM tool and scheduled for early resolution in the sprint.

“Why are we doing this, why are we adding or changing this feature of the product?”

Next, the scrum teams were encouraged to ask the product managers and product owners to explain the product vision so they could include that information in their respective sprint goals and release goals.   The most important question to answer for the scrum teams was “why are we doing this, why are we adding or changing this feature of the product”? The answers were usually a rote response of “competitive response or competitive advantage”.

These recurring questions led the product management team to take a more proactive approach to answering this question and use the software asset feature list for quantitative and qualitative evaluation of competing and adjacent products. Our scrum team members were able to compare the specific feature sets for which they were accountable to the corresponding feature sets of competitive products. This was a knowledge accelerator for the scrum teams and most team members made it a priority to regularly assess these competitors for feature changes and shared this information during story grooming and sprint planning sessions.

Do we have a strategy for investment and are we executing it?

Over time, because we attached the feature annotation to all of the engineering cards in our ALM tool for our work on investments, enhancements, maintenance, and defect reparation, we accumulated a lot of good information.

For each portfolio investment category and each feature set and feature, we had a near real-time and continuous flow of information, such as effort expended, story point investment levels, and defect hot spots. All of these measurements could be correlated to investment strategy, code complexity, QA coverage (depth and breadth) and competitor assessment. This information mostly confirmed, but sometimes indicated contradictions in our portfolio planning.

We used a 3-6 month portfolio plan horizon to rationalize future scrum team feature re-alignment, and impact assessments for near term investment spending adjustments, and budget constraints. The value and sight distance of this planning horizon was directly proportional to how well groomed our backlog was at the time.

So, to summarize the business value we received from software asset feature management:

  • We initially used the feature list to define scrum team accountability. The features were related to code files in the software version management repository, and team-based access control assigned to the specific code files associated with each scrum team’s feature set ensured 100% accountability for all software changes to that feature set.
  • Sprint Planning based on code complexity assured that the proper level of subject matter expertise was applied to high complexity software deltas in the form of team members with the most knowledge validating the work of less knowledgeable team members, and applying the commensurate level of quality assurance effort, including increased depth of testing and more testing of adjacent features.
  • The focus of quality efforts per build based on the features and code files that changed provided the optimal use of the limited QA resources of time and effort (even automated testing takes time).
  • The competitive analysis information was new information to the scrum team members. It accelerated their knowledge of the product and made them active participants in continual market analysis.
  • The portfolio view of accurate information enabled fact-based decisions for WIP and increased the accuracy and sight distance of our planning horizon.

The tangible benefits to our clients included:

  • Much better results from our technical debt reduction program, and it got us out of the cycle of, according to our customers, “fixing a few problems and breaking something else”.
  • Most impactful was the renewed participation in our client beta test program and the willingness of the participants to express the value they received in terms of improved quality and feature improvements to other customers.
  • This was reflected in improved client reference-ability.

The benefits to our software development organization were:

  • Made our scrum teams much more knowledgeable of the software asset that they “own” in terms of complexity and feature value to the business.
  • Provided a common language and some standardized practices for all scrum teams that improved the time to productivity for new team members by providing the Epic-Feature-Story-Code-File-QA-Script traceability.
  • Enabled the scrum teams to understand the methods and level of effort required to produce zero defect software and made them realize that that was a realistic and achievable goal.

So in conclusion, having had this experience, we have agreed that each of these questions and approaches would be handled differently the next time.

“We’ve done the PxQ analysis and if we dedicate two resources from each team we can fix 700 defects in 9 months. We can do it in 6 months if we hire some contract resources”

Throwing money and resources at a quality problem will certainly fix many defects, but the incremental defect injection or leakage may go undetected.

“You are fixing a few problems and breaking something else”.

Believe the terrain: if your customers are telling you this, then it is true and you have a problem that needs to be analyzed and resolved. Please do not rationalize it, as we did, by telling yourself that “we are fixing far more defects than we inject”.

“Why are we focusing our QA Automation efforts on an industry standard code coverage objective instead of focusing on defect hot spots and areas of code complexity? We need depth of coverage in targeted areas more than we need breadth of coverage.”

Many of us have followed the rainbow trying to find the (mythical) 70% or 80% code coverage. Focus instead on increasing quality where it will be most impactful to your customers and business.

 “Why are we using our least experienced team members and contract resources to fix defects in our highest complexity code?”

This was thought to be the most cost effective means of fixing a large number of defects in a short time. It was also the primary source of “fixing a few problems and breaking something else”. Apply subject matter expertise commensurate with the level of complexity.

“Why are we doing this, why are we adding or changing this feature of the product?”

This is a non-engineering activity, but it proved to have the largest positive impact on our team cohesiveness and culture. Understanding our product’s relative position in the marketplace made the team members cognizant of the value of the features they were building.

Do we have a strategy for investment and are we executing it?

This is two questions. The first is an easy one to answer. A strategy statement is easy to find somewhere in most organizations. Having a method to evaluate strategy attainment requires thoughtful effort to achieve.

 

See you on the journey!

The post Agile Chronicles (Composite Stories) – Agile Artifacts – Ephemeral v. Enduring Value appeared first on LeadingAgile.

Categories: Blogs

Meet: Scrum 2.0



Feedback from first version incorporated.

More welcome!

Thanks

Categories: Blogs

Agile Bootcamp Talk Posted on Slideshare

Johanna Rothman - Tue, 08/12/2014 - 13:46

I posted my slides for my Agile 2014 talk, Agile Projects, Program & Portfolio Management: No Air Quotes Required on Slideshare. It’s a bootcamp talk, so the majority of the talk is making sure that people understand the basics about projects. Walk before you run. That part.

However, you can take projects and “scale” them to programs. I wish people wouldn’t use that terminology. Program management isn’t exactly scaling. Program management is when the strategic endeavor of the program encompasses each of the projects underneath.

If you have questions about the presentation, let me know. Happy to answer questions.

Categories: Blogs

SonarQube 4.4 in Screenshots

Sonar - Tue, 08/12/2014 - 11:29

The team is proud to announce the release of SonarQube 4.4, which includes many exciting new features:

  • Rules page
  • Component viewer
  • New Quality Gate widget
  • Improved multi-language support
  • Built-in web service API documentation

Rules page

With this version of SonarQube, rules come out of the shadow of profiles to stand on their own. Now you can search rules by language, tag, SQALE characteristic, severity, status (e.g. beta), and repository. Oh yes, and you can also search them by profile, activation, and profile inheritance.

Once you’ve found your rules, this is now where you activate or deactivate them in a profile – individually through controls on the rule detail or in bulk through controls in the search results list (look for the cogs). In fact, the profiles page no longer has its own list of rules. Instead, it offers a summary by severity, and a click-through to a rule search.

Another shift in rule handling comes for what used to be called “cloneable rules”. We’ve realized that strictly speaking, these are really “templates” rather than rules, and now treat them as such.

Templates can no longer be directly activated in a profile. Instead, you create rules from them and activate those.
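
For teams that want to drive the same searches from a script rather than the UI, a rules web service sits behind the new page. Below is a minimal sketch in Python, not a definitive reference: the /api/rules/search endpoint and the languages, tags and statuses parameter names are assumptions on my part, so double-check them against the built-in web service documentation described at the end of this post.

# Minimal sketch: query the rules web service behind the new Rules page.
# Endpoint and parameter names are assumptions -- confirm them against
# your server's built-in web service API documentation.
import requests

SONAR_URL = "http://localhost:9000"  # hypothetical server address

params = {
    "languages": "java",   # restrict to Java rules
    "tags": "security",    # restrict to rules tagged 'security'
    "statuses": "BETA",    # e.g. beta rules only
}

response = requests.get(SONAR_URL + "/api/rules/search", params=params)
response.raise_for_status()

for rule in response.json().get("rules", []):
    print(rule.get("key"), "-", rule.get("name"))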

Component viewer

The component viewer also experienced major changes in this version. The tabs across the top now offer filtering, which controls what parts of the code you see (e.g. only show me the code that has issues), and decoration, which controls what you see layered on top of the code (show/hide the issues, the duplications, etc.).

A workspace concept debuts in this version. As you navigate from file to file through either code coverage or duplications, it helps you track where you are and where you’ve been.

New Quality Gate widget

A new Quality Gate widget makes it clearer just what’s wrong if your project isn’t making the grade. Now you can see exactly which measures are out of line:

Improved multi-language support

Multi-language analysis was introduced in 4.2 and it just keeps getting better. Now we’ve added the distribution of LOC by language in the size widget for multi-language projects.

We’ve also added a language criterion to the Issues search:
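
The same criterion should be reachable from scripts as well. Here is a minimal sketch, again in Python, under the assumption that the /api/issues/search web service accepts a languages parameter mirroring the new UI filter; as with the rules example above, verify the exact parameter name in the built-in API documentation.

# Minimal sketch: restrict an issue search to a single language.
# The 'languages' parameter is assumed to mirror the new UI criterion --
# check the built-in web service documentation for the exact contract.
import requests

SONAR_URL = "http://localhost:9000"  # hypothetical server address

response = requests.get(
    SONAR_URL + "/api/issues/search",
    params={"languages": "java", "severities": "BLOCKER,CRITICAL"},
)
response.raise_for_status()

for issue in response.json().get("issues", []):
    print(issue.get("severity"), issue.get("component"), issue.get("message"))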

Built-in web service API documentation

To find this last feature, look closely at 4.4’s footer.

We now offer on-board API documentation.

That’s all, Folks!

Time now to download the new version and try it out. But don’t forget to read the installation or upgrade guide.

Categories: Open Source

If Everybody’s Happy, You’re Doing It Wrong

Agile Tools - Tue, 08/12/2014 - 08:18

So there you are, wrapping up another successful release planning session. Sprints are all laid out for the entire release. All the user stories you can think of have been defined. All the daunting challenges laid down. Compromises have been made. Dates committed to. Everyone contributed to the planning effort fully.

So why isn’t everyone happy? Let’s check in with the product owner: The product owner looks like somebody ran over his puppy. The team? They won’t make eye contact and they’re flinching like they’ve just spent hours playing Russian roulette. What’s up? Well, here’s the dynamic that typically plays out:

  • The product owner has some fantasy of what they think they will get delivered as part of the release. This fantasy has absolutely no basis in reality; it just reflects the product owner’s hopes for what he/she thinks they can get out of the team (it’s just human nature). This is inevitably far beyond what the team is actually capable of. My rule of thumb? A team is typically capable of delivering about 1/3 of what a product owner asks for in a release. That’s not based on any metrics; it’s just an observation. However, more often than not, it seems to play out that way.
  • The team is immediately confronted with a mountain of work they can’t possibly achieve in the time allotted – even under the most optimistic circumstances. It’s their job to shatter the dreams of the product owner. Of course, strangling dreams is hard work. Naturally enough, the product owner doesn’t give up easily. They fight tooth and nail to retain any semblance of their dream.
  • After an hour, perhaps two, maybe even three or four (shudder), the battle is over.

I’m going to go out on a limb here and speculate that this is no one’s idea of a positive dynamic. But it seems to happen pretty often with agile projects. It sure doesn’t look like much fun. I’m pretty sure this isn’t in the Agile Manifesto. So how do we avoid this kind of trauma?

  • The product owner needs to be a central part of the team. They need to live with the team, be passionate about the product, and witness what the team does daily. Fail to engage in any of this and a product owner loses touch with the work the team does and loses the ability to gauge their capabilities. Doing all of this is hard. There’s a reason that the product owner is the toughest job in Scrum.
  • The team needs to embrace their product owner as an equal member of the team. You have to let them in. Work together. Let go of the roles and focus on the work.
  • Prepare for the release planning in advance. There is no reason for it to be a rude surprise. Spend time grooming the backlog together. As a team.
  • Don’t cave to pressure from upper management. Behind every product owner is a slavering business with an insatiable desire for product. Ooh, did I just write that?

Release planning doesn’t have to be a nightmare. OK, in theory…


Filed under: Agile, Scrum, Teams Tagged: Agile, management, Planning, product management, Release Planning, software development
Categories: Blogs

Hierarchies remove scaling properties in Agile Software projects

Software Development Today - Vasco Duarte - Tue, 08/12/2014 - 06:00

There is a lot of interest in scaling Agile Software Development. And that is a good thing. Software projects of all sizes benefit from what we have learned over the years about Agile Software Development.

Many frameworks have been developed to help us implement Agile at scale. We have: SAFe, DAD, Large-scale Scrum, etc. I am also aware of other models for scaled Agile development in specific industries, and those efforts go beyond what the frameworks above discuss or tackle.

However, scaling as a problem is neither a software nor an Agile topic. Humanity has been scaling its activities for millennia, and very successfully at that. The Pyramids in Egypt, the Panama Canal in Central America, the immense railways all over the world, the Airbus A380, etc.

All of these scaling efforts share some commonalities with software and among each other, but they are also very different. I'd like to focus on one particular aspect of scaling that has a huge impact on software development: communication.

The key to scaling software development

We've all heard countless accounts of projects gone wrong because of a lack of (or inadequate, or just plain bad) communication. And typically, these problems grow with the size of the team. Communication is a major challenge in scaling any human endeavor, and especially one - like software - that so heavily depends on successful communication patterns.

In my own work in scaling software development I've focused on communication networks. In fact, I believe that scaling software development is first an exercise in understanding communication networks. Without understanding the existing and necessary communication networks in large projects, we will not be able to help those projects adapt. In many projects, a different approach is used: hierarchical management with strict (and non-adaptable) communication paths. This approach effectively reduces the adaptability and resilience of software projects.

Scaling software development is first and foremost an exercise in understanding communication networks.

Even if hierarchies can successfully scale projects where communication needs are known in advance (like building a railway network for example), hierarchies are very ineffective at handling adaptive communication needs. Hierarchies slow communication down to a manageable speed (manageable for those at the top), and reduce the amount of information transferred upwards (managers filter what is important - according to their own view).

In a software project those properties of hierarchy-bound communication networks restrict valuable information from reaching stakeholders. As a consequence one can say that hierarchies remove scaling properties from software development. Hierarchical communication networks restrict information reach without concern for those who would benefit from that information because the goal is to "streamline" communication so that it adheres to the hierarchy.

In software development, one must constantly map, develop and re-invent the communication networks to allow for the right information to reach the relevant stakeholders at all times. Hence, the role of project management in scaled agile projects is to curate communication networks: map, intervene, document, and experiment with communication networks by involving the stakeholders.

Scaling agile software development is - in its essential form - a work of developing and evolving communication networks.

A special thank you note to Esko Kilpi and Clay Shirky for the inspiration for this post through their writings on organizational patterns and value networks in organizations.

Picture credit: John Hammink, follow him on twitter

Categories: Blogs

Aggressive Decoupling of Scrum Teams

Leading Agile - Mike Cottmeyer - Tue, 08/12/2014 - 04:09

What does aggressive decoupling look like?

Last post I talked about the failure modes of Scrum and SAFe and how the inability to encapsulate the entire value stream will inevitably result in dependencies that will kill your agile organization.

But Mike… at some level of scale, you have to have dependencies? Even if we are able to form complete cross-functional feature teams, we may still have features which have to be coordinated across teams, or at least technology dependencies which make it tough to be fully independent.

But Mike… you talk about having teams formed around both features and components… in this case, it is inevitable that you are going to have dependencies between front-end and back-end systems. Whatever we build on the front end has to be supported on the back end.

What if…

What if you looked at each component, or service, or business capability as a product in and of itself? What if that product had a product owner guiding it as if it were a standalone product in its own right?

What if you looked at each feature that might possibly need to consume a component, or service, or business capability as the customer of said service, who had to convince the service to build on its behalf?

What if the component, service, or business capability team looked at each of the feature teams as their customer, and had the freedom to evolve its product independently to best satisfy the needs of all its customers?

What if the feature teams could only commit to market based on services that already existed in the services layer, and could never force services teams to commit based on a predetermined schedule?

What if feature teams could *maybe* commit to market based on services which were on the services team’s near-term roadmap, but did so at their own risk, with no guarantees from the service owner?

What if feature teams were not allowed to commit to market based on services that didn’t exist in the services layer, nor were on the near-term roadmap, eliminating the ability to inject features into the service?

I think…

I think you’d have a collection of Scrum teams… some Scrum teams that were built around features and some Scrum teams that were built around shared services and components… each being treated as its own independent product, building on its own cadence under the guidance of its own PO.

There would be no coordination between the feature teams and the services teams, because each set of teams would be evolving independently, but with a general awareness of each other’s needs. The services teams develop service features to best satisfy the collective needs of their feature team customers.

So…

I’m not suggesting this is something that most companies can go do today. There is some seriously intentional decoupling of value streams, technical architecture, business process, and org structure that has to happen before this model could be fully operational.

That said, if you want to have a fully agile, object oriented, value stream encapsulated organization, this is what it looks like. You not only have to organize around objects (features, services, components, business capabilities), but you have to decouple the dependencies and let them evolve independently.

The problems ALWAYS come in when you allow the front end to inject dependencies into the back-end shared services. You will inevitably create bottlenecks that have to be managed across the software development ecosystem. Dependencies are bad; bottlenecks might be worse.

If we can create Scrum teams around business objects, work to progressively decouple these business objects from each other, and allow the systems to only consume what’s in place now, and never allow the teams to dictate dependencies between each other… I think you have a shot.

Do this, and you really have agile at scale.

The post Aggressive Decoupling of Scrum Teams appeared first on LeadingAgile.

Categories: Blogs
