
Feed aggregator

LAST (Lean Agile Systems Thinking) 2014 Conference

Last week I had the pleasure of speaking at the LAST 2014 (Lean Agile Systems Thinking) conference. This is my second consecutive year of having the opportunity to speak at this popular Melbournian event. I have seen this event grow year after year: the first year we had 150 attendees, the second year 350, and the third year was even more successful with 450 people. The event is highly affordable and run by the Melbourne community. Some call this conference a “Meetup on Steroids”.

The two passionate people who successfully manage this event are Craig Brown and Ed Wong. Organizing an event at this scale, managing speakers, the schedule, sessions, and sponsors, is not a simple thing. The event ran so smoothly that I didn’t realize the day had already passed.

This is a classic example of the power of passion and community networks. You don’t need many people to make a positive difference to society; you just need one or two passionate givers.

The session was organized by TABAR.

I spoke about the 10 Irrefutable Laws of Agile Coaching. The presentation slides are available on Slideshare as well; feel free to download and share them.

My intent in sharing these ideas was to encourage Agile coaches to think beyond Scrum, Lean, XP, etc. Agile coaching requires broader systems knowledge to succeed.

More details about my session: Agile coaching is one of the most sought-after skills in the IT industry, and many experienced coaches are doing extremely well. However, some change agents are struggling to make an impact, not because they don't know Agile, but because they don't know some of the ground rules for coaching teams and leaders.

Whether you are a novice or an experienced coach, there are irrefutable laws governing Agile coaches. Based on my personal experience coaching teams and leaders over the last several years, I have come to realize these 10 secrets. Irrespective of where you are in your journey as an Agile coach, practicing these 10 laws will help you become successful. These handy rules can help you anywhere in your Agile coaching journey.

Categories: Blogs

Fearless Speaking

J.D. Meier's Blog - 2 hours 11 min ago

“Do one thing every day that scares you.” ― Eleanor Roosevelt

I did a deep dive book review.

This time, I reviewed Fearless Speaking.

The book is more than meets the eye.

It’s actually a wealth of personal development skills at your fingertips and it’s a powerful way to grow your personal leadership skills.

In fact, there are almost fifty exercises throughout the book.

Here’s an example of one of the techniques …

Spotlight Technique #1

When you’re overly nervous and anxious as a public speaker, you place yourself in a ‘third degree’ spotlight. That’s the name for the harsh, bright light police detectives used in days gone by to ‘sweat’ a suspect and elicit a confession. An interrogation room was always otherwise dimly lit, so the light trained on the person (who was usually forced to sit in a hard, straight-backed chair) was unrelenting.

This spotlight is always harsh, hot, and uncomfortable – and the truth is, you voluntarily train it on yourself by believing your audience is unforgiving.  The larger the audience, the more likely you believe that to be true.

So here’s a technique to get out from under this hot spotlight that you’re imagining so vividly: turn it around! Visualize swiveling the spotlight so it’s aimed at your audience instead of you. After all, aren’t you supposed to illuminate your listeners? You don’t want to leave them in the dark, do you?

There’s no doubt that it’s cooler and much more comfortable when you’re out from under that harsh light. The added benefit is that now the light is shining on your listeners – without question the most important people in the room or auditorium!

I like that there are so many exercises and techniques to choose from.   Many of them don’t fit my style, but there were several that exposed me to new ways of thinking and new ideas to try.

And what’s especially great is knowing that these exercises come from professional actors and speakers – it’s like having an insider’s guide at your fingertips.

My book review of Fearless Speaking includes a list of all the exercises, the chapters at a glance, key features from the book, and a few of my favorite highlights (sort of like a movie trailer for the book).

You Might Also Like

7 Habits of Highly Effective People at a Glance

347 Personal Effectiveness Articles to Help You Change Your Game

Effectiveness Blog Post Roundup

Categories: Blogs

An Open Letter to Executives Leading Agile Transformations

Agile Management Blog - VersionOne - 4 hours 10 min ago

Dear Executive,

Let me congratulate you on your decision to introduce agile methods within your organization. It is a wise decision that holds incredible potential for your employees, your company, and especially your customers. If you are just beginning your improvement, or are yet to begin, the journey upon which you are about to embark is one that will be well worth the effort. And it will take effort—long, arduous, and at times frustrating effort.

Although Machiavellians do exist, my experience is that they are exceedingly rare.  In general, people are good, honest, and hard-working and really want to do the right thing. We hold a desire to do our jobs well, be recognized for it, and make a difference in the world by being part of something larger than ourselves; to have a purpose at work, if you will. To this end, we will do what is necessary to get our work done in the manner that our environment best supports. Put more simply, we will take the path of least resistance and complexity. Your move toward agility may be more challenging than necessary if you don’t keep this in mind while traversing your path toward improvement.

Rather than simply introducing and mandating agile methods such as Scrum, eXtreme Programming (XP), and/or Kanban, create an organizational environment where agility is the path of least resistance for your employees and colleagues to get their work done. Here is a ten-step plan of action to help you create that environment within your organization.

  1. Stop referring to the change as organizational and/or agile transformation. We’ve all heard and understand the cliché that “change is the only constant.” Using phrases like agile transformation can shoot fear of the unknown into the psyche of everyone as it screams massive change. A less scary word is improvement. We all like to improve. Start talking about improving delivery, increasing customer engagement, and enhancing responsiveness to new challenges.
  2. Restructure your organization to reduce emphasis on functional specialization. One of the factors strongly contributing to the slow responsiveness of waterfall is the typical hand-off of work as it passes from one functional group to another—Business Analysts for requirements, to Architects for design, to Developers for build, etc. Create teams that are cross-functional and require little to no hand-off of work. If so desired, create Centers of Excellence (CoE) organized around professional career ladders, but remove reporting ties to any functional manager in the CoE.
  3. Start demonstrating the behavior you desire in your organization. One of the most powerful ways to build momentum is to first change yourself and your behavior. You really can’t force change within others, but you can inspire it. It’s amazing the change you’ll see in others when you first seek to change yourself. Actively seek ways you can serve the teams within your organization. With genuine curiosity, caring, and empathy, ask them what gives them headaches in their jobs. Be the aspirin for these headaches and remove the obstacles that are getting in their way. You might also consider working cross-functionally as a leadership team. Break down functional silos by creating teams that consist of representatives from many departments such as sales, marketing, IT, support, etc. The gears in any machine only work if the cogs make contact and work collectively.
  4. For technology projects, refuse to move forward on any project without business representation and team involvement. Be uncompromising about beginning any project that does not have business representation. I was recently asked what I thought was the one thing that, if it did not exist, you could not consider yourself to be agile. This is that one thing—business involvement. If it’s not important enough to warrant business involvement, it’s not important enough to work on. Thank you, Samar Elatta, for this reminder because it’s so obvious it can easily be overlooked.
  5. Completely drop discussions of “resource allocation” and speak only of “team availability”. Agile is a team sport. The focus needs to shift from individuals to teams. Instead of identifying individuals within a functional specialty and their percentage of availability to do work, seek out only complete teams with availability. Also, don’t begin work with incomplete teams. Have all of the requisite skill sets available so you can avoid slowing a team down with the burden of bringing a new member up to speed mid-project.
  6. Increase your delivery cadence. The ideal situation is to be able to deliver at any time (or all the time, as in continuous). Take incremental steps toward this goal by reducing the delivery window by at least half. If you’re on a semi-annual (6-month) delivery cycle reduce it to quarterly. If it’s annual, reduce it to semi-annual; quarterly, release every six weeks. This will automatically require teams to focus on smaller increments of work instead of looking at very long horizon delivery windows that only serve to increase estimating complexity and uncertainty about scope of delivery.
  7. Give high praise for “complete” features only. The only features that provide customer value are those that are complete. Almost done features may have future value potential but they have zero value right now. Consider work either done or not done. There is no credit for work that is not 100% complete. Define expectations very clearly for what is meant by “complete”.
  8. Recognize entire teams. The best way to support teamwork is to value and recognize teamwork, especially in a public setting. Praise entire teams and avoid the temptation to call out individuals on the team who did an excellent job. While it’s true that some of them may well have gone above and beyond, it’s dangerous to the cohesiveness of the team if you praise an individual. If you must mention names, mention the names of all individual members of the team. Only the team (those on the frontline of value delivery) has the credibility to recognize individual team members for their efforts.
  9. Don’t mandate a methodology at the team level. Rather than requiring teams to do Scrum, or any other methodology framework, put the ingredients in place for agility such as those described in points 1-8 above, provide and demonstrate your personal support and commitment to the removal of organizational obstacles and dysfunction, and stand out of the way. These people are professionals; they are smart and very capable. If not, why are they working in your organization? They will do what’s required to get the results expected of them, sometimes regardless of whether those results are even realistic. This is autonomy, one of the incubators of the only form of motivation that is effective: intrinsic motivation.
  10. Invest in your workforce and your teams and they will invest in you. I do not tolerate tales of woe about organizations experiencing a lack of worker loyalty when those same organizations refuse to invest in their workforce. When times get tough and the C-suite is under pressure to show near-term stock price growth, drastic measures are taken to reduce costs. Unfortunately, these short-term measures such as workforce reductions, slashing benefits, and cutting investment in training usually have a negative long-term effect. Make decisions that will positively impact the long-term value of both your stock price and your organization (both its viability and its people). If you’ve created an unsustainable cost structure that requires drastic measures, then you obviously need to take some drastic measures. If you and your management team created it, admit it, because your people probably already know it. They have great “BS” meters and will see right through any attempt you or your leadership team make at masking it. Demonstrating humanity through vulnerability and admitting personal fault may feel terrible, but it will increase your credibility in the eyes of your constituents by orders of magnitude. If you create an environment that inspires workers to give their all, and run a company that gives to the community and demonstrates that it values its workforce by investing in them, you and your organization will have a much greater chance of competing in the marketplace for a long time to come.

I opened this letter by expressing my belief that people are generally good, honest, and want to do the right thing. I close this letter by restating that this belief I hold has only been reinforced through my personal interaction with a diverse group of colleagues throughout many organizations spanning a long career. I would like to affirm that this belief also extends to you, the organizational leader. I believe that you genuinely want to do the right thing, to contribute to something larger than yourself, and to support those around you to improve the quality of life for each of us.

With high confidence in your ability, personal desire, and integrity I urge you to create an environment within your organization that inspires those around you and that enables agility. The future of your organization is depending on it.

Categories: Companies

R: ggplot – Plotting back to back charts using facet_wrap

Mark Needham - Fri, 07/25/2014 - 23:57

Earlier in the week I showed a way to plot back-to-back charts using R’s ggplot library, but looking back on the code it felt a bit hacky to ‘glue’ two charts together using a grid.
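For context, that ‘gluing’ approach looked roughly like the sketch below. This is illustrative only: it uses gridExtra::grid.arrange to stack two separately built plots, whereas the original post worked with raw grid viewports, and it assumes the allRSVPs data frame introduced later in this post.

```r
# A sketch of the chart-gluing approach this post replaces: build the
# 'yes' and 'no' histograms as two separate plots, then stack them.
# (Illustrative only -- the original post used raw grid viewports.)
library(ggplot2)
library(gridExtra)

yesPlot <- ggplot(subset(allRSVPs, response == "yes"), aes(x = difference)) +
  geom_bar(binwidth = 1, fill = "#00FF00")

noPlot <- ggplot(subset(allRSVPs, response == "no"), aes(x = difference)) +
  geom_bar(binwidth = 1, fill = "#FF0000")

# Stack the two independent plots into one column
grid.arrange(yesPlot, noPlot, nrow = 2)
```

The drawback is that the two plots share no axes or scales, which is exactly the problem facet_wrap solves below.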

I wanted to find a better way.

To recap, I came up with the following charts showing the RSVPs to Neo4j London meetup events using this code:

(Chart: back-to-back ‘yes’ and ‘no’ RSVP charts for Neo4j London meetup events)

The first thing we need to do to simplify chart generation is to return ‘yes’ and ‘no’ responses in the same cypher query, like so:

timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
query = "MATCH (e:Event)<-[:TO]-(response {response: 'yes'})
         WITH e, COLLECT(response) AS yeses
         MATCH (e)<-[:TO]-(response {response: 'no'})<-[:NEXT]-()
         WITH e, COLLECT(response) + yeses AS responses
         UNWIND responses AS response
         RETURN response.time AS time, e.time + e.utc_offset AS eventTime, response.response AS response"
allRSVPs = cypher(graph, query)
allRSVPs$time = timestampToDate(allRSVPs$time)
allRSVPs$eventTime = timestampToDate(allRSVPs$eventTime)
allRSVPs$difference = as.numeric(allRSVPs$eventTime - allRSVPs$time, units="days")

The query is a bit convoluted because we want to capture the ‘no’ responses from people who initially said yes, which is why we check for a ‘NEXT’ relationship when looking for the negative responses.

Let’s inspect allRSVPs:

> allRSVPs[1:10,]
                  time           eventTime response difference
1  2014-06-13 21:49:20 2014-07-22 18:30:00       no   38.86157
2  2014-07-02 22:24:06 2014-07-22 18:30:00      yes   19.83743
3  2014-05-23 23:46:02 2014-07-22 18:30:00      yes   59.78053
4  2014-06-23 21:07:11 2014-07-22 18:30:00      yes   28.89084
5  2014-06-06 15:09:29 2014-07-22 18:30:00      yes   46.13925
6  2014-05-31 13:03:09 2014-07-22 18:30:00      yes   52.22698
7  2014-05-23 23:46:02 2014-07-22 18:30:00      yes   59.78053
8  2014-07-02 12:28:22 2014-07-22 18:30:00      yes   20.25113
9  2014-06-30 23:44:39 2014-07-22 18:30:00      yes   21.78149
10 2014-06-06 15:35:53 2014-07-22 18:30:00      yes   46.12091

We’ve returned the actual response with each row so that we can distinguish between responses. It will also come in useful for pivoting our single chart later on.

The next step is to get ggplot to generate our side by side charts. I started off by plotting both types of response on the same chart:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  geom_bar(binwidth=1)

(Chart: ‘yes’ and ‘no’ responses stacked in a single histogram)

This one stacks the ‘yes’ and ‘no’ responses on top of each other which isn’t what we want as it’s difficult to compare the two.

What we need is the facet_wrap function which allows us to generate multiple charts grouped by key. We’ll group by ‘response’:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  geom_bar(binwidth=1) + 
  facet_wrap(~ response, nrow=2, ncol=1)

(Chart: responses split into two facets, one per response type)

The only thing we’re missing now is the red and green colours, which is where the scale_fill_manual function comes in handy:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response, nrow=2, ncol=1)

(Chart: faceted responses coloured red and green)

If we want to show the ‘yes’ chart on top we can pass in an extra parameter to facet_wrap to change where it places the highest value:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response, nrow=2, ncol=1, as.table = FALSE)

(Chart: faceted responses with the ‘yes’ chart on top)

We could go one step further and group by response and day. First let’s add a ‘day’ column to our data frame:

allRSVPs$dayOfWeek = format(allRSVPs$eventTime, "%A")

And now let’s plot the charts using both columns:

ggplot(allRSVPs, aes(x = difference, fill=response)) + 
  scale_fill_manual(values=c("#FF0000", "#00FF00")) + 
  geom_bar(binwidth=1) +
  facet_wrap(~ response + dayOfWeek, as.table = FALSE)

(Chart: responses faceted by response type and day of week)

The distribution of dropouts looks fairly similar for all the days – Thursday is just an order of magnitude below the other days because we haven’t run many events on Thursdays so far.

At a glance it doesn’t appear that as many people sign up for Thursday events on the day itself or the day before.

One potential hypothesis is that people have things planned for Thursday whereas they decide more last minute what to do on the other days.

We’ll have to run some more events on Thursdays to see whether that trend holds up.

The code is on github if you want to play with it.

Categories: Blogs

Welcome to SAFe 3.0!

Agile Product Owner - Fri, 07/25/2014 - 22:53

Hello,

Well, it seems like it’s been a long time coming, but we are happy to announce the release of SAFe 3.0, right on time! This latest version features extensive refinements to many elements of the methodology infrastructure, as well as new content and guidance that helps enterprises better organize around value delivery, and improve coordination of large value streams.

With a primary focus on a substantially improved representation of the Portfolio level and organizational structure optimized for better flow of value, highlights of changes in this new version include:

In addition, almost all articles have been updated for additional clarity and a better fit in the 3.0 context. (I hope not to have to do that again for some time … ). And for those who would like an informal introduction, our colleague and SPC Inbar Oren produced this five-minute overview.

It is our sincere hope that this new version helps you and your enterprise achieve the benefits you all deserve for working so hard at building the world’s biggest, and best, software-intensive systems. And as always, SAFe remains free and publicly available for all to use.

————————-

Of course, with every new release, you are never done. The end of one thing is the beginning of another, and I’d guess some of you look forward to future versions. At this moment, I’m not so sure I do, but I’m sure I will feel differently next week. (Or maybe the week after.)

This has certainly been a group effort amongst the authors (see below), but we also owe a special thanks to all the great folks at Scaled Agile, Inc. who helped us push out this release, including Regina Cleveland, Matt Clinton, and Vanessa Villarreal.

 

Regards,

-Dean Leffingwell, Chief Methodologist, Alex Yakyma, Associate Methodologist, and Richard Knaster, Scaled Agile Thought Leader and SAFe Principal Contributor

Categories: Blogs

Assembla now allows automatic payments with PayPal

Assembla Blog - Fri, 07/25/2014 - 20:17

Paying for your Assembla subscription with PayPal has never been easier. We recently added the ability to set up recurring payments with PayPal that will automatically pay for your Assembla subscription every billing period, whether that be monthly or annually. Previously, it was a manual process that required logging in and paying every time an invoice was created.

To set up automatic payments with PayPal, visit your billing page > select the PayPal option > and follow the steps.

assembla paypal option1

If you have any questions or issues, please contact Assembla support at support@assembla.com.

Categories: Companies

Marketing scrum vs IT scrum - a report published and presented at agile 2014

Xebia Blog - Fri, 07/25/2014 - 18:49

As we know, Scrum is the perfect framework for IT / software development projects to learn, adapt to change and deliver great software of value, faster.

But is Scrum also usable outside of software development? Can we apply similar, or maybe even the same, principles in other departments in the enterprise?

Yes, we can! And yes, there are differences, but there are also a lot of similarities.

We (Remco and I) successfully implemented Scrum in the marketing departments of two large companies: the Dutch AAA and ING Bank. Both companies are now using Scrum for the development of new campaigns, their full commercial expressions, and even at the product development level. They wanted a faster time to market, more ownership, and greater innovation. How did we approach and realize a transition with those goals in a marketing environment? And what are the results?

So when we are not delivering software but other things, how does Scrum change? Well, a great deal actually. The people working in these other departments are, in general, quite different from those in software development (and yes, more than you would expect). This means coaches or change agents need to take a different approach.

Since the people are different, it is possible to go faster or ‘deeper’ in certain areas. Entrepreneurial skills and ambitions are more present in marketing. This gives a sense of ‘act first, apologize later’, taking ownership, a higher drive to succeed, and upfront, willing behavior. Scrumming here means thinking more about business goals and KPIs (how to go from department goals to scrum team goals, for example). After that the fun begins…

I will be speaking about this topic at Agile 2014. A great honor, of course, to be standing there. I will also attend the conference and will therefore try to post some updates here.

To read more about this topic you can read my publication about marketing scrum. It includes the extensive research paper I published about this story. Please feel free to send me comments and questions, either about Agile 2014 or the paper.

 

Enjoy reading the paper:

Marketing scrum vs IT scrum – two marketing case studies who now ‘act first and apologize later’

 

Categories: Companies

The Drag of Old Mental Models on Innovation and Change

J.D. Meier's Blog - Fri, 07/25/2014 - 18:06

“Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.” — Howard Aiken

It's not a lack of risk taking that holds innovation and change back. 

Even big companies take big risks all the time.

The real barrier to innovation and change is the drag of old mental models.

People end up emotionally invested in their ideas, or they are limited by their beliefs or their world views.  They can't see what's possible with the lens they look through, or fear and doubt hold them back.  In some cases, it's even learned helplessness.

In the book The Future of Management, Gary Hamel shares some great insight into what holds people and companies back from innovation and change.

Yesterday’s Heresies are Tomorrow’s Dogmas

Yesterday's ideas that were profoundly at odds with what is generally accepted, eventually become the norm, and then eventually become a belief system that is tough to change.

Via The Future of Management:

“Innovators are, by nature, contrarians.  Trouble is, yesterday's heresies often become tomorrow's dogmas, and when they do, innovation stalls and the growth curve flattens out.”

Deeply Held Beliefs are the Real Barrier to Strategic Innovation

Success turns beliefs into barriers by cementing ideas that become inflexible to change.

Via The Future of Management:

“... the real barrier to strategic innovation is more than denial -- it's a matrix of deeply held beliefs about the inherent superiority of a business model, beliefs that have been validated by millions of customers; beliefs that have been enshrined in physical infrastructure and operating handbooks; beliefs that have hardened into religious convictions; beliefs that are held so strongly, that nonconforming ideas seldom get considered, and when they do, rarely get more than grudging support.”

It's Not a Lack of Risk Taking that Holds Innovation Back

Big companies take big risks every day.  But the risks are scoped and constrained by old beliefs and the way things have always been done.

Via The Future of Management:

“Contrary to popular mythology, the thing that most impedes innovation in large companies is not a lack of risk taking.  Big companies take big, and often imprudent, risks every day.  The real brake on innovation is the drag of old mental models.  Long-serving executives often have a big chunk of their emotional capital invested in the existing strategy.  This is particularly true for company founders.  While many start out as contrarians, success often turns them into cardinals who feel compelled to defend the one true faith.  It's hard for founders to credit ideas that threaten the foundations of the business models they invented.  Understanding this, employees lower down self-edit their ideas, knowing that anything too far adrift from conventional thinking won't win support from the top.  As a result, the scope of innovation narrows, the risk of getting blindsided goes up, and the company's young contrarians start looking for opportunities elsewhere.”

Legacy Beliefs are a Much Bigger Liability When It Comes to Innovation

When you want to change the world, sometimes it takes a new view, and existing world views get in the way.

Via The Future of Management:

“When it comes to innovation, a company's legacy beliefs are a much bigger liability than its legacy costs.  Yet in my experience, few companies have a systematic process for challenging deeply held strategic assumptions.  Few have taken bold steps to open up their strategy process to contrarian points of view.  Few explicitly encourage disruptive innovation.  Worse, it's usually senior executives, with their doctrinaire views, who get to decide which ideas go forward and which get spiked.  This must change.”

What you see, or can’t see, changes everything.

You Might Also Like

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Who’s Managing Your Company

Categories: Blogs

The 4-Step Action Plan for Agile Health: Step 1. Understand “Legacy Mindset” and “Non-Lean” behaviors to move away from

Agile Management Blog - VersionOne - Fri, 07/25/2014 - 16:59

Agile development requires a cross-functional, self-organized team to deliver potentially shippable software at the end of each sprint, with analysis, design, code development, and testing activities going on concurrently (not sequentially) within each sprint. Combining Agile/Scrum development with some lean methods is also becoming popular (the so-called “Scrumban” methods). These methods emphasize reducing Work-in-Process (WIP), reducing feature cycle time, and increasing throughput (feature completion rate).

In my blog on From Agile Pathologies to Agile Health I explained that some “agile” teams suffer from the following common pathologies, i.e., major dysfunctions in practicing agile methods, while claiming that they are “doing agile”:

  1. Agile development “sprints” assigned to software development lifecycle phases: analysis, design, code development, testing and defect fixing, delivery and deployment activities are carried out in sequential “sprints.”
  2. Agile development “sprints” assigned to technology tiers: user interface tier, business logic tier and data management tier work are carried out in sequential “sprints.”
  3. Mini-Waterfall inside sprints: for example, in a 4-week sprint, week 1 is used for analysis and design, week 2 for code development, week 3 for testing, and week 4 for defect fixing; i.e., these activities are sequential, creating a mini-waterfall within the sprint rather than running concurrently as true agile development requires.
  4. Water-Scrum-Fall: Although design, code development, testing and defect fixing go on concurrently in each “sprint,” there is a fairly long Requirements Analysis phase carried out upfront prior to the first sprint by a group of business analysts producing a comprehensive Requirements Specification document; and at the end of the last sprint, IT Operations staff spends a fair amount of time in deploying the system in a production environment, often finding problems that are expensive to fix.

While an agile team may not exhibit the gross dysfunctions (pathologies) listed above, it may still behave in harmful or unhealthy ways that prevent it from realizing the benefits of agile development, such as higher productivity, throughput, and quality. Absence of major pathologies or sickness doesn’t imply health; agile teams may still be unhealthy due to one or more harmful behaviors.

In this 4-part blog series, I focus on 6 harmful behaviors exhibited by agile teams, how to move away from them, and how to transition to 6 healthy agile-lean practices in order to get more benefits (improved productivity, throughput, and quality). I also present 4 specific steps for transitioning to healthy agile-lean practices. Table 1 summarizes these 4 steps, and labels the 6 harmful behaviors (1B through 6B) and the 6 healthy agile-lean practices (1P through 6P) for ease of cross-referencing. Table 1 also indicates what is covered in each of the 4 parts of the blog series. In this Part 1 (highlighted in pink), Step 1 – understanding 2 “Legacy Mindset” behaviors (1B and 2B) and 2 “non-lean” behaviors (3B and 4B) – is described. Parts 2 to 4 will be presented in the future.

R_Part1_Table1

In this blog series, I use as an example an agile Team A consisting of a business analyst, 2 full-time developers and 2 half-time developers, 1 full-time QA tester and 2 half-time QA testers, a Product Owner, and a ScrumMaster. The business analyst works with the product owner to write and clarify feature specifications as part of the analysis tasks. As you will see shortly, Team A exhibits all 6 harmful behaviors (1B to 6B) listed in Table 1; it is our prototypical “Struggling Agile Team.”

In this blog series, I use an example of a typical 4-week (20-workday) sprint. Figure 1 illustrates a Sprint History Map of this 4-week sprint for the Struggling Agile Team after the sprint is over; it is not a sprint plan projecting what will happen on each of those 20 days, which would be a total anti-pattern for agile development. In Part 4 I will explain how to get the information to construct a Sprint History Map as a visual metric for gaining critical insights that you can act on. The 20 workdays of the sprint are shown as columns numbered 1 through 20 in Figure 1.

The Struggling Agile Team completed sprint planning on Day 1, during which the Product Owner rank-ordered the sprint backlog of features.  In our example, the sprint backlog has 8 rank-ordered features (also called “stories”), shown under the backlog column in Figure 1.  During sprint planning, the Struggling Agile Team also estimated the relative effort of the features in story points.  For example, Feature 1 is 1 story point (indicated by “1, 1”), Feature 2 is 3 story points (indicated by “2, 3”), and Feature 8 is 1 story point (indicated by “8, 1”).  In this example the Struggling Agile Team planned 8 rank-ordered features representing a total effort of 16 story points.  Each feature was broken down into its implementation tasks either during sprint planning or during actual sprint execution.  These tasks are indicated in the legend at the bottom of Figure 1.

[Figure 1]

Figure 1:  Sprint History Map of the Struggling Agile Team suffering from
“Legacy Mindset” and “Non-Lean” behaviors

Two harmful “legacy mindset” behaviors to avoid:  I call these 2 behaviors “legacy mindset” behaviors because they arise primarily from legacy habits and practices of the old waterfall days, a silo mindset, and resistance to the cultural change needed for agile adoption to succeed.  These behaviors create many difficulties in agile adoption and prevent teams from getting the full benefits of agile development.

1B. Multiplexing and Multitasking is common:  Members are tasked by management to work on multiple concurrent teams of the same project or even different projects.  The behavior is rooted in the traditional project management mindset of treating people as resources and attempting to maximize their utilization by assigning them to multiple concurrent projects or teams.  This behavior is further driven by the fact that most people are narrow specialists; they are assigned to do work in their specialization on multiple teams or projects, as there is usually a shortage of other people to do that kind of work.

It has been reported that a team member working on two concurrent teams or projects loses 20% productivity due to frequent context-switching.  A member working on 3 or 4 concurrent teams or projects may lose 40% or 60% of productivity respectively, due to frequent context switching (Reference: Quality Software Management: Systems Thinking, Gerald Weinberg, 1991).

Multitasking behavior is also common.  Team members work on several tasks of different features of a sprint that match their skills, often concurrently (frequently switching from one task to another), or they may be pulled away for production support work or customer demos.   Their work gets frequently interrupted by meetings, phone calls, and people stopping by for discussions, as well as by the self-imposed urge to check and respond to emails and instant messages incessantly; in general, there is a lack of self-discipline to focus on one task at a time.   Like multiplexing, multitasking also diminishes productivity.  Unlike CPUs, humans are not well suited for multiplexed or multitasked work.  Unfortunately, the trend is toward extreme multitasking: doing work, web surfing, tweeting, “Liking” on Facebook, talking on the phone, replying to emails and instant messages, and eating…all concurrently.   Single-minded focus means “while eating, only eat, with single-minded focus on enjoying the food.”

Thus in reality, the Struggling Agile Team has at best 2.8 (not 3) equivalent developers from its 2 full-time and 2 half-time developers, and 1.8 (not 2) equivalent testers from its 1 full-time and 2 half-time testers.  The team members’ effectiveness and productivity are further reduced in proportion to the degree of multitasking they may indulge in.
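The equivalent-capacity arithmetic behind these numbers can be sketched in a few lines of Python, using the context-switching losses cited above (20%/40%/60% for 2/3/4 concurrent teams) and the team composition from this example:

```python
# Context-switching productivity loss by number of concurrent teams,
# per the figures cited above from Weinberg: 2 teams -> 20%, 3 -> 40%, 4 -> 60%.
LOSS = {1: 0.0, 2: 0.20, 3: 0.40, 4: 0.60}

def effective_capacity(members):
    """members: list of (allocation to this team, number of concurrent teams)."""
    return sum(alloc * (1 - LOSS[teams]) for alloc, teams in members)

# 2 full-time developers, plus 2 half-time developers each split across 2 teams.
developers = [(1.0, 1), (1.0, 1), (0.5, 2), (0.5, 2)]
# 1 full-time tester, plus 2 half-time testers each split across 2 teams.
testers = [(1.0, 1), (0.5, 2), (0.5, 2)]

print(round(effective_capacity(developers), 2))  # 2.8 equivalent developers
print(round(effective_capacity(testers), 2))     # 1.8 equivalent testers
```

The half-time members contribute 0.5 × 0.8 = 0.4 equivalents each, which is where the “2.8, not 3” figure comes from.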

2B. Team members with a silo mindset develop features in a “micro waterfall” manner:  Each member works only on specialized tasks suited to his/her expertise or interests.   A departmental silo mindset is common: a developer is reluctant to test code written by other developers, a tester is not skilled enough to write automated acceptance tests, an architect is reluctant to get down into the trenches and write code, a technical writer rarely does usability testing, and so on.

A team is also likely to develop each feature in a sequential “micro waterfall,” performing the implementation tasks (analysis, design, code development and unit testing, acceptance test case development, acceptance test case execution, defect fixing, etc.) within the sprint timebox in linear order.  Although the Sprint History Map in Figure 1 shows only three micro waterfalls (for Features 1 through 3) for brevity, all 8 features are developed in a micro waterfall manner.    In a micro waterfall there are very few concurrent activities.  As a result, in the first half of a sprint, most developers of the Struggling Agile Team are too busy, while testers believe they don’t have much work to do until the developers have completed all their work.  This picture reverses in the second half of a sprint: testers get too busy and developers don’t have much to do.

Two harmful non-lean behaviors to avoid: I call these 2 behaviors “non-lean” behaviors because they arise from not understanding the relevance and importance of key lean principles and practices.  Just like the “legacy mindset” behaviors explained above, these non-lean behaviors create many difficulties in agile adoption and prevent teams from getting the benefits of agile development.

3B. Team members start new work without any focus on completing the work already started: Most team members eagerly start working on new features and tasks of personal interest, with almost total disregard for features and tasks that were already started and are still in progress (Work in Process).  There is no focus on completing the features and tasks they have already started.   After all, starting new features is more exciting than fixing defects in features that were already started.  The less exciting work of completing features already started and getting them accepted by the product owner gets short shrift.  Similarly, due to multitasking habits, starting a new, exciting task within a feature is much more tempting, leaving many “boring” tasks unfinished until the very end of the sprint timebox.

As shown in the Sprint History Map of the Struggling Agile Team (see Figure 1), the Work in Process (WIP) on Days 2 through 19 (18 days in total) for the 4 completed features (Features 1 through 4) is 4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 2, for an average WIP of (68/18) ≈ 3.78.  However, the WIP on Days 2 through 19 for all 8 features is 3, 4, 4, 6, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 6, and the average WIP shoots up to (123/18) ≈ 6.83, i.e., almost 7 features (out of 8 in the backlog) being worked on by the team in parallel!   The Struggling Agile Team does not have the required focus on completing the work in process before starting new work.  As a result, at the end of the sprint, only 4 out of 8 features were completed by the team and accepted by the product owner.   Sadly, this Struggling Agile Team claimed that the remaining 4 features (Features 5 through 8) were “almost done,” while in reality they took considerably more time in the next sprint to complete.  With proper coaching and experience a team learns that a feature is either 100% Done or Not Done; there is no such thing as “Almost Done” or “98% Done.”  It is binary.
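The WIP averages are simple arithmetic over the daily counts read off the Sprint History Map; a short Python sketch using the daily values quoted in this example:

```python
# Daily WIP counts for Days 2 through 19, read off the Sprint History Map.
wip_completed = [4, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 2]  # Features 1-4 only
wip_all = [3, 4, 4, 6, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 6]        # all 8 features

def average(values):
    return sum(values) / len(values)

print(sum(wip_completed), round(average(wip_completed), 2))  # 68 3.78
print(sum(wip_all), round(average(wip_all), 2))              # 123 6.83
```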

4B. Work within a sprint proceeds oblivious to feature cycle time:  Feature cycle time (“cycle time” for short) is the total end-to-end time from when team members pull a feature from the sprint backlog to when the feature passes its acceptance tests and is accepted by the product owner.   In Figure 1, the cycle time for Feature 3 is 18 days, i.e., Days 2 through 19 of the 4-week sprint.  The cycle times for the 4 completed features (Features 1 through 4) are 16, 17, 18 and 17 days, for an average cycle time of (68/4) = 17 days.  The Struggling Agile Team is unaware of the importance of reducing average cycle time, and could not complete 4 features within the sprint timebox due to their long cycle times.

I will now use the well-known Little’s Law to calculate the average throughput (rate of feature completion and acceptance by product owner) of the Struggling Agile Team.

Average Cycle Time = Average WIP / Average Throughput, or alternatively

Average Throughput = Average WIP / Average Cycle time

In this example, Average WIP = 68/18 (see Behavior 3B above), and Average Cycle Time = 68/4 (as explained just above).

Therefore, Average Throughput = [(68/18) / (68/4)] = 4/18 features per day.

You can use the Sprint History Map (Figure 1) to visually verify that the team delivered 4 accepted features (Features 1 through 4) in the 18 actual workdays of the sprint, confirming the average throughput of 4/18 features per day.
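The arithmetic is easy to check; a few lines of Python using the numbers above:

```python
# Little's Law applied to the Struggling Agile Team's completed features.
avg_wip = 68 / 18         # average WIP over Days 2-19 (Features 1-4)
avg_cycle_time = 68 / 4   # (16 + 17 + 18 + 17) / 4 days

avg_throughput = avg_wip / avg_cycle_time  # Little's Law, rearranged
print(round(avg_throughput, 3))       # 0.222 features per day (= 4/18)
print(round(avg_throughput * 18, 1))  # 4.0 features over the 18 workdays
```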

This low throughput is caused partly by the fact that the team is new to agile (low productivity due to the learning curve), and partly by its own harmful behaviors:  it lost productivity due to frequently unavailable members (NT events) and due to impediments (IMP events) marked in Figure 1, which will be explained in Part 3 of the blog series.  It also lost productivity to multiplexing and multitasking, a silo mindset, no emphasis on finishing the work already started, long cycle times (an average cycle time of 17 days in an 18-workday sprint), and manual testing (explained in Part 3).  Because of its practice of mostly manual testing, the Struggling Agile Team could not do any regression testing within the sprint; this too will be explained in detail in Part 3 of the blog series.

When Features 5 through 8 are completed in the next sprint (if that were to happen), you could consider the WIP on every day of the two consecutive sprints for these 8 features plus any new features started and completed in the next sprint, along with the cycle times of all of those features over the two sprints, and calculate the average throughput per day over the two sprints by following the same procedure.

The Struggling Agile Team failed to recognize the need for limiting WIP and lowering cycle time.  For a given throughput, a large average WIP means a large average cycle time, which increases the risk of many features not being completed and accepted within the sprint timebox.    The Struggling Agile Team is simply oblivious to the implications of Little’s Law.
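To make the risk concrete: if the team’s throughput stays at roughly 4/18 features per day, its much larger all-features WIP implies a cycle time far beyond the sprint. This is a back-of-the-envelope extrapolation for illustration, not a strict application of Little’s Law (the sprint ended before the remaining features finished):

```python
# If throughput stays at ~4/18 features/day, what cycle time does the
# team's actual average WIP of 123/18 (~6.83 features) imply?
avg_throughput = 4 / 18   # features per day, from the completed features
avg_wip_all = 123 / 18    # average WIP across all 8 features, Days 2-19

implied_cycle_time = avg_wip_all / avg_throughput  # Little's Law, rearranged
print(implied_cycle_time)  # ~30.75 days, far beyond the 20-workday sprint
```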

The Struggling Agile Team attempted to do all the work for each feature (shown in Figure 1 as A: Analysis, D: Design, C: Coding, TD and TE: acceptance test development and execution by the QA tester, DF: Defect fixing, AT: Acceptance testing by the product owner) in the same single sprint timebox, as this workflow is simple to understand and follow.   However, this behavior created many adverse effects.  The answers to some analysis questions raised by team members were not available immediately; the product owner had to talk to customers or senior management to seek clarifications, and waiting for those answers left too little time to complete all development within a tight sprint timebox.  Because the user interface design could be tested only through working code, the feedback cycle was too long to correct UI issues in the same sprint.  Analysis and UI design work, and the associated clarifications and corrections, took too long for certain features to be completed within a single sprint.  What the Struggling Agile Team failed to recognize was that analysis and UI design work had become bottlenecks hampering the overall flow and increasing cycle time; and the team didn’t know what action to take to remove those bottlenecks.

Is your team experiencing the harmful legacy mindset and non-lean behaviors, and looking for solutions?   I would love to hear from you either here or by e-mail (Satish.Thatte@VersionOne.com) or on twitter (@smthatte).

Stay tuned for these future parts of the blog series:

Part 2: Understand healthy Agile Mindset and Lean Practices to adopt

Part 3: Understand how to use additional enablers of Agile Health

Part 4: Develop and implement your customized plan for adopting healthy agile-lean practices

 

Categories: Companies

Conventional HTML in ASP.NET MVC: Building larger primitives

Jimmy Bogard - Fri, 07/25/2014 - 16:52

Other posts in this series:

We’ve taken our individual form elements quite far now, adopting a variety of conventions in our output to remove the boring, duplicated code around deciding how to display those individual elements. We have everything we need to extend and alter the conventions around those single elements.

And that’s a good thing – we want to keep using those primitives around individual elements. But what if we peeked a little bit larger? Are there larger common patterns in play that we can start incorporating? Let’s look at the basic input template, from Bootstrap:

<div class="form-group">
    @Html.Label(m => m.Password)
    <div class="col-md-10">
        @Html.Input(m => m.Password)
        @Html.ValidationMessageFor(m => m.Password, "",
          new { @class = "text-danger" })
    </div>
</div>

Starting with the most basic element, the input tag, let’s look at gradually increasing scope of our building block:

  • Input
  • Input and validation
  • Input, validation in a div
  • Label and the div input
  • Form group

The tricky part here is that at each level, I want to be able to affect the resulting tags, some or all. Our goal here is to create building blocks for each level, so that we can establish a set of reusable components with sensible defaults along the way. This is a similar exercise as building a React class or Angular directive – establish patterns early and standardize your approach.

From the above list of items, I’ll likely only want to create blocks around increasing scopes of DOM elements, so let’s whittle this down to 3 elements:

  • Input
  • Input Block
  • Form Block

We already have our original Input method, let’s create the first input block.

Basic input block

Because we have our HtmlTag primitive, it’s trivial to combine elements together. This is a lot easier than working with strings or MvcHtmlStrings or the less powerful TagBuilder primitive. We’ll return the outer div tag, but we still need ways of altering the inner tags. This includes the Label, the Input, and Validator. Here’s our input block:

public static HtmlTag InputBlock<T>(this HtmlHelper<T> helper,
    Expression<Func<T, object>> expression,
    Action<HtmlTag> inputModifier = null,
    Action<HtmlTag> validatorModifier = null) where T : class
{
    inputModifier = inputModifier ?? (_ => { });
    validatorModifier = validatorModifier ?? (_ => { });

    var divTag = new HtmlTag("div");
    divTag.AddClass("col-md-10");

    var inputTag = helper.Input(expression);
    inputModifier(inputTag);

    var validatorTag = helper.Validator(expression);
    validatorModifier(validatorTag);

    divTag.Append(inputTag);
    divTag.Append(validatorTag);

    return divTag;
}

We create an Action<HtmlTag> for the input/validator tags. If someone wants to modify those two elements directly, instead of wonky anonymous-objects-as-dictionaries, we allow them full access to the tag via a callback, similar to jQuery. Next, we default those two modifiers to no-op if they are not supplied.

We then build up our input block, which consists of the outer div with the input tag and validator tag as children. In our view, we can replace the input block:

<div class="form-group">
    @Html.Label(m => m.Email)
    @Html.InputBlock(m => m.Email)
</div>
<div class="form-group">
    @Html.Label(m => m.Password)
    <div class="col-md-10">
        @Html.Input(m => m.Password)
        @Html.Validator(m => m.Password)
    </div>
</div>

Just to contrast, I included the non-input-blocked version. Now that we have this piece, let’s look at building the largest primitive, the form block.

Form input block

In the same tradition of Angular directives, React classes and Ember views, we can build larger components out of smaller ones, reusing the smaller components as necessary. This also ensures our larger component automatically picks up changes from the smaller ones. Here’s our FormBlock method:

public static HtmlTag FormBlock<T>(this HtmlHelper<T> helper,
    Expression<Func<T, object>> expression,
    Action<HtmlTag> labelModifier = null,
    Action<HtmlTag> inputBlockModifier = null,
    Action<HtmlTag> inputModifier = null,
    Action<HtmlTag> validatorModifier = null
    ) where T : class
{
    labelModifier = labelModifier ?? (_ => { });
    inputBlockModifier = inputBlockModifier ?? (_ => { });

    var divTag = new HtmlTag("div");
    divTag.AddClass("form-group");

    var labelTag = helper.Label(expression);
    labelModifier(labelTag);

    var inputBlockTag = helper.InputBlock(
        expression, 
        inputModifier, 
        validatorModifier);
    inputBlockModifier(inputBlockTag);

    divTag.Append(labelTag);
    divTag.Append(inputBlockTag);

    return divTag;
}

It’s very similar to our input block method, where we provide defaults for our initializers, create the outer div tag, build the child tags, apply child modifiers, and append those child tags to the outer div. Going back to our view, it becomes quite simplified:

@Html.FormBlock(m => m.Email)
@Html.FormBlock(m => m.Password)
<div class="form-group">
    <div class="col-md-offset-2 col-md-10">
        <div class="checkbox">
            @Html.Input(m => m.RememberMe).RemoveClasses()
            @Html.Label(m => m.RememberMe).RemoveClasses()
        </div>
    </div>
</div>

We have one outlier, our “remember me” checkbox, which I try to avoid at all costs. Let’s look at a couple of other examples. Here’s our register view:

@Html.ValidationSummary("", new { @class = "text-danger" })
@Html.FormBlock(m => m.Email)
@Html.FormBlock(m => m.Password)
@Html.FormBlock(m => m.ConfirmPassword)
<div class="form-group">
    <div class="col-md-offset-2 col-md-10">
        <input type="submit" class="btn btn-default" value="Register" />
    </div>
</div>

And here’s our reset password view:

@Html.ValidationSummary("", new { @class = "text-danger" })
@Html.Input(m => m.Code)
@Html.FormBlock(m => m.Email)
@Html.FormBlock(m => m.Password)
@Html.FormBlock(m => m.ConfirmPassword)
<div class="form-group">
    <div class="col-md-offset-2 col-md-10">
        <input type="submit" class="btn btn-default" value="Reset" />
    </div>
</div>

Much more simplified, with conventions around the individual input elements and HtmlHelper extensions around larger blocks. I would likely go an additional step and create an HtmlHelper extension around the buttons as well, since Bootstrap buttons have a very predictable, standardized set of HTML to build out.

We’ve removed a lot of our boilerplate HTML, but still allowed customization as needed. We also still expose the smaller components through InputBlock and Input, so that if the HTML varies a lot we can still keep the smaller building blocks. There’s still a bit of magic going on, but it’s only a click away to see what “FormBlock” actually does. Finally, what conventions really allow us to do is stop focusing on the minutiae of the HTML we have to include on every display/input block of HTML.

We remove bikeshedding arguments and standardize our approach, allowing us to truly home in on what is interesting, challenging or different. This is the true power of conventions – stopping pointless and wasteful arguments about things that truly don’t matter through a standardized, but extensible, approach.

In our next (and final) post, we’ll look at how we can extend these approaches for client-side templates built in Angular, Ember and more.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Why Agile Estimates Don’t Work – Part 1

TargetProcess - Edge of Chaos Blog - Thu, 07/24/2014 - 20:34

As you read this headline, many things probably came to mind. You might have recalled the many hours of meetings spent trying to come up with a time estimate for a project or a product release. Or you might have remembered planning poker sessions, which were intended as a spot-on, pragmatic business activity, but in the long run proved to be nothing more than a child’s game, because the estimates they produced differed by a factor of 2 or 3 from what the actual work really took. The sharp question I want to ask is: how many times did you feel, deep inside, that when they make you estimate (they being managers, or clients, or anyone else in charge), you end up with nothing but wasted time? Later in the project you still face the need to explain why your initial estimate proved so different from how things actually turned out, feeling guilty in the process, though probably none of it was actually your fault.

Don’t get me wrong. My initial intent was pure and well-behaved. I humbly wanted to write an article summing up the estimation techniques used in agile, describe their pros and cons, and provide people with a single-point reference for all those techniques. However, as I went deeper into the research, I was astounded. It turned out that there are many more articles and write-ups in orthodox agile circles on “How to estimate?” than on “Why estimate at all?” In the few cases where I saw some attempt at explaining “why?”, it struck me as incongruent and built on very loose logic. This looseness of the “why?” puts a big question mark on the validity of the “hows,” because the “how” is a product of the “why?” or the “what?” I’ve cited this in one of my previous articles, and I’ll repeat it one more time, because this axiom is universal and works for all things, life and project management alike: The hows will appear if the what becomes clear.

Let’s take the scalpel of pragmatism and dissect the faulty logic behind all things agile estimates.

What is an estimate?

Is it a measure of commitment? Or is it lazy talk? I tried to find some stats on the actual usefulness of estimates in story points, and how they’ve proven themselves valid in the bottom-line world of business. I found none. From my own experience, I know that estimates never work. I’ve seen this in project-by-project software development and in product development. A slightly modified quote from here:

It’s impossible to estimate something that is being built for the first time.

We never build the same feature twice.

The only viable example of valid use of estimates that comes to my mind goes as far back as the early 2000s, when people wanted simple e-commerce websites, or dating sites, or something of that kind. Having built a handful of such websites, software vendors were more likely to give their clients a realistic estimate of completion, because these websites didn’t carry the heavy baggage of residual debris, such as technical debt, bulky databases, or an octopus-like architecture that just spreads, rendering futile any attempt to commit to the bottom-line “get the sh..t done on time” stuff.

Next, if any attempt at estimating is futile, then why do most companies continue to play this game? It resembles courting, but unlike courting it promises no pleasure ahead, only an ever-increasing snowball of mess: feeling guilty, unproductive and unaligned with the only goal that matters, getting the job done well and on time.

[Image: “Sometimes the fool who rushes in gets the job done”]

Stay tuned for the answers and even sharper disclosures in the upcoming part 2 of this article.

Related articles:

5 Reasons Why You Should Stop Estimating User Stories

Joy Spring and Estimated Deadlines

2 Meta-Principles for User Interface Writing

UX: Why User Vision Design Matters

Why Agile Estimates Don’t Work – Part 2

Categories: Companies

Conventional HTML in ASP.NET MVC: Validators

Jimmy Bogard - Thu, 07/24/2014 - 20:33

Other posts in this series:

We’re marching towards our goal of creating conventional components, similar to React classes or Angular directives, but there’s one last piece we need to worry about before we get there: validators.

Validators can be a bit tricky, since they largely depend on the validation framework you’re trying to use. My validation framework of choice is still Fluent Validation, but you can use others as well. Since the data annotation validators are fairly popular, this example will use that as the starting point.

First, we have to figure out what our validation HTML should look like, and how it should affect the rest of our output. But we don’t have a “Validator” convention, we only have “Label”, “Editor”, and “Display” as our possibilities.

Luckily, underneath the covers these Label etc. conventions are only categories of elements, and we can easily create a new category of conventions for our own purposes. The Label/Editor/Display properties are only convenience methods. In our base convention class, let’s create a new category of conventions via a similar property:

public class OverrideHtmlConventions : DefaultHtmlConventions
{
    protected ElementCategoryExpression Validators
    {
        get
        {
            var builderSet = Library
                .For<ElementRequest>()
                .Category("Validator")
                .Defaults;
            return new ElementCategoryExpression(builderSet);
        }
    }

    public OverrideHtmlConventions()
    {
        Editors.Always.AddClass("form-control");
        Labels.Always.AddClass("control-label");
        Labels.Always.AddClass("col-md-2");
    }
}

This will allow us to append additional conventions to a Validator category of element requests. With a custom builder, we can create a default version of our validator span (without any validation messages):

public class SpanValidatorBuilder : IElementBuilder
{
    public HtmlTag Build(ElementRequest request)
    {
        return new HtmlTag("span")
            .AddClass("field-validation-error")
            .AddClass("text-danger")
            .Data("valmsg-for", request.ElementId);
    }
}

And add it to our default conventions:

public OverrideHtmlConventions()
{
    Validators.Always.BuildBy<SpanValidatorBuilder>();

    Editors.Always.AddClass("form-control");
    Labels.Always.AddClass("control-label");
    Labels.Always.AddClass("col-md-2");
}

The last part is actually generating the HTML. We need to do two things –

  • Determine if the current field has any validation errors
  • If so, build out the tag with the correct error text

The first part is actually fairly straightforward – ha ha just kidding, it’s awful. Getting validation messages out of ASP.NET MVC isn’t easy, but the source is available so we can just copy what’s there.

public static HtmlTag Validator<T>(this HtmlHelper<T> helper,
    Expression<Func<T, object>> expression) where T : class
{
    // MVC code don't ask me I just copied
    var expressionText = ExpressionHelper.GetExpressionText(expression);
    string fullHtmlFieldName 
        = helper.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(expressionText);

    if (!helper.ViewData.ModelState.ContainsKey(fullHtmlFieldName))
        return new NoTag();

    ModelState modelState = helper.ViewData.ModelState[fullHtmlFieldName];
    ModelErrorCollection modelErrorCollection = modelState == null 
        ? null 
        : modelState.Errors;
    ModelError error = modelErrorCollection == null || modelErrorCollection.Count == 0 
        ? null 
        : modelErrorCollection.FirstOrDefault(m => !string.IsNullOrEmpty(m.ErrorMessage)) 
        ?? modelErrorCollection[0];
    if (error == null)
        return new NoTag();
    // End of MVC code

    var tagGeneratorFactory = DependencyResolver.Current.GetService<ITagGeneratorFactory>();
    var tagGenerator = tagGeneratorFactory.GeneratorFor<ElementRequest>();
    var request = new ElementRequest(expression.ToAccessor())
    {
        Model = helper.ViewData.Model
    };

    var tag = tagGenerator.Build(request, "Validator");

    tag.Text(error.ErrorMessage);

    return tag;
}

Ignore what’s going on in the first section – I just grabbed it from the MVC source. The interesting part is at the bottom, where I grab a tag generator factory, create a tag generator, and build an HtmlTag using the Validator category for the given ElementRequest. This is what our Label/Editor/Display methods do underneath the covers, so I’m just emulating their logic. It’s a bit clunkier than I want, but I’ll amend that later.

Finally, after building the base validator tag, we set the inner text to the error message we determined earlier. We only use the first error message – too many and it becomes difficult to read. The validation summary can still be used for multiple errors. Our view is now:

<div class="form-group">
    @Html.Label(m => m.Email)
    <div class="col-md-10">
        @Html.Input(m => m.Email)
        @Html.Validator(m => m.Email)
    </div>
</div>
<div class="form-group">
    @Html.Label(m => m.Password)
    <div class="col-md-10">
        @Html.Input(m => m.Password)
        @Html.Validator(m => m.Password)
    </div>
</div>

Since we know that every validation message will need that “text-danger” class applied, applying it once in our conventions means we’ll never have to copy-paste that portion around again. And it’s much easier to develop against than the MVC templates, which, quite honestly, are difficult to work with.

We could go a step further and modify our Label conventions to pick up on the “Required” attribute and show an asterisk or bold required field labels.

Now that we have quite a bit of consistency in our groups of form elements, in the next post we’ll look at tackling grouping multiple tags into a single input/form component.


Categories: Blogs

Why Agile Estimates Don’t Work – Part 2

TargetProcess - Edge of Chaos Blog - Thu, 07/24/2014 - 19:55

In Why Agile Estimates Don’t Work – Part 1 I explained why estimates don’t work if someone sees them primarily as a commitment to timing. And, just as I expected, some aficionados rushed to educate me on the subject of estimates in agile: that they are not a commitment but, in short, a discussion of the chances and odds of how development will go, considering the challenges of the particular production environment. Probably some of those aficionados have accused me of the gravest sin ever, namely not reading Mike Cohn’s “Agile Estimating and Planning.” Relax, guys. I studied Cohn’s book long ago, and time after time I flip its pages to refresh things in my memory, not to mention other books, articles and from-the-trenches stories. My most reliable source for conclusions, however, is my work. If someone stays out of the trenches and theorizes about estimates, that is just theory. My view on estimates lies in a practical, pragmatic context: if they don’t work as a commitment to timing, but as a discussion of chances and odds, why do most companies continue to play this game? What makes them go on with it? Why is spending lots of time discussing chances valued more than action itself?

What Is an Estimate? (take 2)

I cited two options for answering this question in Part 1. Some people, likely not educated in agile theory, look at agile as the next silver bullet for completing projects on time, and they might wrongly view estimates as a promise of that. They genuinely believe that agile estimates will give them the much-sought-after reliable reference point for the time of completion. The second group of believers consciously accepts that estimating is a discussion of chances, a probability forecast. The burndown chart provides such a forecast based on velocity. Let’s refresh the classical definition of velocity in our memory, quoting from here: “The main idea behind velocity is to help teams estimate how much work they can complete in a given time period based on how quickly similar work was previously completed.” Does it ring any bells now? If we never build the same feature twice, just as you can’t step twice into the same river, then why should a velocity-based forecast be relied on? In general, this holds for all forecast techniques based on past performance, including forecast models. Yes, there are cases when a team’s work is monotonous, iteration in, iteration out, but from what I’ve been able to observe, that happens very rarely. Mostly, in any company and team, the tasks to be done and the challenges to be resolved are unique, for each iteration and for each release. You never know when something will pop up and kick this neat forecast in the butt.
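For reference, the velocity-based forecast the quoted definition describes amounts to a few lines of arithmetic; here is a minimal Python sketch with made-up numbers (the argument above is precisely that the “similar work” assumption behind it rarely holds):

```python
import math

# Hypothetical numbers: story points completed in the last four sprints.
past_velocities = [21, 18, 24, 19]
avg_velocity = sum(past_velocities) / len(past_velocities)  # 20.5 points/sprint

backlog_points = 123  # remaining scope, also hypothetical
sprints_needed = math.ceil(backlog_points / avg_velocity)

print(avg_velocity, sprints_needed)  # 20.5 6
```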

The Devil Is In…

.. not only in the details. The second most common habitat of the said devil, right after the details, is human nature itself. Nothing explains this better than the good old Parkinson’s Law:

“Work expands so as to fill the time available for its completion.” – C. Northcote Parkinson

Yes, indeed. Having all the time in the world is a loose notion. Either you have time, or you don’t. Either you have the guts and the sixth sense to define what should be included in the minimum viable product, for instance, or you don’t. Let’s not forget that no one cares about software development for its own sake, except the software developers who view their work as a craft. We do things for the market. For customers, and they don’t care about the development kitchen constraints, challenges and brilliant solutions. The same holds true for UX.

Now, how does this reasoning fit into the subject of estimates, someone might ask? Here’s the astounding truth. Teams and companies start playing around and messing with estimate rituals when they have some extra fat to burn. There’s no room for wasteful activities in a bootstrapped, mission-oriented, do-or-die start-up squad of several people. If you are in such a team and tempted to start a planning poker session, don’t do it. Rather than waste your time playing with probabilities, get some real work done. Write code, do a UI sketch, bring clarity to the work of your team. Some mathematical forecast model surely has it that a brick will fall on your head one day. But you’d hardly waste your time estimating how many more bottles of champagne are likely to slip out of a torn plastic bag when one of those bottles has already hit the concrete and there are 3 more in the bag. You’d rush to catch the rest of the bottles, not let them slip, right? Or would you freeze and estimate the probability of all of the bottles being shattered? This reminds me of the fact that some business people who are skeptical about shamanism, astrology and other such things devotedly indulge in what are, in essence, shamanic rituals with estimates. Come on, an estimate of completion based on a burndown or a planning poker session is about as valid as an astrological forecast. There’s no big difference. Either you’re “fat” enough as an organization to afford wasteful rituals, or you’re not. In fact, even in large companies that seem so safe and secure, there’s always the bottom-line point of “do or die”. That’s what the recent story of massive job cuts at Microsoft proves. Ritual is waste. If there’s time left for rituals, it is a sign of unhealthy fat. Burn it. If a workgroup discusses development, there’s no need to wrap it in the ritual of estimating, because when a discussion turns into a draining debate over “how probable” a timeframe is, the work suffers.
As someone said, there’s a limited number of brains available to do the job, and they should be used efficiently. A rough estimated timeframe will suffice; there’s no use trying to gauge the likelihood of it holding when there’s real work to be done.

Worship the Idol: How Do I Tell My Higher-Ups ..?

As life has it, however, most of us, being employees in fat organizations, have to cope with the fallacy around estimates, and, try as you might, a mere human being cannot move a mountain. There’s no way to persuade a higher-up non-developer manager, or a client, or a stakeholder of the vanity of estimates. That’s why people go on playing games, as they attend to those who expect a feature or a project to be done on time, as derived from estimate-related shamanic rituals. And that’s where another interesting booster for evolution is hiding. Luckily — and, yes, I mean it, luckily — there are more non-developers in positions of authority than developers. There’s always a litmus-test moment when someone with a developer background (a project manager, team leader, or someone in middle management) meets a non-developer stakeholder. Why do I call it a booster for evolution? If every stakeholder were a developer, they would probably have ended up crying on each other’s shoulders about how difficult life is and how impossible it is to commit to any timeframe. Having to deal with a non-developer stakeholder about a deadline is stimulating. If you’ve been thinking that something has changed since hunter-gatherer times, I have bad news for you. The seeming “comfort” disguises the basic instinct to act. You either act, or you rot. There’s no other option. No one cares for reactive rants. It’s your actions that define you. It’s your choice to agree to play the estimate game by the rules and accept it as a given, or to quit and find a job where they will not f…k your brain with estimates. If you choose to deal with ruthless stakeholders who are oh-so-not-understanding of how hard the life of a true software craftsman is, move the conversation from the level of rant to the level of action. Use every opportunity to spread awareness of the challenges that software development entails, and why this domain is un-deadline-ifiable by nature.
Amazingly, there are so many people in this world who sincerely believe that an estimate is a credible measure of a completion date. Write articles, speak at conferences, join the “no estimates” movement. Fix the gap between what you know and what they know. If everyone has their say, this world will become a better place, with fewer projects and less software screwed up. And even if you still have to deal with the waste of estimates, you’ll feel better inside, because you’ll be doing all you can to change things, instead of ranting.

Enough of thought boosters (or busters?). In Part 3 of the series I will outline some techniques, commonly regarded as estimation techniques, that might work as a tool for workgroup discussions in some teams. Keep the waste-value balance in mind, though.

Related articles:

Why Agile Estimates Don’t Work – Part 1

Categories: Companies

Compressed Backlog Refinement

Leading Agile - Mike Cottmeyer - Thu, 07/24/2014 - 15:38

Lots has been written about backlog refinement (what we in the US used to call grooming), and a lot of it is good. There is plenty to say about this practice. However, I’ve not seen any treatment of whether you should do it differently for your initial backlog. Therefore, I’m setting out in this post and the next to answer these questions:

  • How do you refine and estimate the initial backlog?
  • How do you refine and estimate additional stories or subsequent backlogs for work that comes along later?
  • Wouldn’t those approaches be the same?

Usually, when I spin up a new agile team, they don’t have an initial backlog. Nevertheless, the parent organization just about always knows what they want the team to do. The vision, mission or charter is there, though it might not be communicated clearly. Sometimes there are documented requirements in what is often called a PRD, MRD, or BRD. Whatever the case, if there is no backlog of user stories, I typically hold a story writing workshop to get the ball rolling. Now that we have a crude backlog, it needs to be refined.

This post is not about where the ideas come from or even about how to convert what already exists into user stories. The focus here is about how to conduct that initial refinement meeting, then what might be different in subsequent refinement sessions.

For the initial refinement, I recommend compressed refinement. After that, I recommend continuous refinement.

Compressed Backlog Refinement

When refining that initial backlog, often after spinning up a new agile team, there is usually someone in the chain of command wondering, “Why isn’t anyone coding yet?” There is a great deal of pressure to just get started already. The team is probably anxious to get started too, unless they are still trying to wrap up the prior project. At the same time, we want to know when we expect to get done with this new work, when we can get the next release out, or how much of the backlog we’ll have done when the release date rolls around. It’s not sufficient to tell my clients they’ll get what they get when they get it. My clients need predictability. We also need to refine the backlog to identify dependencies, risks and items with long lead times. Meanwhile, it takes time to create, refine and estimate a backlog, to form teams, to do the sprint-0 stuff and get rolling. We want to do a good job refining the backlog, but we can’t take forever to get it done.

Therefore, we do a compressed approach to backlog refinement.

Stories are written, we have a backlog of crude stories, and we need to refine them all quickly. In the compressed refinement approach, we’ll have a series of refinement meetings; these could be half-day to full-day sessions for the better part of a week. The whole team attends. Please bring in lunch, preferably a good lunch. You may find yourself updating stories in your agile management tool in real time during the meeting. That’s generally unproductive, so strike the right balance. Some edits might have the whole team collaboratively involved, arguing over wording and meaning. Other edits can be done by someone else in the room on the side. Of course, other edits will be made offline.

This compressed approach tends to be very detailed and slow because we’re going from crude stories to well refined stories during the meeting. Since we’ve talked about them in such depth, we might as well estimate with planning poker as we go.

Because our story discussions are so detailed when using this approach, you might be able to get by with an abbreviated sprint planning meeting for the first 2 or 3 sprints.

Punctuated Refinement

Organizations that operate in project mode rather than product mode tend to do this compressed refinement approach over and over for each project that comes along. They spend little time getting the backlog ready for the next project while the current project is still underway.

This is backlog refinement done in a big batch. This is rolling wave planning with big tidal waves. That’s usually not the best approach. After this first compressed refinement, let’s move on to continuous refinement.

Not Talking About Progressive Elaboration

I’m not talking about progressive elaboration, or even elaboration. Elaboration in an agile sense is more about decomposing epics into features and features into user stories, and then providing the details for the stories (the acceptance criteria) and their visual specifications. Adding the term Progressive to Elaboration means we do it more or less just in time, over the course of the project.

Here’s how we may do both Progressive Elaboration and Compressed Refinement: we may still progressively elaborate epics and features, then have an intense period of backlog refinement, kick off a project, yet still do more progressive elaboration of user stories, their acceptance criteria, and visual specifications over time. What I’m writing about is whether we refine and estimate most of our stories in a compressed period of time, say a week, maybe two, or whether we do that refinement over time and further in advance.

Recommendation

There are some downsides to this. A compressed refinement does not allow for the long lead times that may be necessary to do some research, figure some things out, learn some new technology, put the necessary architectural runway in place, work out contracts with third parties, or get other groups to build out dependent pieces. Also, we sometimes need additional time for things to soak in. We need to take a step back and look again at the big picture. Because of these things, we may end up with a poorly refined backlog or a poor plan.

For these reasons, I recommend making the switch from project thinking and punctuated refinement to product thinking and continuous refinement. That’s the topic of my next post.

The post Compressed Backlog Refinement appeared first on LeadingAgile.

Categories: Blogs

Make Agile 2014 PEACHy

Agile Management Blog - VersionOne - Thu, 07/24/2014 - 00:20

This year, I’m excited to be returning to the Agile Conference. Having organized and participated in the conference from 2001-2005, my attendance became sporadic as the beginning of the youth football season (my other coaching gig) won out over the annual agile pilgrimage. Preparing for the show brings back many memories, but also some lessons learned that I wanted to share for anyone new to the conference or looking to get a little more out of it. In true agile form, I’ve made a pithy acronym. Follow these suggestions and after the conference, you’ll be saying, “That’s a PEACH, hon!”

Prepare
The Agile Alliance Agile2014 Pre-Conference Planner is a great resource for planning your conference activities. Obviously, you’ll get the most out of the conference if you review the sessions and target the ones you don’t want to miss (MoSCoW, anyone?). For your most important sessions, check out the room size. As part of a commitment to an open learning environment, the conference does not reserve seats, nor limit your ability to participate. The most popular sessions will fill their rooms, leaving some attendees standing or unable to attend. In your conference guide, you can get a sense of the room size. If your must-see session is in a smaller room, GET THERE EARLY!!

There is great content to choose from. Pick your targets, but include one or two sessions that are out of your box and may give you empathy for someone else’s kind of work. If you need ideas, check out these sessions being given by the nearly 20 speakers from VersionOne and our partners:

VersionOne: Matt Badgely, John Krewson, Steve Ropa
agile42: Richard Dolman, Martin Kearns
Cognizant: Raje Kailasam
DavidBase: Jeffery Davidson
DevJam: Matt Barcomb, David Hussman
Improving: Ken Howard
LeadingAgile: Mike Cottmeyer, Derek Huether
Lithespeed: David Bulkin
SolutionsIQ: Daniel Gullo, Tim Myer, Dhaval Panchal, Charlie Rudd, John Rudd

Engage
The conference can be overwhelming, with tons of people, tons of sessions, tons of exhibitors (read: free stuff!), tons of activities. Sensory overload. So pick your focus, but try at least one thing that will stretch you a little, socially and intellectually.

If you are new to the conference, I highly recommend the First Time Attendee Orientation. At that session you’ll see everyone else who is wondering about the same things you are, and then some. The Early Registration Meet & Mingle and Ice Breaker Reception (Moscow mule, anyone?) is another great way to start the week. Between sessions, look up from your screen long enough to meet some new folks and hang out. Dinner with New Agile Friends is a great way to meet people, especially if you are not attending with a group of colleagues. And, of course, the Conference Party should not be missed.

Don’t miss the Sponsors reception and the booths throughout the week. Check out our @VersionOne mobile photo challenge on Twitter, where you can win daily prizes or a GoPro HERO3+.

Adapt
You’ll learn new things about the conference as you go through it. Your time is the most valuable thing to manage, with perhaps sleep a close second. Use the Open Space Law of Two Feet to adapt what you do. Is a session not quite what you expected? Don’t be shy; get up and move on. Take the time to find another session, hang out, catch up on those incessant emails and posts, or just take a breather (or a nap).

Collaborate
You develop deeper understanding by interacting with others. The conference favors interactive sessions, but there are many other ways to share and learn. Be sure to swing by the Open Jam and catch the latest buzz. The Scrum Alliance is holding a Coaches Clinic each day where you can share ideas and challenges with Scrum experts. Our conference organizers, the Agile Alliance, will have the Agile Alliance Lounge open each day.

And if you see any of us VersionOne folks, in our sporty shirts, please stop us, introduce yourself, and mention what a wonderful blog post this is. We love agile and learning by sharing with others.

Hangout
The most valuable experiences I hear about are the interactions people have hanging out in the hallways. These by-chance encounters lead to amazing insights and relationships. If you are an extrovert, this will come naturally. If not, here are a few tips:

  • The Agile conference has always been known for its community atmosphere and approachable speakers and attendees. It’s kinda who we are. So, realize that you are in an environment that specifically values Individuals and Interactions.
  • It may be awkward at first. Yes, it can feel very weird to approach someone you don’t know and initiate a conversation. Work to overcome that fear (for the XPers and Scrummers, this is living up to your Courage value). Asking about a favorite session, or basics like what someone does in their job, are simple ways to get a conversation started.
  • Don’t give up. Not every encounter will be amazing. If a conversation does not take off, no worries; the next one probably will.
  • Approach a group. If you hear a small group talking about a topic of interest to you, don’t hesitate to approach, express your interest in the topic and join in. You’ll be well received and add another interesting perspective to the dialogue.
  • You probably know more than you think. Many attendees have told me after the conference, “you know, from our experience, we know as much as most of the people at the conference.” If you are feeling intimidated that others seem to know more than you, realize that either a) they don’t and are just more comfortable talking about what they know, or b) they were in your shoes just a few years ago and are passionate about helping you learn what they’ve learned.

Do you have recommendations for conference goers? Please share them by commenting below.

Have a great conference. I hope our paths cross during the week.

Categories: Companies

Java: Determining the status of data import using kill signals

Mark Needham - Thu, 07/24/2014 - 00:20

A few weeks ago I was working on the initial import of ~ 60 million bits of data into Neo4j and we kept running into a problem where the import process just seemed to freeze and nothing else was imported.

It was very difficult to tell what was happening inside the process – taking a thread dump merely informed us that it was attempting to process one line of a CSV file and was somehow unable to do so.

One way to help debug this would have been to print out every single line of the CSV as we processed it and then watch where it got stuck, but this seemed a bit overkill. Ideally we wanted to print out the line being processed only on demand.

As luck would have it, we can do exactly this by sending a kill signal to our import process and having it print out where it has got up to. We had to make sure we picked a signal which wasn’t already being handled by the JVM, and decided to go with ‘SIGTRAP’, i.e. kill -5 [pid].

We came across a neat blog post that explained how to wire everything up and then created our own version:

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import sun.misc.Signal;
import sun.misc.SignalHandler;
 
class Kill3Handler implements SignalHandler
{
    private final AtomicInteger linesProcessed;
    private final AtomicReference<Map<String, Object>> lastRowProcessed;
 
    public Kill3Handler( AtomicInteger linesProcessed, AtomicReference<Map<String, Object>> lastRowProcessed )
    {
        this.linesProcessed = linesProcessed;
        this.lastRowProcessed = lastRowProcessed;
    }
 
    // Invoked when the registered signal (SIGTRAP in our case) is received
    @Override
    public void handle( Signal signal )
    {
        System.out.println("Last Line Processed: " + linesProcessed.get() + " " + lastRowProcessed.get());
    }
}

We then wired that up like so:

AtomicInteger linesProcessed = new AtomicInteger( 0 );
AtomicReference<Map<String, Object>> lastRowProcessed = new AtomicReference<>(  );
Kill3Handler kill3Handler = new Kill3Handler( linesProcessed, lastRowProcessed );
Signal.handle(new Signal("TRAP"), kill3Handler);
 
// as we iterate each line we update those variables
 
linesProcessed.incrementAndGet();
lastRowProcessed.getAndSet( properties ); // properties = a representation of the row we're processing
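One small convenience worth adding (my own suggestion, not from the original post): have the import process print its own PID at startup, so you know what to send kill -5 to. This sketch assumes a HotSpot-style JVM, where the runtime MXBean name has the form "<pid>@<hostname>"; that format is implementation-specific, not guaranteed by the spec.

```java
import java.lang.management.ManagementFactory;

public class PidPrinter {
    // Extracts the PID portion of a "<pid>@<hostname>" MXBean name.
    static String pidFrom(String mxBeanName) {
        return mxBeanName.split("@")[0];
    }

    public static void main(String[] args) {
        String name = ManagementFactory.getRuntimeMXBean().getName();
        System.out.println("Send the signal with: kill -5 " + pidFrom(name));
    }
}
```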

This worked really well for us and we were able to work out that we had a slight problem with some of the data in our CSV file which was causing it to be processed incorrectly.

We hadn’t been able to see this by visual inspection since the CSV files were a few GB in size. We’d therefore only skimmed a few lines as a sanity check.

I didn’t even know you could do this, but it’s a neat trick to keep in mind – I’m sure it will come in useful again.

Categories: Blogs

Kanban Litmus Test

Over the years, there has been a commonly recurring question: "Are we doing Kanban or not?" I've always answered that the answer isn't based on practice adoption but is rather a question of intent. Do you have the intent to pursue evolutionary improvement of service delivery using the Kanban Method? If so, then you are doing Kanban; if not, then you are not.


Categories: Companies

Let’s Shake Things up at Agile 2014

BigVisible Solutions :: An Agile Company - Wed, 07/23/2014 - 22:06

Let’s shake things up a bit.

No more stuffy sales pitches or inky flyers. Let’s get loud.

Drop by, escape the ordinary.

Be Heard

Subscribe Now to Follow the Action

The post Let’s Shake Things up at Agile 2014 appeared first on BigVisible Solutions.

Categories: Companies

Reinvigorating an existing Kanban implementation with STATIK

This spring I noticed [1] that the rather clunky name we give to the implementation process for Kanban that we teach in our classes and private workshops has a catchy acronym, STATIK. Just as we hoped, this is turning out to be much "stickier" than "the systems thinking approach to introducing Kanban" spelled out in full; we find that not only do we refer to it a lot, so do our clients.

 


Categories: Companies

Conventional HTML in ASP.NET MVC: Data-bound elements

Jimmy Bogard - Wed, 07/23/2014 - 19:03

Other posts in this series:

We’re now at the point where our form elements replace the existing templates in MVC and extend to the HTML5 form elements, but there’s still something missing. I skipped over the dreaded DropDownList, with its wonky SelectListItem objects.

Drop down lists can be quite a challenge. Typically in my applications I have drop down lists based on a few known sets of data:

  • Static list of items
  • Dynamic list of items
  • Dynamic contextual list of items

The first one is an easy target, solved with the previous post and enums. If a list doesn’t change, just create an enum to represent those items and we’re done.

The other two are more of a challenge. Typically what I see is attaching those items to the ViewModel or ViewBag, along with the actual model. It’s awkward, and it combines two separate concerns. “What have I chosen” is a different concern than “What are my choices”. Let’s tackle those last two choices separately.

Dynamic lists

Dynamic lists of items typically come from a persistent store. An administrator goes to some configuration screen to configure the list of items, and the user picks from this list.

Common here is that we’re building a drop down list based on a set of known entities. The definition of the set doesn’t change, but its contents might.

On our ViewModel, we’d handle this in our form post with an entity:

public class RegisterViewModel
{
    [Required]
    public string Email { get; set; }

    [Required]
    public string Password { get; set; }

    public string ConfirmPassword { get; set; }

    public AccountType AccountType { get; set; }
}

We have our normal registration data, but the user also gets to choose their account type. The values of the account type, however, come from the database (and we use model binding to automatically bind up in the POST the AccountType you chose).

From a convention point of view, if we have a model property that’s an entity type, let’s just load up all the entities of that type and display them. If you have an ISession/DbContext, this is easy. But wait, our view shouldn’t be hitting the database, right?

Wrong.

Luckily for us, our conventions let us easily handle this scenario. We’ll take the same approach as our enum drop down builder, but instead of using type metadata for our list, we’ll use our database.

Editors.Modifier<EntityDropDownModifier>();

// Our modifier
public class EntityDropDownModifier : IElementModifier
{
    public bool Matches(ElementRequest token)
    {
        return typeof (Entity).IsAssignableFrom(token.Accessor.PropertyType);
    }

    public void Modify(ElementRequest request)
    {
        request.CurrentTag.RemoveAttr("type");
        request.CurrentTag.TagName("select");
        request.CurrentTag.Append(new HtmlTag("option"));

        var context = request.Get<DbContext>();
        var entities = context.Set(request.Accessor.PropertyType)
            .Cast<Entity>()
            .ToList();
        var value = request.Value<Entity>();

        foreach (var entity in entities)
        {
            var optionTag = new HtmlTag("option")
                .Value(entity.Id.ToString())
                .Text(entity.DisplayValue);

            if (value != null && value.Id == entity.Id)
                optionTag.Attr("selected");

            request.CurrentTag.Append(optionTag);
        }
    }
}

Instead of going to our type system, we query the DbContext to load all entities of that property type. We built a base entity class for the common behavior:

public abstract class Entity
{
    public Guid Id { get; set; }
    public abstract string DisplayValue { get; }
}

This goes into how we build our select element, with the display value shown to the user and the ID as the value. With this in place, the drop down in our view is simply:

<div class="form-group">
    @Html.Label(m => m.AccountType)
    <div class="col-md-10">
        @Html.Input(m => m.AccountType)
    </div>
</div>

And any entity-backed drop-down in our system requires zero extra effort. Of course, if we needed to cache that list we would do so but that is beyond the scope of this discussion.

So we’ve got dynamic lists done, what about dynamic lists with context?

Dynamic contextual list of items

In this case, we actually can’t really depend on a convention. The list of items is dynamic, and contextual. Things like “display a drop down of active users”. It’s dynamic since the list of users will change and contextual since I only want the list of active users.

It then comes down to the nature of our context. Is the context static, or dynamic? If it’s static, then perhaps we can build some primitive beyond just an entity type. If it’s dynamic, based on user input, that becomes more difficult. Rather than trying to focus on a specific solution, let’s take a look at the problem: we have a list of items we need to show, and have a specific query needed to show those items. We have an input to the query, our constraints, and an output, the list of items. Finally, we need to build those items.

It turns out this isn’t really a good choice for a convention – because a convention doesn’t exist! It varies too much. Instead, we can build on the primitives of what is common, “build a name/ID based on our model expression”.

What we wound up with is something like this:

public static HtmlTag QueryDropDown<T, TItem, TQuery>(this HtmlHelper<T> htmlHelper,
    Expression<Func<T, TItem>> expression,
    TQuery query,
    Func<TItem, string> displaySelector,
    Func<TItem, object> valueSelector)
    where TQuery : IRequest<IEnumerable<TItem>>
{
    var expressionText = ExpressionHelper.GetExpressionText(expression);
    ModelMetadata metadata = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData);
    var selectedItem = (TItem)metadata.Model;

    var mediator = DependencyResolver.Current.GetService<IMediator>();
    var items = mediator.Send(query);
    var select = new SelectTag(t =>
    {
        t.Option("", string.Empty);
        foreach (var item in items)
        {
            var htmlTag = t.Option(displaySelector(item), valueSelector(item));
            if (item.Equals(selectedItem))
                htmlTag.Attr("selected");
        }

        t.Id(expressionText);
        t.Attr("name", expressionText);
    });

    return select;
}

We represent the list of items we want as a query, then execute the query through a mediator. From the results, we specify what should be the display/value selectors. Finally, we build our select tag as normal, using an HtmlTag instance directly. The query/mediator piece is the same as I described back in my controllers on a diet series, we’re just reusing the concept here. Our usage would look something like:

<div class="col-md-10">
    @Html.QueryDropDown(m => m.User,
        new ActiveUsersQuery(),
        t => t.FullName,
        t => t.Id)
</div>

If the query required contextual parameters – not a problem, we simply add them to the definition of our request object, the ActiveUsersQuery class.

So that’s how we’ve tackled dynamic lists of items. Depending on the situation, it requires conventions, or not, but either way the introduction of the HtmlTag library allowed us to programmatically build up our HTML without resorting to strings.

We’ve tackled the basics of building input/output/label elements, but we can go further. In the next post, we’ll look at building higher-level components from these building blocks that can incorporate things like validation messages.


Categories: Blogs
